Tutorial 2: Delay Based Effects

In this section, we define a function that creates an echo effect on an input audio signal. An echo can be modeled as an attenuated, delayed copy of the original signal added to itself, which makes it an FIR filtering operation. The FIR filter used here has impulse response [1, 0, ..., 0, a], where the number of zeros is delayed_sec*Fs - 1, delayed_sec is the number of seconds for the echo to arrive, Fs is the sampling rate, and a is the echo attenuation with 0 < a <= 1.
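To make the filter concrete, here is a minimal sketch in Python with NumPy (the tutorial's own code is in the linked tutorial2 source; the function name and defaults below are illustrative). The impulse response has a 1 at the start, delayed_sec*Fs - 1 zeros, and a at the end, and is applied by convolution:

```python
import numpy as np

def echo(x, Fs, delayed_sec=0.25, a=0.6):
    """Single echo: y[n] = x[n] + a * x[n - D], with D = delayed_sec * Fs."""
    D = int(delayed_sec * Fs)
    # FIR impulse response [1, 0, ..., 0, a]: a 1, then D - 1 zeros, then a
    h = np.zeros(D + 1)
    h[0] = 1.0
    h[D] = a
    return np.convolve(x, h)
```

Convolving with this sparse impulse response is the simplest formulation; for long delays, a real implementation would instead index into a delay line rather than convolve with mostly zeros.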

We can also make the amount of delay a function of time to create more interesting effects, such as a flanger. The following function implements the flanger effect using a triangular function for the time-varying delay, shown in the figure below.
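The function itself lives in the linked tutorial2 source; a rough sketch of how such a flanger can be written is below (parameter names, defaults, and the integer-sample delay are my assumptions, not the tutorial's exact code). The delay sweeps up and down following a triangle wave:

```python
import numpy as np

def flanger(x, Fs, max_delay_ms=3.0, rate_hz=0.5, a=0.7):
    """Flanger sketch: y[n] = x[n] + a * x[n - d[n]], d[n] a triangle wave."""
    max_d = int(max_delay_ms * 1e-3 * Fs)
    n = np.arange(len(x))
    # Triangle wave in [0, 1] oscillating at rate_hz
    phase = (n * rate_hz / Fs) % 1.0
    tri = 2.0 * np.abs(phase - 0.5)
    # Round the swept delay to whole samples (a fuller version would interpolate)
    d = np.round(tri * max_d).astype(int)
    y = x.astype(float).copy()
    idx = n - d
    valid = idx >= 0
    y[valid] += a * x[idx[valid]]
    return y
```

Rounding the delay to integer samples causes slight zipper artifacts; fractional-delay interpolation between neighboring samples gives a smoother sweep.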

Source code and test audio file: tutorial2

Recent Posts

Deformable Microphone Arrays

This post describes our paper “Motion-Robust Beamforming for Deformable Microphone Arrays,” which won the best student paper award at WASPAA 2019.

When our team designs wearable microphone arrays, we usually test them on our beloved mannequin test subject, Mike A. Ray. With Mike’s help, we’ve shown that large wearable microphone arrays can perform much better than conventional earpieces and headsets for augmented listening applications, such as noise reduction in hearing aids. Mannequin experiments are useful because, unlike a human, Mike doesn’t need to be paid, doesn’t need to sign any paperwork, and doesn’t mind having things duct-taped to his head. There is one major difference between mannequin and human subjects, however: humans move. In our recent paper at WASPAA 2019, which won a best student paper award, we described the effects of this motion on microphone arrays and proposed several ways to address it.

Beamformers, which use spatial information to separate and enhance sounds from different directions, rely on precise distances between microphones. (We don’t actually measure those distances directly; we measure relative time delays between signals at the different microphones, which depend on distances.) When a human user turns their head – as humans do constantly and subconsciously while listening – the microphones near the ears move relative to the microphones on the lower body. The distances between microphones therefore change frequently.
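As a toy illustration of how sensitive those relative delays are to geometry (the positions and numbers below are made up for illustration, not taken from the paper):

```python
import numpy as np

def pairwise_delay(mic_a, mic_b, source, c=343.0):
    """Time-difference of arrival (seconds) of a point source between two mics."""
    return (np.linalg.norm(source - mic_a) - np.linalg.norm(source - mic_b)) / c

ear   = np.array([0.0, 0.0])    # ear-level microphone
chest = np.array([0.0, -0.4])   # body-worn microphone 40 cm lower
src   = np.array([2.0, 0.0])    # sound source 2 m in front

d0 = pairwise_delay(ear, chest, src)
# A 2 cm shift of the ear microphone (a slight head turn) changes the
# relative delay by tens of microseconds -- close to one sample at 16 kHz.
d1 = pairwise_delay(ear + np.array([0.02, 0.0]), chest, src)
```

Even this small motion shifts the inter-microphone delay by roughly a sample period, which is why a beamformer designed for a fixed array shape degrades when the array deforms.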

In a deformable microphone array, microphones can move relative to each other.

Microphone array researchers have studied motion before, but it is usually the sound source that moves relative to the entire array. For example, a talker might walk around the room. That problem, while challenging, is easier to deal with: we just need to track the direction of the user. Deformation of the array itself – that is, relative motion between microphones – is more difficult because there are more moving parts and the changing shape of the array has complicated effects on the signals. In this paper, we mathematically analyzed the effects of deformation on beamformer performance and considered several ways to compensate for it.

  1. EchoXL
  2. Cooperative Listening Devices
  3. Massive Distributed Microphone Array Dataset
  4. Studio-Quality Recording Devices for Smart Home Data Collection
  5. Sound Source Localization
  6. Augmented Listening at Engineering Open House 2019
  7. Capturing Data From a Wearable Microphone Array
  8. Talking Heads
  9. How loud is my audio device? : Thinking about safe listening through the new WHO-ITU Standard