Tutorial 3: Amplitude Clipping Effects

In this section, I implemented functions to model clipping effects. I modeled two types of clipping. The distortion function simply multiplies the input signal by a specified amplification factor and saturates all samples above a certain threshold; its transfer function has a sharp edge at the threshold.
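The original code for this tutorial is linked below rather than shown inline; a minimal sketch of hard clipping along these lines, written in Python with NumPy (the function name and default parameters here are my own assumptions, not the tutorial's), might look like:

```python
import numpy as np

def distortion(x, gain=10.0, threshold=1.0):
    """Hard clipping: amplify the input, then saturate every sample
    whose magnitude exceeds the threshold."""
    y = gain * np.asarray(x, dtype=float)
    return np.clip(y, -threshold, threshold)
```

Because the saturation is an abrupt `clip`, the transfer function is piecewise linear with a sharp corner at the threshold, which is what produces the harsh harmonics characteristic of distortion.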

The overdrive function uses a soft-clipping approach, in which the output amplitude approaches an asymptotic value as the input amplitude increases.
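The tutorial does not reproduce the exact soft-clipping curve here; a common choice with the asymptotic behavior described above is the hyperbolic tangent, sketched below (function name and gain value are illustrative assumptions):

```python
import numpy as np

def overdrive(x, gain=5.0):
    """Soft clipping: tanh maps the amplified signal smoothly into (-1, 1),
    approaching the asymptotes +/-1 as the input amplitude grows."""
    return np.tanh(gain * np.asarray(x, dtype=float))
```

Unlike hard clipping, the transfer function has no corner: small signals pass through nearly linearly, while large signals are compressed gradually, giving the warmer character associated with overdrive.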

Source code and sample audio: tutorial3

Recent Posts

Deformable Microphone Arrays

This post describes our paper “Motion-Robust Beamforming for Deformable Microphone Arrays,” which won the best student paper award at WASPAA 2019.

When our team designs wearable microphone arrays, we usually test them on our beloved mannequin test subject, Mike A. Ray. With Mike’s help, we’ve shown that large wearable microphone arrays can perform much better than conventional earpieces and headsets for augmented listening applications, such as noise reduction in hearing aids. Mannequin experiments are useful because, unlike a human, Mike doesn’t need to be paid, doesn’t need to sign any paperwork, and doesn’t mind having things duct-taped to his head. There is one major difference between mannequin and human subjects, however: humans move. In our recent paper at WASPAA 2019, which won a best student paper award, we described the effects of this motion on microphone arrays and proposed several ways to address it.

Beamformers, which use spatial information to separate and enhance sounds from different directions, rely on precise distances between microphones. (We don’t actually measure those distances directly; we measure relative time delays between signals at the different microphones, which depend on distances.) When a human user turns their head – as humans do constantly and subconsciously while listening – the microphones near the ears move relative to the microphones on the lower body. The distances between microphones therefore change frequently.
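The dependence of inter-microphone time delays on geometry can be sketched as follows; the microphone positions and source direction are illustrative assumptions, not values from the paper:

```python
import numpy as np

c = 343.0  # speed of sound in air, m/s

mic_ear = np.array([0.0, 0.0, 0.0])     # hypothetical mic near the ear
mic_chest = np.array([0.0, -0.3, 0.1])  # hypothetical mic on the chest

# For a far-field source arriving from unit direction d, the relative time
# delay between two microphones is the projection of their separation onto
# d, divided by the speed of sound.
d = np.array([0.0, -1.0, 0.0])  # source direction (unit vector)
tdoa = (mic_chest - mic_ear) @ d / c  # time difference of arrival, seconds

# If the wearer moves so that the chest mic shifts 5 cm toward the source,
# the delay the beamformer relies on changes accordingly.
mic_chest_moved = mic_chest + np.array([0.0, -0.05, 0.0])
tdoa_moved = (mic_chest_moved - mic_ear) @ d / c
```

Even this few-centimeter shift changes the delay by a substantial fraction of a wavelength at speech frequencies, which is why array deformation degrades beamformer performance.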

In a deformable microphone array, microphones can move relative to each other.

Microphone array researchers have studied motion before, but it is usually the sound source that moves relative to the entire array. For example, a talker might walk around the room. That problem, while challenging, is easier to deal with: we just need to track the direction of the user. Deformation of the array itself – that is, relative motion between microphones – is more difficult because there are more moving parts and the changing shape of the array has complicated effects on the signals. In this paper, we mathematically analyzed the effects of deformation on beamformer performance and considered several ways to compensate for it.


  1. EchoXL
  2. Cooperative Listening Devices
  3. Massive Distributed Microphone Array Dataset
  4. Studio-Quality Recording Devices for Smart Home Data Collection
  5. Sound Source Localization
  6. Augmented Listening at Engineering Open House 2019
  7. Capturing Data From a Wearable Microphone Array
  8. Talking Heads
  9. How loud is my audio device? Thinking about safe listening through the new WHO-ITU Standard