Tutorial 3: Amplitude Clipping Effects

In this section, I implemented functions to model clipping effects. I modeled two types of clipping. The distortion function multiplies the input signal by a specified amplification factor and saturates all samples above a certain threshold, so its transfer function has a sharp edge at the threshold.
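The original source code is linked at the end of the post. As a minimal sketch of this hard-clipping behavior (the function name, gain, and threshold values here are my own illustrative choices, not taken from the tutorial's code):

```python
import numpy as np

def distortion(x, gain=10.0, threshold=0.8):
    """Hard-clipping distortion (illustrative sketch).

    Amplifies the input by `gain`, then saturates every sample whose
    magnitude exceeds `threshold`. The resulting transfer function is
    linear below the threshold and flat above it, giving the sharp
    edge described in the text.
    """
    y = gain * np.asarray(x, dtype=float)
    return np.clip(y, -threshold, threshold)

# Small samples pass through amplified; large ones are saturated.
print(distortion(np.array([0.01, 0.2, -0.5])))  # [ 0.1  0.8 -0.8]
```

With a gain of 10 and a threshold of 0.8, a sample of 0.01 is simply amplified to 0.1, while 0.2 and -0.5 would map to 2.0 and -5.0 and are therefore pinned at the threshold.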


The overdrive function uses a soft-clipping approach, in which the output amplitude approaches an asymptotic value as the input amplitude increases.


Source code and sample audio: tutorial3
