Bandwidth extension with air and body-conduction microphones for speech enhancement

This work will be presented at the 184th Meeting of the Acoustical Society of America, May 2023, in Chicago, Illinois.

Conventional microphones can be referred to as air-conduction mics (ACMs) because they capture sound that propagates through the air. ACMs can record wideband audio, but in noisy scenarios they also pick up sound from undesired sources.

In contrast, bone-conduction mics (BCMs) are worn directly on a person to detect sounds propagating through the body. While this can isolate the wearer’s speech, it also severely degrades the quality. We can model this degradation as a low-pass filter.
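Because the degradation is modeled as a low-pass filter, BCM-like audio can be simulated from clean speech. The sketch below illustrates this in Python; the 1 kHz cutoff and filter order are illustrative assumptions, not measurements of a real bone-conduction mic.

```python
# Simulate BCM-style degradation as a low-pass filter (cutoff and order are
# illustrative assumptions, not measurements of a real bone-conduction mic).
from scipy.signal import butter, sosfiltfilt

def simulate_bcm(speech, fs=16000, cutoff_hz=1000, order=6):
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, speech)  # zero-phase low-pass "BCM" signal
```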

To enhance a target talker’s speech, we can use the BCM for a noise-robust speech estimate and combine it with the ACM audio by applying a ratio mask in the time-frequency domain.
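A minimal sketch of this combination step is shown below, assuming the mask is formed from the ratio of BCM to ACM magnitudes and clipped to [0, 1]; the actual mask estimation may differ.

```python
# Sketch of ratio-mask combination in the time-frequency domain. The mask rule
# below (clipped BCM/ACM magnitude ratio) is an illustrative assumption.
import numpy as np
from scipy.signal import stft, istft

def ratio_mask_enhance(acm, bcm, fs=16000, nperseg=512):
    _, _, ACM = stft(acm, fs=fs, nperseg=nperseg)   # noisy, wideband
    _, _, BCM = stft(bcm, fs=fs, nperseg=nperseg)   # noise-robust, low-passed
    mask = np.clip(np.abs(BCM) / (np.abs(ACM) + 1e-8), 0.0, 1.0)
    _, enhanced = istft(mask * ACM, fs=fs, nperseg=nperseg)  # keeps ACM phase
    return enhanced
```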

 

Factorization methods outperform parametric methods for BWE of female (left) and male (right) speech. The ensemble systems (solid) can significantly outperform baseline systems (striped)

However, the BCM only provides good estimates of the lower frequencies. Therefore, we need to estimate the missing upper frequencies. This task is called bandwidth extension (BWE), and can be solved in a variety of ways.

We found that ensemble factorization approaches can significantly outperform other low-compute BWE methods.

 

The ensemble factorization method uses two expert models for voiced and unvoiced speech segments
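To make the factorization idea concrete, here is a minimal single-dictionary sketch: non-negative spectral bases are learned from wideband training speech, activations are fit to the observed low band, and the missing upper band is read off the high-frequency rows of the bases. The dictionary size and band split are assumptions, and the full ensemble would use separate voiced and unvoiced dictionaries selected per frame.

```python
# Single-dictionary NMF bandwidth extension sketch (the real ensemble would use
# separate voiced/unvoiced dictionaries; sizes and the band split are assumed).
import numpy as np
from scipy.optimize import nnls
from sklearn.decomposition import NMF

def learn_bases(wideband_mag, n_bases=32):
    # wideband_mag: (n_freq_bins, n_frames) magnitude spectrogram of training speech
    model = NMF(n_components=n_bases, init="nndsvda", max_iter=400)
    return model.fit_transform(wideband_mag)          # (n_freq_bins, n_bases)

def extend_frame(bases, lowband_frame):
    # lowband_frame: observed low-band magnitudes for one frame
    n_low = len(lowband_frame)
    low_bases, high_bases = bases[:n_low], bases[n_low:]
    activations, _ = nnls(low_bases, lowband_frame)   # fit activations to low band
    return high_bases @ activations                   # estimated upper-band magnitudes
```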

We provide listening examples of our proposed ensemble system. The audio data was generated from a simulated indoor, multi-talker scene.

Audio examples for a female and a male talker: the raw BCM and ACM recordings, the baseline BWE output, and the proposed ensemble output.

Investigating sample bias towards languages in audio super-resolution

This work was presented at the 2023 Undergraduate Research Symposium, held by the University of Illinois Urbana-Champaign (Poster 61).

Speech audio sounds good when sampled at 16 kHz; however, legacy infrastructure and certain microphones can only capture 8 kHz audio. This can significantly reduce the perceived clarity and intelligibility of speech.

Deep learning provides a way to estimate the lost frequency components, thereby improving quality. This task is called audio super-resolution (or bandwidth extension).
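For example, training pairs can be simulated by discarding the upper half of the spectrum of 16 kHz recordings. The polyphase resampling below is a common choice for this, assumed here for illustration rather than taken from our exact pipeline.

```python
# Simulate 8 kHz-limited inputs from 16 kHz targets for super-resolution
# training (the resampling method is a common choice, assumed for illustration).
from scipy.signal import resample_poly

def make_training_pair(target_16k):
    narrow_8k = resample_poly(target_16k, up=1, down=2)  # drop content above 4 kHz
    input_16k = resample_poly(narrow_8k, up=2, down=1)   # back to the 16 kHz grid
    return input_16k, target_16k                         # model input, model target
```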

Typically, large datasets of clean audio are required to train such models. However, these datasets do not sufficiently represent all languages, and it is not always feasible to train a separate model for every language. We investigate how a model trained only on high-quality English recordings generalizes to lower-quality recordings of unseen languages.

The specifics of our model are discussed in our poster at the Undergraduate Research Symposium. Here, we present some results and audio.

We find that our model generalizes well to some languages but not others. We provide example audio below, with languages ordered by model accuracy: English is the most accurate and Catalan the least.

Audio examples for each language (target at 16 kHz, input at 8 kHz, and model output at 16 kHz), ordered from most to least accurate: English, Korean, Twi, German, Nepali, Esperanto, Catalan.

We conjecture that the variation in performance is correlated with the linguistic similarity between English, the training language, and each inference language. We reserve this analysis for future work.

*Target audio samples are from the Common Voice corpus, which contains recordings in over 100 languages.

Simulating group conversations with talking heads


This work was presented at the 184th Meeting of the Acoustical Society of America, May 2023, in Chicago, Illinois.

This project is part of the larger Mechatronic Acoustic Research System, a tool for roboticized, automatic audio data collection.

In group conversations, a listener will hear speech coming from many directions of arrival. Human listeners can discern where a particular sound is coming from based on the difference in volume and timing of sound at their left and right ears: these are referred to in the literature as interaural level and time differences.

Diagram of interaural effects

While the brain automatically performs this localization, computers must rely on algorithms. Developing algorithms that are sufficiently accurate, quick, and robust is the work of acoustical signal processing researchers. To do so, researchers need datasets of spatial audio that mimic what is sensed by the ears of a real listener.
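As a toy example of what such an algorithm works with, the sketch below estimates the two binaural cues from a stereo recording: the interaural time difference from the cross-correlation peak and the interaural level difference from the RMS ratio. Real localization algorithms are considerably more sophisticated and robust than this.

```python
# Toy estimation of interaural cues from a binaural recording; real
# localization algorithms are far more robust than this sketch.
import numpy as np

def interaural_cues(left, right, fs):
    # ITD: lag of the cross-correlation peak (positive lag = left ear delayed)
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    itd_seconds = lag / fs
    # ILD: level ratio between the ears, in decibels
    rms = lambda x: np.sqrt(np.mean(np.square(x)) + 1e-12)
    ild_db = 20 * np.log10(rms(left) / rms(right))
    return itd_seconds, ild_db
```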

Acoustic head simulators provide a solution for generating such datasets. These simulators are designed to have absorptive and structural properties similar to those of a real human head and, unlike real humans, can be stationed in a lab 24/7 and actuated for precise, repeatable motion.

Head and torso simulators (HATS) from Bruel & Kjaer, an HBK company.

However, research-grade acoustic head simulators can be prohibitively expensive. To achieve high levels of realism, expensive materials and actuators are used, which raises typical prices into the range of tens of thousands of dollars. As such, very few labs will have access to multiple head simulators, which is necessary for simulating group conversations.

We investigate the application of 3D printing technology to the fabrication of head simulators. In recent years, 3D printing has become a cheap and accessible means of producing highly complicated structures. This makes it uniquely suited to the complex geometry of the human ears and head, both of which significantly affect the interaural level and time differences.

Exploded-view render of head simulators, produced by Zhihao Tang for TE401F in 2021

Prototype 3D printed ears, which affect the binaural cues

To allow for movement of each individual head, we also design a multi-axial turret that the head can lock onto. This lets the simulators nod and turn, mimicking natural gestures. Researchers can use this feature to evaluate the robustness and responsiveness of their algorithms to spatial perturbations.

3D printed head simulator mounted on a multi-axial turret for motion.

By designing a 3D printable, actuated head simulator, we aim to enable anyone to fabricate many such devices for their own research.

 

Motion and Audio, with Robots

This post describes our paper “Mechatronic Generation of Datasets for Acoustics Research,” presented at the International Workshop on Acoustic Signal Enhancement (IWAENC) in September 2022.

Creating datasets is expensive, be it in terms of time or funding. This is especially true for spatial audio: Some applications require that hundreds of recordings are taken from specific regions in a room, while others involve arranging many microphones and loudspeakers to mimic real-life scenarios – for instance, a conference. Few researchers have access to dedicated recording spaces that can accurately portray acoustically-interesting environments, and fewer still are able to create dynamic scenes where microphones and speakers move precisely to replicate how people walk and talk.

To support the creation of these types of datasets, we propose the Mechatronic Acoustic Research System, or MARS for short. We envision MARS as a robot-enabled recording space that researchers would have remote access to. Users could emulate a wide variety of acoustic environments and take recordings with little effort. Our initial concept is for a website design interface that can be used to specify a complicated experiment, which a robot system then automatically recreates.
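Purely as an illustration of the concept, an experiment specification might look something like the snippet below; the field names and structure are hypothetical, not the actual MARS interface.

```python
# Hypothetical experiment specification for illustration only; these field
# names are not the actual MARS schema.
experiment = {
    "room": {"size_m": [6.0, 4.0, 3.0], "surface_preset": "conference_room"},
    "sources": [
        {"type": "loudspeaker", "audio": "talker_1.wav",
         "path_m": [[1.0, 1.0], [2.5, 1.0]]},          # moves while talking
        {"type": "loudspeaker", "audio": "talker_2.wav",
         "path_m": [[4.0, 3.0]]},                      # stationary
    ],
    "receivers": [
        {"type": "binaural_head", "position_m": [3.0, 2.0], "yaw_deg": 90},
    ],
    "duration_s": 30,
}
```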


How the MARS frontend and backend link together
