Bandwidth extension with air and body-conduction microphones for speech enhancement

This work will be presented at the 184th Meeting of the Acoustical Society of America, May 2023, in Chicago, Illinois.

Conventional microphones can be referred to as air-conduction mics (ACMs) because they capture sound that propagates through the air. ACMs can record wideband audio, but in noisy scenarios they also pick up sound from undesired sources.

In contrast, bone-conduction mics (BCMs) are worn directly on a person to detect sounds propagating through the body. While this can isolate the wearer’s speech, it also severely degrades the quality. We can model this degradation as a low-pass filter.
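
To make that model concrete, here is a minimal sketch that simulates BCM pickup by low-pass filtering clean speech. The 1 kHz cutoff and the Butterworth design are illustrative assumptions, not measured values.

```python
# Minimal sketch: simulate body-conduction pickup as a low-pass filter.
# The 1 kHz cutoff and the Butterworth design are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def simulate_bcm(speech: np.ndarray, fs: int, cutoff_hz: float = 1000.0) -> np.ndarray:
    """Apply a low-pass filter to clean speech to mimic BCM degradation."""
    sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, speech)
```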

To enhance a target talker’s speech, we can use the BCM for a noise-robust speech estimate and combine it with the ACM audio by applying a ratio mask in the time-frequency domain.
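
A minimal sketch of that fusion step is below, assuming both signals are time-aligned and sampled at the same rate. The mask formula is a crude placeholder that trusts time-frequency bins where the BCM has energy; it is not the mask estimator from our paper.

```python
# Minimal sketch: apply a ratio mask, derived from the noise-robust BCM signal,
# to the noisy wideband ACM signal in the time-frequency (STFT) domain.
# The mask below is a crude placeholder, not the estimator from the paper.
import numpy as np
from scipy.signal import stft, istft

def mask_enhance(acm: np.ndarray, bcm: np.ndarray, fs: int, nfft: int = 512) -> np.ndarray:
    _, _, A = stft(acm, fs=fs, nperseg=nfft)   # noisy, wideband ACM
    _, _, B = stft(bcm, fs=fs, nperseg=nfft)   # clean but low-pass BCM
    mask = np.clip(np.abs(B) / (np.abs(A) + 1e-8), 0.0, 1.0)
    _, enhanced = istft(mask * A, fs=fs, nperseg=nfft)
    return enhanced
```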


Factorization methods outperform parametric methods for BWE of female (left) and male (right) speech. The ensemble systems (solid) significantly outperform the baseline systems (striped).

However, the BCM only provides good estimates of the lower frequencies. Therefore, we need to estimate the missing upper frequencies. This task is called bandwidth extension (BWE), and can be solved in a variety of ways.

We found that ensemble factorization approaches can significantly outperform other low-compute BWE methods.


The ensemble factorization method uses two expert models, one for voiced and one for unvoiced speech segments.
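
To show the structure of the ensemble (not the factorization models themselves), here is a minimal sketch that routes each frame to a voiced or unvoiced expert. The simple energy and zero-crossing voicing detector and the two expert functions are hypothetical stand-ins.

```python
# Minimal sketch: per-frame ensemble that routes voiced frames to one expert
# and unvoiced frames to another. The voicing detector (energy + zero-crossing
# rate) and the expert models themselves are illustrative placeholders.
import numpy as np

def is_voiced(frame: np.ndarray, energy_thresh: float = 1e-4, zcr_thresh: float = 0.25) -> bool:
    energy = np.mean(frame ** 2)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    return energy > energy_thresh and zcr < zcr_thresh

def ensemble_bwe(frames, voiced_expert, unvoiced_expert):
    """Apply the appropriate expert model to each frame of narrowband speech."""
    return [voiced_expert(f) if is_voiced(f) else unvoiced_expert(f) for f in frames]
```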

We provide listening examples of our proposed ensemble system. The audio data was generated from a simulated indoor, multi-talker scene.

Listening examples (female and male talkers): BCM input, ACM input, baseline enhancement, and the proposed ensemble system.

Enhancing Group Conversations with Smartphones and Hearing Devices

This post describes our paper “Adaptive Crosstalk Cancellation and Spatialization for Dynamic Group Conversation Enhancement Using Mobile and Wearable Devices,” presented at the International Workshop on Acoustic Signal Enhancement (IWAENC) in September 2022.

One of the most common complaints from people with hearing loss – and everyone else, really – is that it’s hard to hear in noisy places like restaurants. Group conversations are especially difficult since the listener needs to keep track of multiple people who sometimes interrupt or talk over each other. Conventional hearing aids and other listening devices don’t work well for noisy group conversations. Our team at the Illinois Augmented Listening Laboratory is developing systems to help people hear better in group conversations by connecting hearing devices with other nearby devices. Previously, we showed how wireless remote microphone systems can be improved to support group conversations and how a microphone array can enhance talkers in the group while removing outside noise. But both of those approaches rely on specialized hardware, which isn’t always practical. What if we could build a system using devices that users already have with them?

We can connect together hearing devices and smartphones to enhance speech from group members and remove unwanted background noise.

In this work, we enhance a group conversation by connecting together the hearing devices and mobile phones of everyone in the group. Each user wears a pair of earpieces – which could be hearing aids, “hearables”, or wireless earbuds – and places their mobile phone on the table in front of them. The earpieces and phones all transmit audio data to each other, and we use adaptive signal processing to generate an individualized sound mixture for each user. We want each user to be able to hear every other user in the group, but not background noise from other people talking nearby. We also want to remove echoes of the user’s own voice, which can be distracting. And as always, we want to preserve spatial cues that help users tell which direction sound is coming from. Those spatial cues are especially important for group conversations where multiple people might talk at once.

Continue reading

Turning the Music Down with Wireless Assistive Listening Systems

This post accompanies our presentation “Turn the music down! Repurposing assistive listening broadcast systems to remove nuisance sounds” from the Acoustical Society of America meeting in May 2022.

It is often difficult to hear over loud music in a bar or restaurant. What if we could remove the annoying music while hearing everything else? With the magic of adaptive signal processing, we can!

To do that, we’ll use a wireless assistive listening system (ALS). An ALS is usually used to enhance sound in theaters, places of worship, and other venues with sound systems. It transmits the sound coming over the speakers directly to the user’s hearing device, making it louder and cutting through noise and reverberation. Common types of ALS include infrared (IR) or frequency modulation (FM) transmitters, which work with dedicated headsets, and induction loops, which work with telecoils built into hearing devices.

We can instead use those same systems to cancel the sound at the ears while preserving everything else. We use an adaptive filter to predict the music as heard at the listener’s ears, then subtract it out. What’s left over is all the other sound in the room, including the correct spatial cues. The challenge is adapting as the listener moves.
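
At its core this is an adaptive noise canceller: the broadcast audio is the reference, the ear microphone is the primary input, and the filter learns the acoustic path from the loudspeakers to the ear. Below is a minimal normalized-LMS (NLMS) sketch; the filter length and step size are illustrative, not the values used in our demonstration.

```python
# Minimal sketch: NLMS adaptive canceller. The reference is the music received
# over the assistive listening broadcast; the primary input is the ear microphone.
# Filter length and step size are illustrative, not tuned values from the talk.
import numpy as np

def nlms_cancel(ear_mic: np.ndarray, music_ref: np.ndarray,
                filter_len: int = 1024, mu: float = 0.1, eps: float = 1e-6) -> np.ndarray:
    w = np.zeros(filter_len)                      # estimate of loudspeaker-to-ear path
    out = np.zeros_like(ear_mic, dtype=float)
    for n in range(filter_len, len(ear_mic)):
        x = music_ref[n - filter_len:n][::-1]     # most recent reference samples
        y = w @ x                                 # predicted music at the ear
        e = ear_mic[n] - y                        # everything except the music
        w += mu * e * x / (x @ x + eps)           # NLMS update
        out[n] = e
    return out
```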

The video below demonstrates the system using a high-end FM wireless system. The dummy head wears a set of microphones that simulate a hearing device; you’ll be hearing through its ears. The FM system broadcasts the same sound being played over the speakers. An adaptive filter cancels it so you can hear my voice but not the music.

Group Conversation Enhancement

This post accompanies two presentations titled “Immersive Conversation Enhancement Using Binaural Hearing Aids and External Microphone Arrays” and “Group Conversation Enhancement Using Wireless Microphones and the Tympan Open-Source Hearing Platform”, which were presented at the International Hearing Aid Research Conference (IHCON) in August 2022. The latter is part of a special session on open-source hearing tools.

Have you ever struggled to hear the people across from you in a crowded restaurant? Group conversations in noisy environments are among the most frustrating hearing challenges, especially for people with hearing loss, but conventional hearing devices don’t do much to help. They make everything louder, including the background noise. Our research group is developing new methods to make it easier to hear in loud noise. In this project, we focus on group conversations, where there are several users who all want to hear each other.

Conversation enhancement allows users within a group to hear each other while tuning out background noise.

A group conversation enhancement system should turn up the voices of users in the group while tuning out background noise, including speech from other people nearby. To do that, it needs to separate the speech of group members from that of non-members. It should handle multiple talkers at once, in case people interrupt or talk over each other. To help listeners keep track of fast-paced conversations, it should sound as immersive as possible. Specifically, it should have imperceptible delay and it should preserve spatial cues so that listeners can tell which direction each sound is coming from. And it has to do all that while all the users are constantly moving, such as turning to look at each other while talking.

Continue reading

Immersive Remote Microphone System on the Tympan Platform

This post accompanies our presentation “Immersive multitalker remote microphone system” at the 181st Acoustical Society of America Meeting in Seattle.

In our previous post, which accompanied a paper at WASPAA 2021, we proposed an improved wireless microphone system for hearing aids and other listening devices. Unlike conventional remote microphones, the proposed system works with multiple talkers at once, and it uses earpiece microphones to preserve the spatial cues that humans use to localize and separate sound. In that paper, we successfully demonstrated the adaptive filtering system in an offline laboratory experiment.
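
As a rough illustration of the adaptive binaural filtering idea (not the exact algorithm from the paper), the sketch below adapts a short FIR filter for each ear so that the filtered remote-microphone signal matches the talker’s sound at that earpiece microphone, which restores interaural cues. Filter length and step size are illustrative assumptions.

```python
# Minimal sketch: adapt left/right FIR filters so the low-noise remote-mic signal,
# after filtering, matches the talker's sound at each earpiece microphone.
# Playing back the filtered remote signals preserves binaural (spatial) cues.
# Filter length and step size are illustrative assumptions.
import numpy as np

def adapt_binaural(remote: np.ndarray, ear_left: np.ndarray, ear_right: np.ndarray,
                   L: int = 256, mu: float = 0.05, eps: float = 1e-6):
    w = {"left": np.zeros(L), "right": np.zeros(L)}
    out = {"left": np.zeros_like(remote, dtype=float),
           "right": np.zeros_like(remote, dtype=float)}
    ears = {"left": ear_left, "right": ear_right}
    for n in range(L, len(remote)):
        x = remote[n - L:n][::-1]
        for side in ("left", "right"):
            y = w[side] @ x                        # spatialized remote-mic sample
            e = ears[side][n] - y                  # mismatch with the earpiece mic
            w[side] += mu * e * x / (x @ x + eps)  # NLMS update
            out[side][n] = y
    return out["left"], out["right"]
```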

To see if it would work in a real-time, real-world listening system, we participated in an Acoustical Society of America hackathon using the open-source Tympan platform. The Tympan is an Arduino-based hearing aid development kit. It comes with high-quality audio hardware, a built-in rechargeable battery, a user-friendly Android app, a memory card for recording, and a comprehensive modular software library. Using the Tympan, we were able to quickly demonstrate our adaptive binaural filtering system in real hardware.

The Tympan processor connects to a stereo wireless microphone system and binaural earbuds.

Continue reading

Improving remote microphones for group conversations

This post accompanies the paper “Adaptive Binaural Filtering for a Multiple-Talker Listening System Using Remote and On-Ear Microphones” presented at WASPAA 2021 (PDF).

Wireless assistive listening technology

Hearing aids and other listening devices can help people to hear better by amplifying quiet sounds. But amplification alone is not enough in loud environments like restaurants, where the sound from a conversation partner is buried in background noise, or when the talker is far away, like in a large classroom or a theater. To make sound easier to understand, we need to bring the sound source closer to the listener. While we often cannot physically move the talker, we can do the next best thing by placing a microphone on them.

A remote microphone transmits sound from a talker to the listener's hearing device.

Remote microphones make it easier to hear by transmitting sound directly from a talker to a listener. Conventional remote microphones only work with one talker at a time.

When a remote microphone is placed on or close to a talker, it captures speech with lower noise than the microphones built into hearing aid earpieces. The sound also has less reverberation since it does not bounce around the room before reaching the listener. In clinical studies, remote microphones have been shown to consistently improve speech understanding in noisy environments. In our interviews of hearing technology users, we found that people who use remote microphones love them – but with the exception of K-12 schools, where remote microphones are often legally required accommodations, very few people bother to use them.

Continue reading

Dynamic Range Compression and Noise

This post accompanies our presentation “Dynamic Range Compression of Sound Mixtures” at the 2020 Acoustical Society of America meeting and our paper “Modeling the effects of dynamic range compression on signals in noise” in the Journal of the Acoustical Society of America (PDF).

Nearly every modern hearing aid uses an algorithm called dynamic range compression (DRC), which automatically adjusts the amplification of the hearing aid to make quiet sounds louder and loud sounds quieter. Although compression is one of the most important features of hearing aids, it might also be one of the reasons that they work so poorly in noisy environments. Hearing researchers have long known that when DRC is applied to multiple sounds at once, it can cause distortion and make background noise worse. Our research team is applying signal processing theory to understand why compression works poorly in noise and exploring new strategies for controlling loudness in noisy environments.
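
For readers unfamiliar with DRC, here is a minimal single-band, sample-by-sample compressor sketch that makes the gain rule concrete. The threshold, ratio, and time constants are illustrative, and hearing-aid compressors typically also apply frequency-dependent makeup gain, which is omitted here.

```python
# Minimal sketch: single-band dynamic range compressor with an envelope follower.
# Threshold, ratio, and attack/release times are illustrative values only.
import numpy as np

def compress(x: np.ndarray, fs: int, thresh_db: float = -40.0, ratio: float = 3.0,
             attack_ms: float = 5.0, release_ms: float = 50.0) -> np.ndarray:
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.zeros_like(x, dtype=float)
    for n, sample in enumerate(x):
        level = abs(sample)
        a = a_att if level > env else a_rel
        env = a * env + (1.0 - a) * level           # smoothed level estimate
        level_db = 20.0 * np.log10(env + 1e-12)
        if level_db > thresh_db:
            # Above threshold, reduce the excess level by the compression ratio.
            gain_db = (thresh_db - level_db) * (1.0 - 1.0 / ratio)
        else:
            gain_db = 0.0
        out[n] = sample * 10.0 ** (gain_db / 20.0)
    return out
```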

Continue reading

Source Separation using a Massive Number of Microphones

This post describes the results of using simple delay-and-sum beamforming for source separation with the Massive Distributed Microphone Array Dataset by Ryan Corey, Matt Skarha, and Professor Andrew Singer.

Source separation (isolating distinct, overlapping sound sources from one another and from diffuse background noise) usually works well in a small, quiet room with only a few talkers, but those conditions are rarely available. In a large, reverberant room with many talkers, it can be hard for a person or a speech recognition system to follow what any one talker is saying. Using source separation to improve intelligibility in that scenario is difficult without external information, which may itself be hard to obtain.

Many source separation methods work well only up to a certain number of talkers, typically not much more than four or five. Moreover, some of these methods require the number of microphones to equal the number of sources, and their results scale poorly, if at all, as more microphones are added. Limiting the number of microphones to the number of talkers will not work in these difficult scenarios, but adding more microphones can help because of the extra spatial information they provide. The motivation behind this experiment was therefore to find a source separation method, or combination of methods, that could leverage the “massive” number of microphones in the Massive Distributed Microphone Array Dataset to solve the particularly challenging problem of separating ten speech sources. Ideally, the algorithm would rely on as little external information as possible, instead exploiting the wealth of spatial information gathered by the microphone arrays distributed around the conference room. The delay-and-sum beamformer was a natural first choice for this task: it requires only the locations of the sources and microphones, and it inherently scales well to a large number of microphones.
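
For reference, here is a minimal delay-and-sum sketch: given source and microphone positions, it computes propagation delays, time-aligns each microphone signal toward the chosen source, and averages. It uses integer-sample delays and ignores per-microphone gains, which real implementations would handle more carefully. Steering toward each of the ten talkers in turn yields ten separated estimates.

```python
# Minimal sketch: delay-and-sum beamformer steered toward a known source location.
# Uses integer-sample delays for simplicity; real systems use fractional delays.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(mic_signals: np.ndarray, mic_pos: np.ndarray,
                  src_pos: np.ndarray, fs: int) -> np.ndarray:
    """mic_signals: (num_mics, num_samples); mic_pos: (num_mics, 3); src_pos: (3,)."""
    dists = np.linalg.norm(mic_pos - src_pos, axis=1)
    delays = np.round((dists - dists.min()) / SPEED_OF_SOUND * fs).astype(int)
    aligned = np.zeros_like(mic_signals, dtype=float)
    for m, d in enumerate(delays):
        # Advance each signal so all copies of the source line up in time.
        aligned[m, :mic_signals.shape[1] - d] = mic_signals[m, d:]
    return aligned.mean(axis=0)
```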

The Dataset

Layout of the conference room

The diagram above shows the setup for the Massive Distributed Microphone Array Dataset. There are two distinct types of arrays: wearable arrays, denoted by the letter W and numbered 1-4, and tabletop arrays, denoted by the letter T and numbered 1-12. Each wearable array has 16 microphones, and each tabletop array has 8.

Continue reading

Face masks make it harder to hear, but amplification can help

This post describes our recent paper “Acoustic effects of medical, cloth, and transparent face masks on speech signals” in The Journal of the Acoustical Society of America. This work was also discussed in The Hearing Journal and presented at the 179th Acoustical Society of America meeting.

Face masks are a critical tool in slowing the spread of COVID-19, but they also make communication more difficult, especially for people with hearing loss. Face masks muffle high-frequency speech sounds and block visual cues. Masks may be especially frustrating for teachers, who will be expected to wear them while teaching in person this fall. Fortunately, not all masks affect sound in the same way, and some masks are better than others. Our research team measured several face masks in the Illinois Augmented Listening Laboratory to find out which are the best for sound transmission, and to see whether amplification technology can help.

Several months into the pandemic, we now have access to a wide variety of face masks, including disposable medical masks and washable cloth masks in different shapes and fabrics. A recent trend is cloth masks with clear windows, which let listeners see the talker’s lips and facial expressions. In this study, we tested a surgical mask, N95 and KN95 respirators, several types of opaque cloth mask, two cloth masks with clear windows, and a plastic face shield.

Face masks ranked by high-frequency attenuation

We measured the masks in two ways. First, we used a head-shaped loudspeaker designed by recent Industrial Design graduate Uriah Jones to measure sound transmission through the masks. A microphone was placed at head height about six feet away, to simulate a “social-distancing” listener. The loudspeaker was rotated on a turntable to measure the directional effects of the mask. Second, we recorded a human talker wearing each mask, which provides more realistic but less consistent data. The human talker wore extra microphones on his lapel, cheek, forehead, and in front of his mouth to test the effects of masks on sound capture systems.
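
For readers who want to approximate the analysis, the hedged sketch below estimates a mask’s attenuation spectrum by comparing the power spectra of recordings made with and without the mask under identical conditions; the exact analysis in the paper may differ.

```python
# Minimal sketch: estimate a mask's attenuation spectrum by comparing recordings
# made with and without the mask using the same source signal and geometry.
# The FFT length and averaging choices are illustrative, not those from the paper.
import numpy as np
from scipy.signal import welch

def mask_attenuation_db(no_mask: np.ndarray, with_mask: np.ndarray, fs: int, nfft: int = 4096):
    f, p_ref = welch(no_mask, fs=fs, nperseg=nfft)
    _, p_mask = welch(with_mask, fs=fs, nperseg=nfft)
    atten_db = 10.0 * np.log10(p_mask / (p_ref + 1e-20) + 1e-20)
    return f, atten_db  # negative values indicate attenuation by the mask
```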

Continue reading