I have launched the new Listening Technology Lab website for my research group at UIC and DPI.
This website will no longer be updated.
I am delighted to announce that I have accepted a position as an assistant professor of Electrical and Computer Engineering at the University of Illinois Chicago! It is also a first-of-its-kind dual appointment with the Discovery Partners Institute, a new University of Illinois innovation hub in Chicago dedicated to equitable economic development.
I look forward to expanding my research program on assistive and augmentative listening technology. As I establish my independent research career, I hope to build new collaborations with audiologists and hearing scientists to investigate the clinical applications of advanced audio technology. I will also collaborate with other scientists in the Applied Research and Development group at DPI on broader problems in human-centric computing, sensor networks, robotics, privacy, and accessible technology.
I am currently seeking a few talented and passionate PhD students to help launch my new research group at UIC and DPI. If you are interested, please get in touch!
Over the next few weeks, I will be transitioning this website to a new UIC laboratory page.
This post describes our paper “Adaptive Crosstalk Cancellation and Spatialization for Dynamic Group Conversation Enhancement Using Mobile and Wearable Devices,” presented at the International Workshop on Acoustic Signal Enhancement (IWAENC) in September 2022.
One of the most common complaints from people with hearing loss – and everyone else, really – is that it’s hard to hear in noisy places like restaurants. Group conversations are especially difficult since the listener needs to keep track of multiple people who sometimes interrupt or talk over each other. Conventional hearing aids and other listening devices don’t work well for noisy group conversations. Our team at the Illinois Augmented Listening Laboratory is developing systems to help people hear better in group conversations by connecting hearing devices with other nearby devices. Previously, we showed how wireless remote microphone systems can be improved to support group conversations and how a microphone array can enhance talkers in the group while removing outside noise. But both of those approaches rely on specialized hardware, which isn’t always practical. What if we could build a system using devices that users already have with them?
In this work, we enhance a group conversation by connecting together the hearing devices and mobile phones of everyone in the group. Each user wears a pair of earpieces – which could be hearing aids, “hearables”, or wireless earbuds – and places their mobile phone on the table in front of them. The earpieces and phones all transmit audio data to each other, and we use adaptive signal processing to generate an individualized sound mixture for each user. We want each user to be able to hear every other user in the group, but not background noise from other people talking nearby. We also want to remove echoes of the user’s own voice, which can be distracting. And as always, we want to preserve spatial cues that help users tell which direction sound is coming from. Those spatial cues are especially important for group conversations where multiple people might talk at once.
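The full algorithm is described in the paper, but its core building block is an adaptive filter that learns how a remote signal arrives at each earpiece microphone. The sketch below is only illustrative and is not the paper's implementation: it assumes a single listener, uses a basic normalized LMS (NLMS) update, and the signal names, filter length, and step size are placeholders.

```python
import numpy as np

def nlms(ref, mic, taps=256, mu=0.5, eps=1e-6):
    """Adapt a filter so the filtered reference tracks the mic signal.
    Returns y (filtered reference) and e = mic - y (the residual)."""
    w = np.zeros(taps)
    y = np.zeros_like(mic, dtype=float)
    e = np.zeros_like(mic, dtype=float)
    for n in range(taps, len(mic)):
        x = ref[n - taps:n][::-1]            # most recent reference samples
        y[n] = w @ x
        e[n] = mic[n] - y[n]
        w += mu * e[n] * x / (x @ x + eps)   # normalized LMS update
    return y, e

def enhance_for_listener(phones, ear_left, ear_right, own_phone):
    """Build one listener's binaural mix from the other users' phone signals.
    `phones` holds the other users' phone recordings; `ear_left`/`ear_right`
    are the listener's earpiece microphones; `own_phone` is the listener's
    own phone, used to cancel their own voice."""
    mix_left = np.zeros_like(ear_left, dtype=float)
    mix_right = np.zeros_like(ear_right, dtype=float)
    for talker in phones:
        # Learn how this talker's voice arrives at each ear, then keep the
        # filtered (clean) version so the spatial cues are preserved.
        y_l, _ = nlms(talker, ear_left)
        y_r, _ = nlms(talker, ear_right)
        mix_left += y_l
        mix_right += y_r
    # Remove the listener's own voice, which leaks into the phone signals.
    _, mix_left = nlms(own_phone, mix_left)
    _, mix_right = nlms(own_phone, mix_right)
    return mix_left, mix_right
```

Each talker's phone provides a low-noise reference; adapting it against the earpiece signals recovers that talker with the listener's own spatial cues, and a final adaptive stage subtracts the listener's own voice that leaks in through the other phones.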
This post accompanies our presentation “Turn the music down! Repurposing assistive listening broadcast systems to remove nuisance sounds” from the Acoustical Society of America meeting in May 2022.
It is often difficult to hear over loud music in a bar or restaurant. What if we could remove the annoying music while hearing everything else? With the magic of adaptive signal processing, we can!
To do that, we’ll use a wireless assistive listening system (ALS). An ALS is usually used to enhance sound in theaters, places of worship, and other venues with sound systems. It transmits the sound coming over the speakers directly to the user’s hearing device, making it louder and cutting through noise and reverberation. Common types of ALS include infrared (IR) or frequency modulation (FM) transmitters, which work with dedicated headsets, and induction loops, which work with telecoils built into hearing devices.
We can instead use those same systems to cancel the sound at the ears while preserving everything else. We use an adaptive filter to predict the music as heard at the listener’s ears, then subtract it out. What’s left over is all the other sound in the room, including the correct spatial cues. The challenge is adapting as the listener moves.
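In signal-processing terms, this is classic adaptive noise cancellation: the wireless feed serves as a reference for the music, an adaptive filter learns the acoustic path from the loudspeakers to each ear-level microphone, and the filtered reference is subtracted from the microphone signal. Here is a minimal sketch of that idea using a normalized LMS update; the filter length, step size, and signal names are assumptions, not the settings from our demonstration.

```python
import numpy as np

def cancel_music(ear_mic, als_feed, taps=512, mu=0.3, eps=1e-6):
    """Subtract an adaptively filtered copy of the ALS music feed from an
    ear-level microphone.  What remains is everything else in the room,
    with the listener's own spatial cues intact."""
    w = np.zeros(taps)                       # estimated loudspeaker-to-ear path
    out = np.array(ear_mic, dtype=float)
    for n in range(taps, len(ear_mic)):
        x = als_feed[n - taps:n][::-1]       # recent reference (music) samples
        music_at_ear = w @ x                 # predicted music at the ear
        e = ear_mic[n] - music_at_ear        # residual: speech and room sound
        out[n] = e
        w += mu * e * x / (x @ x + eps)      # NLMS update keeps tracking
    return out

# Run one filter per ear so binaural cues are preserved, for example:
# left_clean = cancel_music(left_ear_mic, als_feed)
# right_clean = cancel_music(right_ear_mic, als_feed)
```

Because the filter keeps adapting on every sample, it can track the changing loudspeaker-to-ear paths as the listener moves around the room.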
The video below demonstrates the system using a high-end FM wireless system. The dummy head wears a set of microphones that simulate a hearing device; you’ll be hearing through its ears. The FM system broadcasts the same sound being played over the speakers. An adaptive filter cancels it so you can hear my voice but not the music.
This post accompanies two presentations titled “Immersive Conversation Enhancement Using Binaural Hearing Aids and External Microphone Arrays” and “Group Conversation Enhancement Using Wireless Microphones and the Tympan Open-Source Hearing Platform”, which were presented at the International Hearing Aid Research Conference (IHCON) in August 2022. The latter is part of a special session on open-source hearing tools.
Have you ever struggled to hear the people across from you in a crowded restaurant? Group conversations in noisy environments are among the most frustrating hearing challenges, especially for people with hearing loss, but conventional hearing devices don’t do much to help. They make everything louder, including the background noise. Our research group is developing new methods to make it easier to hear in loud noise. In this project, we focus on group conversations, where there are several users who all want to hear each other.
A group conversation enhancement system should turn up the voices of users in the group while tuning out background noise, including speech from other people nearby. To do that, it needs to separate the speech of group members from that of non-members. It should handle multiple talkers at once, in case people interrupt or talk over each other. To help listeners keep track of fast-paced conversations, it should sound as immersive as possible. Specifically, it should have imperceptible delay and it should preserve spatial cues so that listeners can tell which sound is coming from which direction. And it has to do all that while all the users are constantly moving, such as turning to look at each other while talking.
This post accompanies our presentation “Immersive multitalker remote microphone system” at the 181st Acoustical Society of America Meeting in Seattle.
In our previous post, which accompanied a paper at WASPAA 2021, we proposed an improved wireless microphone system for hearing aids and other listening devices. Unlike conventional remote microphones, the proposed system works with multiple talkers at once, and it uses earpiece microphones to preserve the spatial cues that humans use to localize and separate sound. In that paper, we successfully demonstrated the adaptive filtering system in an offline laboratory experiment.
To see if it would work in a real-time, real-world listening system, we participated in an Acoustical Society of America hackathon using the open-source Tympan platform. The Tympan is an Arduino-based hearing aid development kit. It comes with high-quality audio hardware, a built-in rechargeable battery, a user-friendly Android app, a memory card for recording, and a comprehensive modular software library. Using the Tympan, we were able to quickly demonstrate our adaptive binaural filtering system in real hardware.
This post accompanies the paper “Adaptive Binaural Filtering for a Multiple-Talker Listening System Using Remote and On-Ear Microphones” presented at WASPAA 2021 (PDF).
Hearing aids and other listening devices can help people to hear better by amplifying quiet sounds. But amplification alone is not enough in loud environments like restaurants, where the sound from a conversation partner is buried in background noise, or when the talker is far away, like in a large classroom or a theater. To make sound easier to understand, we need to bring the sound source closer to the listener. While we often cannot physically move the talker, we can do the next best thing by placing a microphone on them.
When a remote microphone is placed on or close to a talker, it captures speech with lower noise than the microphones built into hearing aid earpieces. The sound also has less reverberation since it does not bounce around the room before reaching the listener. In clinical studies, remote microphones have been shown to consistently improve speech understanding in noisy environments. In our interviews with hearing technology users, we found that people who use remote microphones love them – but with the exception of K-12 schools, where remote microphones are often legally required accommodations, very few people bother to use them.
This post accompanies our presentation “Dynamic Range Compression of Sound Mixtures” at the 2020 Acoustical Society of America meeting and our paper “Modeling the effects of dynamic range compression on signals in noise” in the Journal of the Acoustical Society of America (PDF).
Nearly every modern hearing aid uses an algorithm called dynamic range compression (DRC), which automatically adjusts the amplification of the hearing aid to make quiet sounds louder and loud sounds quieter. Although compression is one of the most important features of hearing aids, it might also be one of the reasons that they work so poorly in noisy environments. Hearing researchers have long known that when DRC is applied to multiple sounds at once, it can cause distortion and make background noise worse. Our research team is applying signal processing theory to understand why compression works poorly in noise and exploring new strategies for controlling loudness in noisy environments.
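To make the problem concrete, here is a rough sketch of a single-band, feed-forward compressor of the kind used in hearing aids; the threshold, ratio, and time constants are placeholder values rather than those of any real device. Notice that the gain is computed from the level of whatever reaches the microphone, so when speech and noise are compressed together, the noise helps determine the gain that gets applied to the speech.

```python
import numpy as np

def compress(x, fs, threshold_db=-30.0, ratio=3.0, attack_ms=5.0, release_ms=50.0):
    """Single-band feed-forward dynamic range compressor (placeholder settings)."""
    # One-pole smoothing coefficients for the level estimator
    att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))

    level_db = -100.0                        # running level estimate in dB
    y = np.zeros_like(x, dtype=float)
    for n, sample in enumerate(x):
        in_db = 20 * np.log10(abs(sample) + 1e-9)
        coeff = att if in_db > level_db else rel     # fast attack, slow release
        level_db = coeff * level_db + (1 - coeff) * in_db
        # Static curve: above the threshold, cut (1 - 1/ratio) dB of gain per dB
        over_db = max(level_db - threshold_db, 0.0)
        gain_db = -over_db * (1.0 - 1.0 / ratio)
        y[n] = sample * 10 ** (gain_db / 20)
    return y
```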
This post describes our recent paper “Acoustic effects of medical, cloth, and transparent face masks on speech signals” in The Journal of the Acoustical Society of America. This work was also discussed in The Hearing Journal and presented at the 179th Acoustical Society of America meeting.
Face masks are a critical tool in slowing the spread of COVID-19, but they also make communication more difficult, especially for people with hearing loss. Face masks muffle high-frequency speech sounds and block visual cues. Masks may be especially frustrating for teachers, who will be expected to wear them while teaching in person this fall. Fortunately, not all masks affect sound in the same way, and some masks are better than others. Our research team measured several face masks in the Illinois Augmented Listening Laboratory to find out which are the best for sound transmission, and to see whether amplification technology can help.
Several months into the pandemic, we now have access to a wide variety of face masks, including disposable medical masks and washable cloth masks in different shapes and fabrics. A recent trend is cloth masks with clear windows, which let listeners see the talker’s lips and facial expressions. In this study, we tested a surgical mask, N95 and KN95 respirators, several types of opaque cloth mask, two cloth masks with clear windows, and a plastic face shield.
We measured the masks in two ways. First, we used a head-shaped loudspeaker designed by recent Industrial Design graduate Uriah Jones to measure sound transmission through the masks. A microphone was placed at head height about six feet away, to simulate a “social-distancing” listener. The loudspeaker was rotated on a turntable to measure the directional effects of the mask. Second, we recorded a human talker wearing each mask, which provides more realistic but less consistent data. The human talker wore extra microphones on his lapel, cheek, forehead, and in front of his mouth to test the effects of masks on sound capture systems.
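As a rough illustration of how such recordings can be turned into an attenuation curve, the sketch below compares power spectra measured with and without a mask; the function and signal names are ours, and the paper's actual analysis may differ in detail.

```python
import numpy as np
from scipy.signal import welch

def mask_attenuation(no_mask_rec, with_mask_rec, fs, nperseg=4096):
    """Estimate how much a mask attenuates sound at each frequency by
    comparing recordings of the same test signal with and without the mask."""
    f, p_ref = welch(no_mask_rec, fs=fs, nperseg=nperseg)
    _, p_mask = welch(with_mask_rec, fs=fs, nperseg=nperseg)
    atten_db = 10 * np.log10(p_ref / (p_mask + 1e-12))
    return f, atten_db          # positive values mean the mask reduces level
```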
Our team at the Illinois Augmented Listening Laboratory is developing technologies that we hope will change the way that people hear. But the technology is only one half of the story. If we want our research to make a difference in people’s lives, we have to talk to the people who will use that technology.
Our research group is participating in the National Science Foundation Innovation Corps, a technology translation program designed to get researchers out of the laboratory to talk to real people. By understanding the needs of the people who will benefit from our research, we can make sure we’re studying the right problems and developing technology that will actually be used. We want to hear from:
This is not a research study: there are no surveys, tests, or consent forms. We want to have a brief, open-ended conversation about your needs, the technology that you use now, and what you want from future hearing technology.
To schedule a call with our team, please reach out to Ryan Corey (corey1@illinois.edu). Most calls last about 15 minutes and take place over video, though we’re happy to work around your communication needs.