Our team at the Illinois Augmented Listening Laboratory is developing technologies that we hope will change the way that people hear. But the technology is only one half of the story. If we want our research to make a difference in people’s lives, we have to talk to the people who will use that technology.
Our research group is participating in the National Science Foundation Innovation Corps, a technology translation program designed to get researchers out of the laboratory to talk to real people. By understanding the needs of the people who will benefit from our research, we can make sure we’re studying the right problems and developing technology that will actually be used. We want to hear from:
People with hearing loss who use hearing aids and assistive listening devices
People who don’t use hearing technology but sometimes have trouble hearing
Parents, teachers, school administrators, and others who work with students with hearing loss
Hearing health professionals
People who work in the hearing technology industry
This is not a research study: there are no surveys, tests, or consent forms. We want to have a brief, open-ended conversation about your needs, the technology that you use now, and what you want from future hearing technology.
To schedule a call with our team, please reach out to Ryan Corey (email@example.com). Most calls last about 15 minutes and take place over video, though we’re happy to work around your communication needs.
Have you ever wondered what it would sound like to listen through sixteen ears? This past March, hundreds of Central Illinois children and families experienced microphone-array augmented listening technology firsthand at the annual Engineering Open House (EOH) sponsored by the University of Illinois College of Engineering. At the event, which attracts thousands of elementary-, middle-, and high-school students and local community members, visitors learned about technologies for enhancing human and machine listening.
Listen up (or down): The technology of directional listening
Our team’s award-winning exhibit introduced visitors to several directional listening technologies, which enhance audio by isolating sounds that come from a certain direction. Directional listening is important when the sounds we want to hear are far away, or when there are many different sounds coming from different directions—like at a crowded open house! There are two ways to focus on sounds from one direction: we can physically block sounds from directions we don’t want, or we can use the mathematical tools of signal processing to cancel out those unwanted sounds. At our exhibit in Engineering Hall, visitors could try both.
This carefully designed mechanical listening device is definitely not an oil funnel from the local hardware store.
The oldest and most intuitive listening technology is the ear horn, pictured above. This horn literally funnels sound waves from the direction in which it is pointed. The effect is surprisingly strong, and there is a noticeable difference in the acoustics of the two horns we had on display. The shape of the horn affects both its directional pattern and its effect on different sound wavelengths, which humans perceive as pitch. The toy listening dish shown below operates on the same principle, but also includes an electronic amplifier. The funnels work much better for directional listening, but the spy gadget is the clear winner for style.
This toy listening dish is not very powerful, but it certainly looks cool!
These mechanical hearing aids rely on physical acoustics to isolate sound from one direction. To listen in a different direction, the user needs to physically turn them in that direction. Modern directional listening technology uses microphone arrays, which are groups of microphones spread apart from each other in space. We can use signal processing to compare and combine the signals recorded by the microphones to tell what direction a sound came from or to listen in a certain direction. We can change the direction using software, without physically moving the microphones. With sophisticated array signal processing, we can even listen in multiple directions at once, and can compensate for reflections and echoes in the room.
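The simplest version of this idea is delay-and-sum beamforming: delay each microphone's signal so that sound arriving from the chosen direction lines up across the array, then average. Here is a rough sketch for a linear array, assuming far-field sound and using NumPy; the function and variable names are my own, not from any actual product.

```python
import numpy as np

def delay_and_sum(signals, mic_positions_m, angle_deg, fs, c=343.0):
    """Steer a linear microphone array toward `angle_deg`.

    signals:         (num_mics, num_samples) array of recordings
    mic_positions_m: mic positions along the array axis, in meters
    angle_deg:       look direction (0 = broadside, 90 = endfire)
    fs:              sample rate in Hz;  c: speed of sound in m/s
    """
    signals = np.asarray(signals, dtype=float)
    num_mics, num_samples = signals.shape

    # Far-field arrival time at each mic, relative to the array origin.
    tau = np.asarray(mic_positions_m) * np.sin(np.radians(angle_deg)) / c
    # Delay each mic so sound from the look direction lines up in time.
    delays = tau.max() - tau

    # Apply the (possibly fractional) delays as phase shifts in the
    # frequency domain, then average across microphones. Sounds from
    # other directions stay misaligned and partially cancel.
    freqs = np.fft.rfftfreq(num_samples, d=1 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    spectra *= np.exp(-2j * np.pi * freqs * delays[:, None])
    return np.fft.irfft(spectra, n=num_samples, axis=1).mean(axis=0)
```

Steering to a new direction only changes the `delays` vector, which is why an array can "point" somewhere else in software without anyone physically moving the microphones.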
With World Hearing Day falling on March 3rd, the World Health Organization and International Telecommunication Union (WHO-ITU) released a new standard for safe listening devices on February 12th, 2019. While our group researches ways to improve hearing through array processing, we also believe that preventing hearing loss and taking care of our hearing is important. Hearing loss is usually permanent: there is currently no treatment that can restore hearing once it is lost. In this post, I will review the new WHO-ITU standard for safe listening devices, and I will also test how loud my personal audio device is relative to the new standard.
Summary of WHO-ITU standard for safe listening devices
The new WHO-ITU standard for safe listening devices recommends that personal audio devices include the following four functions (the original list can be found here):
“Sound allowance” function: software that tracks the level and duration of the user’s exposure to sound as a percentage used of a reference exposure.
Personalized profile: an individualized listening profile, based on the user’s listening practices, which informs the user of how safely (or not) he or she has been listening and gives cues for action based on this information.
Volume limiting options: options to limit the volume, including automatic volume reduction and parental volume control.
General information: information and guidance to users on safe listening practices, both through personal audio devices and for other leisure activities.
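The "sound allowance" function above is essentially a noise-dose calculator. A minimal sketch of the idea, assuming the WHO-ITU reference exposure of 80 dB for 40 hours per week and the equal-energy principle (each 3 dB increase in level uses up the allowance roughly twice as fast); the function name and interface are my own invention for illustration:

```python
def weekly_allowance_used(exposures):
    """Fraction of the weekly reference exposure (80 dB for 40 h) consumed.

    `exposures` is a list of (level_db, hours) pairs. Under the
    equal-energy principle, the allowed listening time at a level L
    is 40 h scaled by 10 ** ((80 - L) / 10), so listening 3 dB louder
    consumes the allowance about twice as fast.
    """
    used = 0.0
    for level_db, hours in exposures:
        allowed_hours = 40 * 10 ** ((80 - level_db) / 10)
        used += hours / allowed_hours
    return used

# Example: 10 hours at 80 dB plus 2 hours at 92 dB over one week
print(f"{weekly_allowance_used([(80, 10), (92, 2)]):.0%}")  # prints "104%"
```

A device implementing the standard would track this percentage automatically and warn the user as it approaches 100%.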
Also, as written in the introduction of Safe Listening Devices and Systems, WHO-ITU considers a safe level of listening to be sound below 80 dB for a maximum of 40 hours per week. This recommendation is stricter than the standard currently enforced by OSHA (Occupational Safety and Health Administration), which sets a PEL (permissible exposure limit) of 90 dBA for 8 hours per day, with the allowable exposure time halving for each 5 dBA increase in the noise level. NIOSH (The National Institute for Occupational Safety and Health) has a different, stricter set of recommendations concerning noise exposure: 8 hours of exposure at 85 dBA, with the exposure time halving for each 3 dBA increase in the noise level. Under this recommendation, workers should be exposed to 100 dBA noise for no more than 15 minutes per day!
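These occupational limits follow a simple halving rule: starting from 8 hours at a reference level, the permissible time halves for each "exchange rate" increase in level (5 dB for OSHA, 3 dB for NIOSH). A quick sketch of the arithmetic, with function names of my own choosing:

```python
def permissible_minutes(level_dba, reference_dba, exchange_rate_db):
    """Allowable daily exposure (minutes) at a given A-weighted level.

    Starting from 8 hours at the reference level, the permissible
    time halves for each `exchange_rate_db` increase in level.
    """
    return 8 * 60 / 2 ** ((level_dba - reference_dba) / exchange_rate_db)

def osha_minutes(level_dba):
    # OSHA PEL: 90 dBA for 8 hours, 5 dB exchange rate
    return permissible_minutes(level_dba, 90, 5)

def niosh_minutes(level_dba):
    # NIOSH REL: 85 dBA for 8 hours, 3 dB exchange rate
    return permissible_minutes(level_dba, 85, 3)

print(niosh_minutes(100))  # 15.0 -- the 15-minute figure quoted above
print(osha_minutes(100))   # 120.0 -- OSHA would still allow 2 hours
```

The comparison at 100 dBA shows just how much difference the exchange rate makes: the same noise level is allowed for 15 minutes under NIOSH but 2 hours under OSHA.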
Augmented listening systems “remix” the sounds we perceive around us, making some louder and some quieter.
I am one of millions of people who suffer from hearing loss. For my entire life I’ve known the frustration of asking people to repeat themselves, struggling to communicate over the phone, and skipping social events because I know they’ll be too noisy. Hearing aids do help, but they don’t work well in the noisy, crowded situations where I need them the most. That’s why I decided to devote my PhD thesis to improving the performance of hearing aids in noisy environments.
As my research progressed, I realized that this problem is not limited to hearing aids, and that the technologies I am developing could also help people who don't suffer from hearing loss. Over the last few years, there has been rapid growth in a product category that I call augmented listening (AL): technologies that enhance human listening abilities by modifying the sounds people hear in real time. Augmented listening devices include:
traditional hearing aids, which are prescribed by a clinician to patients with hearing loss;
low-cost personal sound amplification products (PSAPs), which are ostensibly for normal-hearing listeners;
advanced headphones, sometimes called “hearables,” that incorporate listening enhancement as well as features like heart-rate sensing; and
augmented- and mixed-reality headsets, which supplement real-world sound with extra information.
These product categories have been converging in recent years as hearing aids add new consumer-technology features like Bluetooth and headphone products promise to enhance real-world sounds. Recent regulatory changes that allow hearing aids to be sold over the counter will also help to shake up the market.