Acoustic Impulse Responses for Wearable Audio Devices

Featured

This post describes our new wearable microphone impulse response data set, which is available for download from the Illinois Data Bank and is the subject of a paper at ICASSP 2019.

Acoustic impulse responses were measured from 24 source angles to 80 points across the body.

Have you ever been at a crowded party and struggled to hear the person next to you? Crowded, noisy places are some of the most difficult listening environments, especially for people with hearing loss. Noisy rooms are also a challenge for electronic listening systems, like teleconferencing equipment and smart speakers that recognize users’ voices. That’s why many conference room microphones and smart speakers use as many as eight microphones instead of just one or two. These arrays of microphones, which are usually laid out in a regular pattern like a circle, let the device focus on sounds coming from one direction and block out other sounds. Arrays work like camera lenses: larger lenses can focus light more narrowly, and arrays with more microphones spread out over a larger area can better distinguish between sounds from different directions.

Wearable microphone arrays

Microphone arrays are also sometimes used in listening devices, including hearing aids and the emerging product category of smart headphones. These array-equipped devices can help users to tune out annoying sounds and focus on what they want to hear. Unfortunately, most hearing aids only have two microphones spaced a few millimeters apart, so they aren’t very good at focusing in one direction. What if hearing aids—or smart headphones, or augmented reality headsets—had a dozen microphones instead of just two? What if they had one hundred microphones spread all over the user’s body, attached to their clothing and accessories? In principle, a large wearable array could provide far better sound quality than listening devices today.

Over the years, there have been several papers about wearable arrays: vests, necklaces, eyeglasses, helmets. It’s also a popular idea on crowdfunding websites. But there have been no commercially successful wearable microphone array products. Although several engineers have built these arrays, no one has rigorously studied their design tradeoffs. How many microphones do we need? How far apart should they be? Does it matter what clothes the user is wearing? How much better are they than conventional listening devices? We developed a new data set to help researchers answer these questions and to explore the possibilities of wearable microphone arrays.

Continue reading

What is augmented listening?

Featured

Augmented listening systems “remix” the sounds we perceive around us, making some louder and some quieter.

I am one of millions of people who suffer from hearing loss. For my entire life I’ve known the frustration of asking people to repeat themselves, struggling to communicate over the phone, and skipping social events because I know they’ll be too noisy.  Hearing aids do help, but they don’t work well in the noisy, crowded situations where I need them the most. That’s why I decided to devote my PhD thesis to improving the performance of hearing aids in noisy environments.

As my research progressed, I realized that this problem is not limited to hearing aids, and that the technologies I am developing could also help people who don’t suffer from hearing loss. Over the last few years, there has been rapid growth in a product category that I call augmented listening (AL): technologies that enhance human listening abilities by modifying the sounds they hear in real time. Augmented listening  devices include:

  • traditional hearing aids, which are prescribed by a clinician to patients with hearing loss;
  • low-cost personal sound amplification products (PSAPs), which are ostensibly for normal-hearing listeners;
  • advanced headphones, sometimes called “hearables,” that incorporate listening enhancement as well as features like heart-rate sensing; and
  • augmented- and mixed-reality headsets, which supplement real-world sound with extra information.

These product categories have been converging in recent years as hearing aids add new consumer-technology features like Bluetooth and headphone products promise to enhance real-world sounds. Recent regulatory changes that allow hearing aids to be sold over the counter will also help to shake up the market.

Continue reading

Capturing Data From a Wearable Microphone Array

Introduction

Constructing a microphone array is a challenge of its own, but how do we actually process the microphone array data to do things like filtering and beamforming? One solution is to store the data in off-chip memory for later processing. This approach is great for experimenting with different microphone arrays, since we can process the data offline and see which filter combinations work best on the data we collected. It also avoids having to change the hardware design every time we want to change the filter coefficients or the algorithm being implemented.

Overview of a basic microphone array system

Here's a quick refresher on the DE1-SoC, the development board we use to process the microphone array data.

The main components we use in this project are the GPIO pins, the off-chip DDR3 memory, the HPS, and the Ethernet port. The microphone array connects to the GPIO port of the FPGA. The digital I2S data is interpreted on the FPGA by deserializing it into samples. The 1 GB of off-chip memory is where the samples are stored for later processing. The HPS (hard processor system), which runs Linux, can grab the data from memory and store it on the SD card. Connecting the Ethernet port to a computer lets us grab the data from the FPGA seamlessly using shell and Python scripts.

Currently the system is set up to stream the samples from the microphone array to the output of the audio codec. The microphones on the left side are summed and sent to the left channel, and the microphones on the right side are summed and sent to the right channel. The microphone signals are not otherwise processed before being sent to the codec. Here is a block diagram of what the system looks like before we add a DMA interface to the system.
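As a rough illustration of that left/right mixing stage, here is a minimal SystemVerilog sketch that sums one side's microphones into a single output channel. The module, parameter, and port names are made up for this example and are not the actual project code; it assumes the I2S data has already been deserialized into signed samples.

module mic_sum #(
   parameter int NUM_MICS = 8,    // microphones on one side of the array
   parameter int WIDTH    = 16    // bits per sample
) (
   input  logic signed [WIDTH-1:0] mic_samples [NUM_MICS],
   output logic signed [WIDTH-1:0] channel_out
);
   // A few extra bits of headroom so summing eight samples cannot overflow
   logic signed [WIDTH+2:0] sum;

   always_comb begin
      sum = '0;
      for (int i = 0; i < NUM_MICS; i++)
         sum += mic_samples[i];
      channel_out = sum >>> 3;    // scale back down (divide by 8) before the codec
   end
endmodule

One instance of this module would feed the left channel of the codec and another the right channel.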

Continue reading

Talking Heads

Within the Augmented Listening team, it has been my goal to develop Speech Simulators for testing purposes. These would be distributed around the environment in a sort of ‘Cocktail Party’ scenario.

 

Why use a Speech Simulator instead of human subjects?

CONSISTENCY.
Human subjects can never say the same thing exactly the same way twice. By playing anechoic recordings of people speaking through loudspeakers, we can remove this source of human error from the experiment. We can also simulate the user's own voice as captured by a wearable microphone array.

 

Why not just use normal Studio Monitors?

While studio monitors are designed to have a flat frequency response, which would be perfect for this situation, their off-axis performance is not consistent with that of the human voice. Because most monitors use multiple drivers to cover the desired frequency range, their dispersion is also inconsistent across frequencies, especially near the crossover between drivers.

Continue reading

How loud is my audio device? : Thinking about safe listening through the new WHO-ITU Standard

With March 3rd being World Hearing Day, WHO-ITU (World Health Organization and International Telecommunication Union) released a new standard for safe listening devices on February 12th, 2019. Since our group's research aims to improve hearing through array processing, we also think that preventing hearing loss and taking care of our hearing is important. Hearing loss is essentially permanent, and there is currently no treatment that can restore hearing once it is lost. In this post, I will revisit the new WHO-ITU standard for safe listening devices and test how loud my personal audio device is with respect to it.

Summary of WHO-ITU standard for safe listening devices

In the new WHO-ITU standard for safe listening devices, WHO-ITU recommends that audio devices include the following four functions (the original list can be found here):

  • “Sound allowance” function: software that tracks the level and duration of the user’s exposure to sound as a percentage used of a reference exposure.
  • Personalized profile: an individualized listening profile, based on the user’s listening practices, which informs the user of how safely (or not) he or she has been listening and gives cues for action based on this information.
  • Volume limiting options: options to limit the volume, including automatic volume reduction and parental volume control.
  • General information: information and guidance to users on safe listening practices, both through personal audio devices and for other leisure activities.

Also, as it is written in the Introduction of Safe Listening Devices and Systems, WHO-ITU considers a safe level of listening to be sound below 80 dB for a maximum of 40 hours per week. This recommendation is stricter than the standard currently implemented by OSHA (Occupational Safety and Health Administration), which enforces a PEL (permissible exposure limit) of 90 dBA* for 8 hours per day, with the exposure time halving with each 5 dBA* increase in the noise level. NIOSH (The National Institute for Occupational Safety and Health) has a different set of recommendations concerning noise exposure: an exposure limit of 8 hours for noise at 85 dBA*, with the exposure time halving with each 3 dBA* increase in the noise level. Under this recommendation, workers should be exposed to 100 dBA* noise for only 15 minutes per day!
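To make the halving rules concrete, these limits can be written as simple formulas, where L is the noise level in dBA and T is the permissible exposure time per day: T = 8 hours × 2^(-(L - 85)/3) for the NIOSH recommendation, and T = 8 hours × 2^(-(L - 90)/5) for the OSHA limit. Plugging L = 100 dBA into the NIOSH formula gives 8/32 of an hour, which is the 15 minutes mentioned above.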

Continue reading

Using Notch for Low-Cost Motion Capture

This semester, I was fortunate to be able to toy around with a six-pack of Notch sensors and do some basic motion capture. Later in the semester, I was asked to do a basic comparison of existing motion capture technology that could be used for the tracking of microphone arrays.

Motion capture is necessary for certain projects in our lab because it allows us to track the positions of multiple microphones in 3D space. When recording audio, the locations of the microphones are usually fixed, with known differences in position. These known values allow us to determine the relative location of an audio source using triangulation.

For a moving microphone array, the position of each microphone (and the space between them) must be known in order to do correct localization calculations. Currently, our project lead Ryan Corey is using an ultrasonic localization system which requires heavy computing power and is not always accurate.

This segment of my projects is dedicated to determining the effectiveness of Notch for future use in the lab.

Continue reading

Constructing Microphone Arrays

Microphone arrays are powerful listening and recording devices composed of many individual microphones operating together in tandem. Many popular microphone arrays (such as the one found in the Amazon Echo) are arranged circularly, but they can be in any configuration the designer chooses. In our Augmented Listening Lab, we strive to make these arrays wearable to assist the hard of hearing or to serve recording needs. The microphones that make up our arrays are MEMS microphones in nice breakout boards as shown below:

When we place these microphones into an array, they all share the Bit Clock, Left/Right Clock, 3V, and Ground signals. All of the microphones share the same clock! Pairs of microphones share one Data Out line that goes to our array processing unit (in our lab we use an FPGA), and the Select pin distinguishes the left and right channels of each pair.
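To give a rough idea of how the FPGA turns one of those shared Data Out lines back into samples, here is a simplified SystemVerilog sketch of an I2S-style receiver for one microphone pair. The module and signal names are made up for illustration, and it glosses over details of real I2S timing (such as the one-bit-clock delay after the word-select transition), so treat it as a sketch rather than the code we actually use.

module i2s_pair_rx #(
   parameter int WIDTH = 18                 // data bits per sample (depends on the microphone)
) (
   input  logic             bclk,           // Bit Clock shared by every microphone
   input  logic             lrclk,          // Left/Right Clock, also shared
   input  logic             sdata,          // Data Out line shared by one pair of microphones
   output logic [WIDTH-1:0] left_sample,    // last complete word from the left-channel mic
   output logic [WIDTH-1:0] right_sample    // last complete word from the right-channel mic
);
   logic [WIDTH-1:0] shift_reg;
   logic             lrclk_d;

   always_ff @(posedge bclk) begin
      lrclk_d   <= lrclk;
      shift_reg <= {shift_reg[WIDTH-2:0], sdata};  // shift bits in MSB-first
      if (lrclk != lrclk_d) begin                  // word-select edge: previous word is complete
         if (lrclk) left_sample  <= shift_reg;     // channel mapping follows each mic's Select pin
         else       right_sample <= shift_reg;
      end
   end
endmodule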

The first microphone array I constructed used a construction helmet! The best microphone arrays take advantage of spatial area: the larger the area the microphones surround or cover, the clearer the audio can be. Sometimes in our lab we test audio using microphone arrays placed on sombreros, which offer a wide and spacious area. Another characteristic of good microphone array design is spacing the microphones evenly around the area. The construction helmet array I built had 12 microphones spaced around the outside on standoffs, with the wires kept on the inside.

Finally, we use a Field Programmable Gate Array (FPGA) to do real-time processing on these microphone arrays. SystemVerilog makes it easy to build modules that control microphone pairs and channels. FPGAs are best used in situations where performance needs to be maximized; in this case, we need to reduce latency as much as possible. In SystemVerilog we can describe hardware tailored to our specific application and declare the necessary constraints to make our array as responsive and efficient as possible.

Tutorial 1: Simple Accumulator

Introduction

In this first tutorial we will create a design that can increment or decrement a value when buttons on the FPGA board are pushed, displaying the value on the board's hex display. This will cover combinational logic design, sequential logic design, and how to interface with some of the peripherals on the FPGA, as well as how to load the design onto the board.

Sequential vs parallel programming languages

Solving this problem with a microprocessor that executes instructions sequentially (like an Arduino) is pretty trivial to do. The code might look something like this…

/* Minimal Arduino sketch (pin numbers are just an example) */
const int BUTTON1_PIN = 2;   // increment button
const int BUTTON2_PIN = 3;   // decrement button
int accumulator = 0;

void setup() {
   pinMode(BUTTON1_PIN, INPUT_PULLUP);   // buttons pull the pin low when pressed
   pinMode(BUTTON2_PIN, INPUT_PULLUP);
}

void loop() {
   if (digitalRead(BUTTON1_PIN) == LOW) {
      accumulator = accumulator + 1;
      while (digitalRead(BUTTON1_PIN) == LOW) {}   // wait for release
   } else if (digitalRead(BUTTON2_PIN) == LOW) {
      accumulator = accumulator - 1;
      while (digitalRead(BUTTON2_PIN) == LOW) {}   // wait for release
   }
}

Solving this problem in a hardware description language might not be so obvious. In SystemVerilog we are describing a digital circuit in which everything operates in parallel. In order to achieve the same functionality as the sequential code above, we have to combine a combinational logic circuit with a sequential logic circuit.

What is a combinational logic circuit?

A combinational logic circuit has a defined output for every combination of its inputs – it's similar in spirit to a math function. Let's consider a familiar math function: f(x) = x^2. This function has an output for any real value fed to it: f(0) = 0, f(2) = 4, f(3) = 9, f(3.14159) ≈ 9.8696, etc. In the digital world we can also have functions like this, but our inputs and outputs can only take the values 0 or 1. These functions are called Boolean functions, and they are made up of logic gates like AND, OR, NOT, NAND, etc. The link below covers the functionality of these logic gates with diagrams and truth tables.

http://www.ee.surrey.ac.uk/Projects/CAL/digital-logic/gatesfunc/

We can piece these logic gates together to create a combinational logic circuit and represent it with a function. Let’s create an XOR (exclusive or) circuit using AND, OR, and NOT gates. We will first create a truth table for XOR.

x  y | z
-------
0  0 | 0
0  1 | 1
1  0 | 1
1  1 | 0

This function's output is 1 when x is 0 and y is 1 (x'y), OR when x is 1 and y is 0 (xy'). If they are both 1 or both 0, the output is 0. We can write this function as z = x'y + xy'. The concatenation of two variables 'xy' represents the AND operation, the ' represents the NOT operation, and the '+' represents the OR operation. Our combinational circuit looks like this.
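The same Boolean function can be written directly in a hardware description language. Here is a minimal SystemVerilog sketch of the XOR circuit (the module and port names are arbitrary):

module xor_gate (
   input  logic x,
   input  logic y,
   output logic z
);
   // z = x'y + xy'
   assign z = (~x & y) | (x & ~y);
endmodule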

That’s pretty neat, but how do we apply this to the problem we are trying to solve?

So now we know how to create a logic circuit, but how would we make a circuit that can increment and decrement a variable? We can start by thinking about the inputs to this circuit: two buttons on the FPGA board, where one button signals an increment and the other a decrement. We also need to think about how to create something like a variable in hardware and how to control the values it takes on. More specifically, we need a way to remember the accumulator's current value and a way for the inputs to control the value it takes on next. To accomplish this we have to understand how sequential logic works.

What is sequential logic?

Unlike a combinational circuit, whose outputs depend only on its current inputs, a sequential logic circuit has memory: its outputs depend on the current inputs and on state that the circuit has stored. That state is held in elements such as flip-flops, which only update their stored value on a clock edge. By pairing flip-flops with combinational logic that computes the next state from the current state and the inputs, we can build circuits that remember a value over time, which is exactly what our accumulator needs to do.
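As a preview of how this comes together, here is a minimal SystemVerilog sketch of a register-based accumulator that increments or decrements when a button is pressed. The names, the 4-bit width, and the assumption that the buttons are active-high and already debounced are choices made for this example; the finished design in the repository linked below may differ.

module accumulator (
   input  logic       clk,
   input  logic       button_inc,   // increment button (assumed active-high and debounced)
   input  logic       button_dec,   // decrement button
   output logic [3:0] count         // value to show on the hex display
);
   // Delayed copies of the buttons for edge detection, so one press counts once
   logic button_inc_d, button_dec_d;

   always_ff @(posedge clk) begin
      button_inc_d <= button_inc;
      button_dec_d <= button_dec;
      if (button_inc & ~button_inc_d)
         count <= count + 1'b1;
      else if (button_dec & ~button_dec_d)
         count <= count - 1'b1;
   end
endmodule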

Finite state machine design

Transferring your design into functional code

https://github.com/juanjm2/I-Vault-Blog-Code/tree/master/Tutorial_1

Quartus Tutorial

Actual FSM Behavior in hardware