Using Notch for Low-Cost Motion Capture

This semester, I was fortunate to be able to toy around with a six-pack of Notch sensors and do some basic motion capture. Later in the semester, I was asked to do a basic comparison of existing motion capture technology that could be used for the tracking of microphone arrays.

Motion capture is necessary for certain projects in our lab because it allows us to track the positions of multiple microphones in 3D space. When recording audio, the microphone locations are usually fixed, with known relative positions. Those known spacings allow us to determine the relative location of an audio source using triangulation.
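To sketch why known microphone spacing matters: with synchronized microphones at known positions, the time differences of arrival (TDOAs) of a sound at each microphone constrain where the source can be. Below is a minimal 2D example. The microphone positions, the source location, and the brute-force grid search are all hypothetical stand-ins for our actual pipeline; a real system would measure TDOAs by cross-correlating the recorded signals.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, roughly room temperature

def tdoa_residuals(source, mics, tdoas, ref=0):
    """Difference between measured TDOAs and those predicted for a candidate source."""
    dists = np.linalg.norm(mics - source, axis=1)
    predicted = (dists - dists[ref]) / SPEED_OF_SOUND
    return predicted - tdoas

def locate_source(mics, tdoas, grid_lim=3.0, step=0.05):
    """Brute-force grid search for the 2D point whose predicted
    TDOAs best match the measurements (least squares)."""
    xs = np.arange(-grid_lim, grid_lim, step)
    best, best_err = None, np.inf
    for x in xs:
        for y in xs:
            cand = np.array([x, y])
            err = np.sum(tdoa_residuals(cand, mics, tdoas) ** 2)
            if err < best_err:
                best, best_err = cand, err
    return best

# Hypothetical square array of four microphones, 0.5 m apart
mics = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])
true_source = np.array([1.2, -0.8])

# Simulate the TDOAs a real recording would measure
dists = np.linalg.norm(mics - true_source, axis=1)
tdoas = (dists - dists[0]) / SPEED_OF_SOUND

estimate = locate_source(mics, tdoas)
print(estimate)  # close to the true source position
```

If the microphones move, as with a wearable array, the same calculation requires knowing the microphone positions at every moment, which is exactly what motion capture would supply.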

For a moving microphone array, the position of each microphone (and the spacing between them) must be known in order to do correct localization calculations. Currently, our project lead Ryan Corey is using an ultrasonic localization system, which requires heavy computing power and is not always accurate.

This segment of my project is dedicated to determining the effectiveness of Notch for future use in the lab.

User Interface

Notch is used with an application available for iOS devices. The user interface is quite clean and easy to understand. The biggest pain point I had with the instructions was some basic spelling mistakes, including a missing "s" when the app instructed the user to turn on a specific number of notches. This was confusing simply because I thought I had to turn on one specific notch instead of X notches total.

Pairing notches is a relatively easy process. Each notch has an embedded multi-color LED that communicates its status and also signifies where to place that notch on the body. Before a user can capture motion, a display shows where each colored notch is supposed to go. The app will often ask you to calibrate the notches, a simple procedure that takes no more than 30 seconds.

In my experiments, I did run into an issue with non-real-time capture. Sometimes, when the last notch was downloading its data to the iPad, the app would crash. I could not determine whether this was a bug in an individual notch or in the application, as it was not repeatable across days of testing.

The process to capture motion is very straightforward.

For non-RT capture, you can select the duration of the capture and the frequency of readings from two drop-down menus. Once the notches are in place on the body, you press a button that instructs the wearer to remain steady while the notches initialize. After this five-second period, the capture begins, and a progress bar along the bottom of the screen fills in proportion to the desired capture time.

After the capture is complete, the data is downloaded from each notch to the iPad, with another progress bar that displays how much data has been received. Additionally, as the data is downloaded, the transmitting notch will flash blue.

Below are two videos showing the motion capture, as well as the resulting model.

It should be noted that the right arm was not properly captured, which may have been the result of a faulty download. A key highlight is the excellent motion capture of the head, which would be extremely beneficial when recording while wearing the sombrero array or any other head-mounted array.

For RT capture, selecting the frequency works the same way, but no duration is needed. The application has an option to automatically download the capture data after the capture ends. Below is a video of RT capture.

Again, the right arm has trouble, which may come from a calibration error.

External Uses

As mentioned before, Notch would assist by providing 3D spatial information about microphone positions during audio recordings. Using the model and the motion data captured during the recording, third-party software could produce near-exact position measurements for post-processing of the audio.

This would be done in two steps. First, a model of the individual wearing the notch sensors would be created within the Notch system. I used the default model, but personalized models can be created by specifying measurements for each of the "Bones" that are displayed. A "Bone" is a white section of the model that represents a major section of the body: hand, head, torso, hips, upper arm, forearm, thigh, calf, and foot.

When the captured model is exported into third-party software like Unity, we can attach objects to the model that represent microphones. Then, when we play back the capture, we can export the locations of those microphone points with high accuracy, as long as the capture represents the actual motion correctly.
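The idea of attaching a microphone to the model amounts to applying a fixed offset in a bone's local frame to that bone's per-frame pose. Here is a minimal sketch, assuming the export gives us each bone's world position and orientation (as a unit quaternion) per frame; the bone name, offset, and frame values are hypothetical, and a Unity project would do this with its own transform hierarchy rather than by hand.

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def mic_world_position(bone_pos, bone_quat, mic_offset):
    """World position of a microphone rigidly attached to a bone,
    given the bone's world pose for one frame."""
    return bone_pos + quat_rotate(bone_quat, mic_offset)

# Hypothetical single frame: head bone 1.6 m up, rotated 90 degrees about z
frame = {
    "head": (
        np.array([0.0, 1.6, 0.0]),  # bone position (m)
        np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)]),  # quaternion
    ),
}
mic_offset = np.array([0.1, 0.0, 0.0])  # mic 10 cm along the head's local x-axis

pos, quat = frame["head"]
mic_pos = mic_world_position(pos, quat, mic_offset)
print(mic_pos)  # the offset rotated into world coordinates, added to the bone position
```

Running this over every frame of a capture would yield the microphone trajectory needed for the localization calculations described earlier.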

At this time, I have not been able to construct the second half of this system, but look forward to working with the team in the future in order to bring low-cost motion capture to the research group.

A quick comparison…

The ease of use and relatively low cost ($399) of this system make it a very attractive option for motion capture. Excluding any systems that require full-body motion capture suits, I found two other consumer products currently available: Shadow and Xsens.

Unfortunately, neither provides pricing details without requesting a quote for a specific set. I can say that Shadow requires the user to wear the full 17-sensor setup, which includes a number of wires and looks cumbersome. Xsens provides industry-standard motion tracking and capture for many different fields, including automotive, flight, and defense. When looking for the price of its human motion capture systems, I found that a full-body sensor kit would run over $10,000.

All three systems allow captures to be exported in a number of formats that can be used by most 3D software, like Unity and Blender.

A final important note about Notch sensors is that they are extremely durable and waterproof. Although I cannot envision using our microphone arrays underwater, it is cool that the product itself is well built.

For these reasons, I recommend that our research group purchase one six-pack of Notch sensors in order to determine whether its 3D spatial tracking is equal to or better than the current system.



