How it works

 

HeardThat takes a new approach

 

While many attempts have been made to address the long-standing problem of understanding speech in noise, none has yet offered a reliable solution. Machine learning, however, is a new approach that has produced the first major breakthrough in many years.



What is the difference between the traditional approach and machine learning? It comes down to this. What’s true for humans is also true for computers: it's hard to precisely define what noise is, but you know it when you hear it.

 
 
 
 

Traditional methods try to define noise mathematically so it can be identified and reduced. The problem is that, to keep the mathematical models manageable, too many simplifying assumptions have to be made, so the results can only go so far.
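To make that concrete, here is a minimal sketch of one classical technique, spectral subtraction, written in Python. It assumes, among other things, that the noise is roughly stationary and can be estimated from a short stretch where nobody is speaking; assumptions like these are exactly what hold the traditional approach back. (This is an illustration only, not code from HeardThat or any other product.)

import numpy as np

def spectral_subtraction(noisy, n_fft=512):
    # Split the signal into frames and look at its frequency content.
    frames = noisy[: len(noisy) // n_fft * n_fft].reshape(-1, n_fft)
    spectra = np.fft.rfft(frames * np.hanning(n_fft), axis=1)
    magnitude, phase = np.abs(spectra), np.angle(spectra)

    # Simplifying assumption: the first few frames contain only noise, and
    # that noise keeps the same average spectrum for the whole recording.
    noise_estimate = magnitude[:5].mean(axis=0)

    # Subtract the assumed noise spectrum; clip at zero so magnitudes stay
    # valid (this clipping is a source of "musical noise" artifacts).
    cleaned = np.maximum(magnitude - noise_estimate, 0.0)
    return np.fft.irfft(cleaned * np.exp(1j * phase), axis=1).ravel()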

In contrast, machine learning listens to a lot of speech, a lot of noise, and a lot of speech in noise until it learns the difference. This is more like how humans learn to separate speech from noise: from examples.

Machine learning separates speech from noise

 


It’s still a hard problem, but now the computer is doing the heavy lifting. To create HeardThat, years of research went into training neural networks on thousands of hours of recorded speech and noise, so that they learn what is useful speech and what is just noise.
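HeardThat’s actual networks and training pipeline are not public, but one common way this kind of learning is set up is mask-based enhancement: a network looks at the spectrogram of noisy audio and estimates, for each time-frequency point, how much of it is speech. The minimal PyTorch sketch below, with a deliberately tiny illustrative network, shows the idea; a real system would train it against clean speech over those thousands of hours of examples.

import torch
import torch.nn as nn

N_FFT = 512
N_BINS = N_FFT // 2 + 1  # frequency bins per STFT frame

# A small illustrative network that predicts, for each time-frequency bin,
# how much of the noisy signal is speech (0 = pure noise, 1 = pure speech).
mask_net = nn.Sequential(
    nn.Linear(N_BINS, 256),
    nn.ReLU(),
    nn.Linear(256, N_BINS),
    nn.Sigmoid(),
)

def enhance(noisy_waveform: torch.Tensor) -> torch.Tensor:
    """Mask the noisy spectrogram to keep speech, then convert back to audio."""
    window = torch.hann_window(N_FFT)
    spec = torch.stft(noisy_waveform, N_FFT, window=window, return_complex=True)
    magnitude = spec.abs()                      # (bins, frames)
    mask = mask_net(magnitude.T).T              # per-bin speech estimate
    cleaned = spec * mask                       # keep speech, discard noise
    return torch.istft(cleaned, N_FFT, window=window)

# Training (not shown) would compare enhance(noisy) against the matching
# clean speech across a large set of mixed speech-and-noise examples.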

 
 
 

For the reasons described above, a traditional speech assistive device is often a blunt instrument. Either it amplifies everything, including the noise, or it tries to cancel the noise but ends up reducing the speech too. 

Traditional denoising suppresses all sound - noise and speech

 


HeardThat has more precision. It doesn’t suppress noise; it separates the noise from the speech and discards it, leaving just speech with enhanced intelligibility.

HeardThat discards noise and leaves the speech

 

Explore how HeardThat works in noisy surroundings.
(Don’t hear the audio? Try viewing in a desktop browser.)

HeardThat demo: compare the original sound with HeardThat.

 
 
 

Wearable devices are unable to provide the resources to do this job on their own: they lack the necessary memory, battery life, and processing power.

But there’s good news: we already possess the device we need. Our smartphones have more than enough power, and they are fully capable of running the machine learning algorithms.

 

Rather than creating a new device, HeardThat makes the hearing devices you already own work better, simply by using your smartphone. (Note: HeardThat does all its processing right on the phone. No audio is sent to the cloud.)
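As a rough illustration of that on-phone flow, and only an illustration: the sketch below assumes the Python sounddevice library and a placeholder denoise function standing in for the neural network. Audio is captured from the microphone, cleaned locally, and played straight back out to wherever the hearing devices are connected; nothing is sent to a server.

import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16_000
FRAME = 320  # 20 ms frames keep latency low enough for conversation

def denoise(frame: np.ndarray) -> np.ndarray:
    """Placeholder for the on-device network that separates out the noise."""
    return frame  # a real model would return the speech with noise discarded

def callback(indata, outdata, frames, time, status):
    # Runs entirely on the phone: microphone in -> model -> hearing devices out.
    outdata[:] = denoise(indata)

# Full-duplex stream: input from the microphone, output to the audio route
# the hearing devices are connected to (for example, Bluetooth).
with sd.Stream(samplerate=SAMPLE_RATE, blocksize=FRAME, channels=1,
               callback=callback):
    sd.sleep(10_000)  # process audio for ten seconds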