New AI-Powered Hearing App ‘HeardThat’ to Rise Above the Noise at CES 2020

Advanced machine learning algorithms turn a smartphone into a sophisticated hearing assistive device enabling clear conversations even in noisy places

December 05, 2019 11:30 AM Eastern Standard Time

VANCOUVER, Canada–(BUSINESS WIRE)–Singular Hearing, a developer of advanced audio solutions, will unveil its AI-powered hearing app, HeardThat, at CES 2020 in booth 31504 in Eureka Park. Harnessing the power of machine learning, HeardThat turns a smartphone into a sophisticated hearing assistant, tuning out background noise so that individuals with hearing loss can hear speech more clearly and engage in conversations they would otherwise have trouble following (view the HeardThat app video).

Hearing loss is a global epidemic: more than 466 million people worldwide are affected by some degree of hearing loss, and that staggering number is rapidly increasing. “Often the first step in helping people with a hearing problem is an in-ear hearing aid. However, the weakness of even the most sophisticated hearing aids is the challenge of separating speech from background noise. Hearing aids tend to amplify all sound, making it difficult to have one-on-one or group conversations in a noisy environment. It can be frustrating enough that a person with hearing loss may even avoid a social outing or public place,” states Bruce Sharpe, Founder and CEO of Singular Hearing. “Machine learning gives us the unique power and flexibility to solve this long-standing problem. We are passionate about putting it to use through HeardThat and providing new options for the millions of people who live with hearing loss and for their families, friends, and colleagues.”

HeardThat uses advanced machine learning algorithms to separate speech from noise. It listens to the noisy environment and delivers denoised speech to the individual’s Bluetooth-enabled hearing aid or other listening device via their smartphone. Sharpe concludes, “Machine learning algorithms require too much processing power to run on hearing aids or other small devices. By leveraging the smartphone, our HeardThat app is freed from hardware constraints and so can do much more. And because it is an agile and flexible software solution, HeardThat can be quickly and continually improved upon.”

HeardThat, a new generation of hearing assistive software, will be available on Android and iOS in Q1 2020.

Experience HeardThat in Person at CES 2020

Singular Hearing invites CES 2020 attendees to experience a moment away from the noise while still right in the middle of the show floor, at booth 31504 in Eureka Park, a perfect setting to test the groundbreaking HeardThat app.

CES registered press can email Janice Dolan to schedule a private demo and briefing with HeardThat Founder Bruce Sharpe.

For more information about HeardThat, please visit

About Singular Hearing

At Singular Hearing, we are passionate about solving real problems in new ways. We have deep expertise in machine learning, audio, and speech processing and are using that to create innovative products to help people hear better.

Our first product, HeardThat, turns your smartphone into a sophisticated hearing assistive device that brings out the conversation in noisy social situations.

Follow HeardThat on social media:

Instagram: @HeardThatApp
Twitter: @HeardThatApp
Facebook: @HeardThatApp
LinkedIn: HeardThat

Singular Hearing is a subsidiary of Singular Software, and is located in the beautiful city of Vancouver, British Columbia, Canada.


This sounds really interesting. It sounds like you have to point your phone toward the speaker. It’ll be interesting to see how it works with multi-person conversations.
In the product description they say that a standard HA is physically too small to accommodate AI processing, so they use the phone to do the processing. Maybe in the near future using one’s phone as processing hardware rather than having the processing done inside the HA will be common.
Then I guess all you’d need is a tiny device with a mic and speaker that would fit into your ear. The mic’s signal would be sent to the phone for processing, and then the processed signal would be sent back to the in-ear speaker. The concept sounds great. I guess we’ll see.


So long Roger Pen and Roger Select. We hardly knew ya. And who in their right mind would want to drop $900-plus on a Roger Pen or Select if the HeardThat app does the same thing for basically free on your smartphone? Looks like Phonak might have to drop prices real soon or go the way of the dodo bird.


It will be interesting to see how this app works with hearing aids, since it appears a smartphone (somehow) takes control of a hearing aid in a noisy situation. Cart before the horse?

Our first product, HeardThat, turns your smartphone into a sophisticated hearing assistive device that brings out the conversation in noisy social situations. Want to know more? Sign up below to be part of the beta testing program and to be informed when it will be released.

Sounds like a sales pitch.

I received an invite to test the beta version of this app today. So far I have only tested it around the house, but it does work. It will be interesting to see how it performs in places like crowded restaurants.

I applied to be a beta tester about a week ago. Heard nothing for a week. After submitting another inquiry via their web page, I got this reply:

“Unfortunately for you, we are starting with Android phones first. We hope to start iPhone testing in a few weeks”

LAG! They cannot avoid it!

I think the biggest problem all apps like this have to overcome on smartphones is processing lag, which is inherent to the type of operating systems our phones run.

How is it that our HAs can digitally process audio without any perceived lag? The digital signal processor is dedicated to only one thing: audio in / audio out … our hearing … it’s a very specific operating system that only processes the sound we need to hear.

Phone apps don’t get direct access to processors. Time on processors is scheduled by the operating system, which is a general-purpose operating system and not a real-time operating system, so even the best signal processing might reach our ears 100 milliseconds late, which sounds like an echo.

This is what would be needed by a phone:

mic-in > A/D convert > DSP (Digital Signal Processor) > D/A convert > headphones out

And all the above would need to be completely dedicated to ONLY the hearing app (OS gives the app sole use of that audio chain).
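As a rough back-of-the-envelope illustration, the buffering latency of a chain like that can be estimated from buffer size and sample rate. The buffer sizes below are illustrative assumptions, not measurements of any particular phone:

```python
# Rough latency estimate for a phone audio chain.
# Buffer sizes and stage counts are illustrative assumptions,
# not measurements of any real device.

SAMPLE_RATE = 48_000  # samples per second

# frames buffered at each stage of: mic in -> processing -> speaker out
stage_buffers = {
    "mic input buffer": 480,       # ~10 ms capture buffer
    "app processing block": 960,   # ~20 ms block handed to the app
    "output buffer": 480,          # ~10 ms playback buffer
}

def stage_latency_ms(frames: int, rate: int = SAMPLE_RATE) -> float:
    """Latency contributed by one buffer, in milliseconds."""
    return 1000.0 * frames / rate

total = sum(stage_latency_ms(f) for f in stage_buffers.values())
for name, frames in stage_buffers.items():
    print(f"{name}: {stage_latency_ms(frames):.1f} ms")
print(f"total buffering latency: {total:.1f} ms")
```

Even before any Bluetooth link or ML processing time, buffering alone in this sketch adds about 40 ms, and real phones often buffer more than this.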

So, if there is some magical phone app that can digitally process live audio and get it to your ears with so little delay that you cannot perceive it, I will be surprised, and hearing aid manufacturers will have a serious competitive problem.

I’ll believe it when I hear it! (and I will be glad, because I’ve wanted this for a while)

@gruuvinrob so you’re experiencing lag with the beta test app then?

I signed up but I haven’t received an invite (Android)…

gruuvinrob, I agree. I used the app in a loud, crowded place today where everyone was talking. It did a good job keeping the background noise down, and I could hear the person next to me just fine. But then the app would echo (loudly) what they said and what I said half a second later. The app would have been fine if it had only toned down the background and not tried to relay speech. If all the app does is tone down background noise, it doesn’t need to be perfectly real time. But the speech delay on this app is ridiculous. I was using a Samsung S10e, which is a reasonably quick phone, so I can only assume the lag would be worse on a phone with a slower processor.

Anyway, I reported the issue. It will be interesting to see what they say if they respond.

@jayste4 Wired or Bluetooth? …

How’s it going to do that? It still has to separate speech from noise or else it’s just like sticking a plug in your ears.

Btw, your Samsung might feel like a quick phone in its UI, but that doesn’t say much about how quickly it processes audio.

See Android's Bluetooth latency needs a serious overhaul - SoundGuys.

Mobile Phone Apps = everybody gets in line to share the processors

I’ve written software for real-time operating systems and general purpose operating systems. Example: things like robotics need very fast direct access to processors that are dedicated to capturing multiple sensor input, doing some complicated math and then generating a specific output based on the combinations of input, and doing these calculations thousands of times per second. That’s what we might call ‘real-time’ operating, because there is very little lag induced by a system that is not also trying to do a bunch of other things, like let other apps share time on the processors. It’s dedicated to doing only one thing and doing it well.

This audio stuff is similar. However, our phones are general-purpose computers, designed to run hundreds of programs concurrently. Compared to our HAs, our phones MUST have this big and heavy operating system that stands between the apps and the processing hardware, and it lets multiple apps share time on the processors by taking requests for every access and scheduling time. It’s effectively a queue that all apps use: every app spends time standing in line for its turn to get its math done. Not only does this mean apps only get a slice of the processor, it also means each cycle of work gets processed much later than it would on a dedicated system, due to the extra scheduling steps up front.
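The queueing effect described above can be sketched with a toy model (the quantum and task counts are purely illustrative, not real scheduler parameters): an audio task on a dedicated processor runs as soon as a block is ready, while on a shared, time-sliced CPU it may first wait behind every other runnable task:

```python
# Toy model of scheduler-induced delay (illustrative numbers only):
# on a dedicated DSP an audio block is processed immediately; on a
# shared, time-sliced CPU it may first wait one full time slice for
# each competing runnable task ahead of it in the queue.

TIME_SLICE_MS = 5.0   # assumed scheduler quantum
AUDIO_WORK_MS = 1.0   # time the audio task actually needs per block

def worst_case_delay_ms(other_runnable_tasks: int) -> float:
    """Worst-case time from 'audio block ready' to 'audio block done'
    when the audio task waits one slice per competing task."""
    return other_runnable_tasks * TIME_SLICE_MS + AUDIO_WORK_MS

print(worst_case_delay_ms(0))   # dedicated processor: 1.0 ms
print(worst_case_delay_ms(8))   # 8 competing tasks: 41.0 ms
```

The same 1 ms of actual audio work costs 40 extra milliseconds in this sketch purely from standing in line, and that penalty repeats for every block.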

Also, this is why we hear Bluetooth lag in some systems and not others. It’s not the fault of Bluetooth, but it’s the system that Bluetooth has to operate on (everybody gets in line).

In other words, this is not the fault of this AI app. I’m sure it’s a great proof of concept. This is the inherent nature of our phones, being devices designed to run hundreds of different programs concurrently.


If you look at the link I posted above, you can see that some phones have several times the latency of others. What that suggests to me is that there is a lot that can be done to the OS to make latency better or worse. I’m wondering whether latency has not been seen as much of an issue up to now. I mean if we’re talking just Bluetooth, there’s been that much inherent latency in A2DP that improving audio throughput in the phone itself may have seemed a little pointless. Perhaps with the future introduction of LE Audio and with the needs of hearing impaired people becoming less of an afterthought (hopefully) to phone manufacturers, we might see some improvement.

I’m still interested in knowing if @jayste4 has been testing wired or not.


There could be a lot of variance in tested latency, due to variance in people’s phone systems: how many system apps the phone always has running, how many apps the user has launched at a given time, which hardware platform the phone uses, whether it’s a newer architecture, etc.

It would be nice if phone manufacturers would give an app sole, dedicated access to a good audio DSP. However, since this particular app touts AI, that implies machine learning on the general-purpose processor or GPU. “AI-powered” will not happen on the DSP. It could be very effective on the GPU, but that data has to come back from the GPU and eventually be fed into an audio DSP by way of the CPU, so the main processors are probably still the bottleneck and where the lag will be.

So, yeah… gonna need a phone that is very powerful AND is doing very little else in the background, and even that, while great for a test, is not so great when someone is using their phone normally. Imagine doing processor intensive things on your phone, and while you do that your whole world sounds all echoey for a while.

The challenge for the app designers will be to get the OS to act more like a RTOS and less like a GPOS, for some apps, and then get their app to get priority task scheduling.

I think I’d like to get into testing this, wired.


I’m testing using BT only. The intent is/was to get this app working well with my BT-enabled hearing aids, which of course do not have a wired option. I’m not interested in testing wired.

You’re not testing the app then. You’re testing Bluetooth.

Indeed, the app makes one very aware of the shortcomings of BT. I emailed the developers and they responded with some tips.

Mitigating Bluetooth latency

Latency is the term used for the delay between when a sound is produced and when it arrives at your ears. Unfortunately, the current Bluetooth audio standard is not designed to minimize latency and it can add up to 0.2 seconds of delay depending on the phone and listening device. This can result in an echo in the sound which some people find bothersome.

The following options will let you reduce or avoid this problem.

The basics

  • Make sure the directional mode is chosen (Settings > Mode > Directional).
  • Make sure the bottom of the phone is pointing toward you. In Directional mode, this will ensure that your own voice doesn’t come through.
  • HeardThat works best when the phone is lying on the table (not held in your hands, for example).
  • Turn down the hearing aids’ own mics as much as possible. This will reduce the conflict between the audio from HeardThat and the hearing aids.

More effective, but less convenient

  • Don’t use hearing aids in noisy situations. Instead, use wired earbuds or headphones.
  • Continue to use hearing aids, but wear over-the-ear wired headphones connected to the phone.
  • Use an adapter box that supports low-latency Bluetooth. This can help a lot for earphones, but does not help with Bluetooth for hearing aids. If you’re interested in this option, ask us for details.
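The 0.2 s figure in the note above is easy to sanity-check. A delayed copy of speech is commonly said to become a distinct, distracting echo somewhere past roughly 30–50 ms (a rule of thumb I'm assuming here, not something from the developers). Summing plausible ballpark contributions:

```python
# Sanity check of the latency figures above, using rough public
# ballpark numbers rather than measurements: total delay is roughly
# app capture/processing plus the Bluetooth audio stack.

contributions_ms = {
    "app capture + processing": 40,    # assumed, per the discussion above
    "Bluetooth A2DP audio link": 150,  # typical ballpark for classic A2DP
}

ECHO_ANNOYANCE_MS = 50  # rough rule-of-thumb echo threshold

total = sum(contributions_ms.values())
print(f"total estimated delay: {total} ms")
print("perceived as echo" if total > ECHO_ANNOYANCE_MS else "likely unnoticed")
```

That lands right around the "up to 0.2 seconds" the developers quote, and well past the point where delayed speech reads as an echo, which matches what testers in this thread reported.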

I haven’t tried using wired headphones, nor will I. I suppose I was naive enough to believe this app might work some magic using a wireless connection, but avoiding latency is impossible with the tech I have.

For the times I want to hear music, tune out background noise, and stay aware of my surroundings, I have some nice Sennheiser 660 headphones that do that job perfectly.

@jayste4. Your hearing aids are Phonak or KS9 (Bluetooth Classic)? I’m really motivated to try this out with my Nuraphones. I’m hoping that their personalisation and isolation will be helpful. I’ve got an analogue cable ready to go. My usage scenario is ‘emergency’ communication in noise. I couldn’t see myself walking around with phone/cable/headphones all day.

I’ve said it before, but this type of app will come into its own when LE Audio becomes a reality.


I agree. (blah blah blah since I need 20 characters…)