How will AI design hearing aids so they work?

I’m frustrated.

My point is that I had a poor setup and couldn’t hear for a year and a half.

I think AI will help us all.

1 Like

It already is, with the faster design of aids and processors, better ways to manufacture, etc.

1 Like

https://www.starkey.com/hearing-loss-simulator/simulator

Fascinating

Below is the screenshot for the Oticon OPN and OPN S. The Analyze box up front scans and analyzes the incoming sound scene 500 times a second and creates a noise estimate that is used to remove the diffuse noise from speech in the Noise Removal box. The Balance box does the MVDR beamforming to attenuate noise sources.

For the More, see the screenshot below. The Scan and Analyze box is still there, doing the same thing as in the OPN, except that it now feeds the processed data into the DNN system. The Balance module of the OPN above is now called the Spatial Balancer and still does the MVDR beamforming to attenuate noise sources. It still does the scan and analyze 500 times per second like in the OPN, except that it was done on only 16 channels in the OPN and is now done on 24 channels in the More.

The Real has the same functionality as the More as far as the Scan & Analyze and the MoreSound Intelligence module is concerned.

Finally, the screenshot below is for the Intent. The scan and analyze functionality is buried inside the 4D Sensor module, so it’s not obvious anymore, but it’s there. You can follow the text in the right column of the screenshot to confirm this → “This new input, together with input from the level and SNR detectors, already known from existing version of the MSI, is analyzed, combined and used to determine how the remaining parts of the help system should process the sound scene.”

Anyway, the bottom line is that the scan and analyze functionality is a very basic, foundational building block for practically all digital hearing aids, from very simple aids to very sophisticated top-of-the-line aids. It didn’t become a big thing with the introduction of the DNN; it’s still the same basic, fundamental thing, just embedded among more new technologies, like how it’s embedded with the 4D Sensor in the Intent.
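For anyone curious what the MVDR beamforming in those Balance / Spatial Balancer boxes boils down to, here’s a tiny numpy sketch of the textbook MVDR weight formula w = R⁻¹d / (dᴴR⁻¹d). The 2-mic toy numbers below are mine for illustration only, not anything from Oticon’s whitepapers:

```python
# Minimal MVDR beamforming sketch for one frequency bin, assuming a 2-mic array.
# All numbers are illustrative; a real aid would estimate R from its ongoing noise estimate.
import numpy as np

def mvdr_weights(noise_cov, steering):
    """w = R^-1 d / (d^H R^-1 d): keep the target direction, attenuate noise."""
    r_inv_d = np.linalg.solve(noise_cov, steering)
    return r_inv_d / (steering.conj() @ r_inv_d)

d = np.array([1.0, np.exp(-1j * 0.3)])                    # assumed steering vector toward the talker
R = np.array([[1.0, 0.2], [0.2, 1.0]], dtype=complex)     # noise covariance from the noise estimate
w = mvdr_weights(R, d)
x = np.array([0.5 + 0.1j, 0.4 - 0.2j])                    # the two mic signals for this bin/frame
y = np.conj(w) @ x                                         # beamformed output for this bin
```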

1 Like

It’s indeed crazy how many sounds we are missing, as shown by this simulator. Even at “mild”, it’s already clearly unacceptable.

2 Likes

It causes more problems.

My expectation is that today’s work will be as successful as the first work done by an incredible audiologist (Lydia Kreuk). She is happily retired and has sold her business.

The results have been the opposite. I have much more trouble hearing with the hearing aids as they were set up by the provider. Furthermore, acquiring new replacement hearing aids has been painful and the results have been poor. Is this because poor hearing tests were done? Yes, when my current hearing aids were provided. And mistakes were made in the setup.

Thanks for posting those. Needless to say, the block diagrams are simplified. So while my positing a processing step doesn’t prove its existence, its absence from a diagram doesn’t prove its nonexistence either.

The blocks are connected by arrows that presumably represent the flow of sound. I have no idea what format it’s in. Different arrows in the same diagram might even correspond to different formats.

But I do know that you don’t do a lookup in a neural network by feeding in sound. My understanding is that you give it a set of features and corresponding values. I have no idea what features they use in hearing aids, or how they’re derived from a sound sample. But that work has to be done in the DNN blocks on the diagrams, and I see no justification for assuming that the resources used are negligible.
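To illustrate the “features” point: something has to turn a frame of digitized samples into a compact feature vector before any network sees it. Here’s a rough numpy sketch of one common approach (log band energies); the frame size and band count are my assumptions, not any manufacturer’s actual front end:

```python
# Rough sketch: convert one frame of samples into log band energies a small DNN could consume.
import numpy as np

def frame_features(frame, n_bands=24):
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2   # power spectrum
    bands = np.array_split(spectrum, n_bands)                             # crude band grouping
    return np.log10(np.array([b.sum() for b in bands]) + 1e-12)           # log energy per band

rng = np.random.default_rng(0)
frame = rng.standard_normal(128)       # stand-in for 128 digitized samples
features = frame_features(frame)       # 24 numbers instead of 128 raw samples
print(features.shape)                  # (24,)
```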

Your use of the word “sound” as highlighted above is vague. For digital hearing aids, as with most audio devices nowadays, as soon as the incoming analog sound is sampled and digitized into bits, the “sound” becomes digital “data” and no longer takes the form of an analog sound.

So it’s not just the DNN: all of the processing modules inside the hearing aids deal with and process digital information. Only the A2D (analog-to-digital) and D2A (digital-to-analog) converters at the input and output stages of the hearing aids deal with analog sound.
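As a minimal illustration of that A2D step (the 16 kHz / 16-bit format here is an assumption for the example, not any specific aid’s spec):

```python
# Sample and quantize an "analog" waveform; everything downstream works on these integers.
import numpy as np

fs = 16000                                            # assumed sample rate
t = np.arange(0, 0.01, 1 / fs)                        # 10 ms of time
analog = 0.5 * np.sin(2 * np.pi * 440 * t)            # a 440 Hz tone standing in for sound
digital = np.round(analog * 32767).astype(np.int16)   # A2D: now it's just data
reconstructed = digital.astype(float) / 32767         # D2A end of the chain
```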

And nobody said that the resources used to implement the DNN are negligible, only that they can be managed by the advances in silicon chips to date. The discussion only revolves around how much compute is required to implement the DNN, because AI implementation by nature requires extensive resources. The answer to that, as figured out by HA mfgs and countless other companies doing AI-based solutions, is to separate the training from the implementation of the DNN. By doing that, they’re able to move the bulk of the compute upstream, behind the scenes, into the labs for the training phase, where they’re not constrained and can throw an enormous amount of compute at the training. That’s the hard part, the training, just as training is the hard part for the real biological human brain.

While implementing the trained DNN is certainly not negligible in terms of compute, it is still dwarfed by the compute required for the training. The advances in silicon manufacturing can easily handle the DNN implementation, because the condensed smarts are in the form of the already-trained weights, and managing that kind of data is a problem long since solved in the digital world. It’s not the vast incoming or outgoing data, just the condensed, intelligent data derived from the AI training that resides in memory. That data is manageable enough that it can even be delivered in software if desired (as with Tesla’s Full Self-Driving software).
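To put a rough, made-up number on how small the “condensed smart” can be once training is done, here’s back-of-envelope arithmetic for a hypothetical small fully connected network; the layer sizes are invented purely to show the scale:

```python
# Count the parameters of a hypothetical small DNN and their storage footprint.
layer_sizes = [24, 64, 64, 24]                                        # made-up layer widths
params = sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))
print(params)                          # 7320 weights and biases
print(params * 1, "bytes at 8-bit")    # ~7 KB if quantized to 8 bits
print(params * 4, "bytes at 32-bit")   # ~29 KB as plain floats
```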

2 Likes

I have a much less technical question. Say I’m driving and the background noise is relatively constant (engine, tires). I could tell my HA to eliminate the background noise now. I can then turn on my radio and the HA should be able to distinguish this new sound from the previous background noise and selectively allow it to pass through. I’m sure aids can do this automatically to some degree, but the difference would be telling them explicitly when the sound that you want to hear is present or absent.
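One classical technique that behaves roughly like “learn the constant background noise now, then remove it” is spectral subtraction: capture a noise profile while only the steady noise is present, then subtract it from later frames. This is a simplified sketch, not how any particular hearing aid actually implements it:

```python
# Spectral subtraction sketch: learn a noise profile, then subtract it from new frames.
import numpy as np

def noise_profile(noise_frames):
    """Average magnitude spectrum of frames captured while only the steady noise is heard."""
    return np.mean([np.abs(np.fft.rfft(f)) for f in noise_frames], axis=0)

def subtract_noise(frame, profile):
    spec = np.fft.rfft(frame)
    mag = np.maximum(np.abs(spec) - profile, 0.0)                      # remove the learned noise floor
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=len(frame))
```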

I am wearing Oticon Real 1 aids. I have a Jeep Grand Cherokee and don’t even notice road noise with my default program. We always have the satellite radio playing, and I can carry on a conversation with my wife as I drive, as long as I know she is talking to me. I can even have the windows open if I’m by myself or if we aren’t interested in listening to the radio, and I don’t even notice the wind noise of having the windows open. And I can ride my Vespa scooter with my aids at normal volume while wearing my helmet, without the wind noise bothering me; it’s quieter than not wearing my aids.

1 Like

I’m not aware of any hearing aid currently on the market where you can explicitly tell it what you do and don’t want to hear. That would almost require (correct) identification of all sounds in the current sound scene, plus a menu in the phone app to let you pick and choose. Also, any complicated sound scene that deserves such attention will most likely be so dynamic that your selection might be valid for only a few seconds before you have to pick and choose again. It can become a tiresome game of Whack-a-Mole where intrusive sounds keep popping up that you want to whack out. It’s just not a practical feature to offer users, in my opinion.

The simplest and probably most effective solution is to emulate the user’s original ability to hear sounds, like before they had their hearing loss. This way, all sounds are presented so that they don’t compete with each other for your ears, and you can pick and choose what you want to hear and what you want to ignore (instead of what you want to suppress). I believe this is why Oticon started promoting the open paradigm as the path they want to follow.

This is the most natural solution after all. It is seamless and requires no manual intervention through an interface; it simply relies on the brain hearing that everybody already possesses biologically to do the selective focusing and suppression of sounds.

1 Like

I think we agree.

What dome should I use on my Phonak Audeo Paradise P90 hearing aids?

AI says:

When it comes to selecting the right dome for your Phonak Audeo Paradise P90 hearing aids, it’s essential to consider comfort, fit, and your specific hearing needs. Let’s break it down:

  1. Dome Types:
  • Phonak offers various dome styles, each serving different purposes:
    • Open Domes: These allow natural sound to flow into your ear canal and are suitable for mild to moderate hearing loss. They provide comfort and ventilation.
    • Closed Domes: These offer more sound isolation and are ideal for moderate to severe hearing loss. They enhance bass and reduce feedback.
    • Power Domes: Designed for severe to profound hearing loss, these provide a snug fit and better low-frequency amplification.
    • Tulip Domes: These combine the benefits of open and closed domes, providing a balance between natural sound and isolation.
    • Custom Molds: For a personalized fit, consider custom-made molds created from impressions of your ear canal.
  2. Criteria for Changing Domes:
  • Dome Wear and Tear: Replace the dome if it’s torn or if it easily comes off.
  • Hygiene: Regularly clean the dome to prevent wax buildup.
  • Auditory Changes: If you notice a change in hearing, consider changing the dome.
  • Recommendations: Follow your audiologist’s advice on dome replacement intervals.
  3. Cerushield Wax Filters:
  • These tiny filters protect your hearing aid from wax and debris.
  • Change them when you notice reduced sound quality or every 2 to 3 months.
  • Some users may not detect the need for a filter change, so follow the recommended schedule.
  • Handle them with care during replacement.

Remember that individual preferences and ear anatomy play a role. Consult your audiologist for personalized guidance on selecting the right dome for your Phonak Audeo Paradise P90 hearing aids.

Widex claims to have used AI with their Moment HAs as follows:

  1. My Widex HAs are connected to the Moment app on my iPhone, which is set up for data over the mobile network.
  2. I ask the Moment app for hearing setting recommendations in my current situation.
  3. A sample of sound from my current situation is sent over the internet to Widex’s AI computer.
  4. The Widex AI computer analyses the sound and sends back 2 recommendations for improving hearing to the Moment app.
  5. I trial each recommendation and either select one or ask for more recommendations. If the latter, steps 3, 4 and 5 are repeated until I accept a recommendation.

All quite practical in its use of AI, as my HAs are too small to fit an AI processor. Rather mundane and not technically exotic, but a sensible way to use AI to help the HA user.
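For what it’s worth, those five steps amount to a simple request/response loop. Here’s a sketch of that flow; the endpoint, payload fields, and function names are all hypothetical, and only the flow itself (send a sample, get two candidate settings back, pick one or ask again) comes from the description above:

```python
# Hypothetical sketch of the cloud recommendation loop described above.
import json
import urllib.request

def get_recommendations(sound_sample_bytes, url="https://example.invalid/recommend"):
    """Steps 3 + 4: send a sound sample, receive candidate settings (fields are invented)."""
    req = urllib.request.Request(url, data=sound_sample_bytes,
                                 headers={"Content-Type": "application/octet-stream"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["recommendations"]

def choose_setting(sample_bytes, user_prefers):
    """Step 5: trial each recommendation; repeat until the user accepts one."""
    while True:
        a, b = get_recommendations(sample_bytes)
        choice = user_prefers(a, b)        # returns the accepted setting, or None to ask again
        if choice is not None:
            return choice
```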

1 Like

Widex indeed has a very smart way of utilizing AI. In a way, it is a real-time system that queries users’ preferences and uses them to enhance the system.

It’s possible that they send an actual sound sample wirelessly back to their system. That’s quite feasible: people can easily listen to compressed digital audio like Spotify or iTunes over the internet, so it wouldn’t be an issue to send a real sound scene sample back to Widex.

It’s also possible that they send back only the condensed, analyzed result of the sound scene sample instead of the actual raw digital audio data, simply so as not to tax your cellular data usage too much, because the condensed data would be much less information than the raw digital audio that would otherwise have to be sent.
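Some rough, assumed numbers to show why the condensed result would be so much smaller than the raw audio (the sample rate, clip length, and feature counts below are illustrative only):

```python
# Compare the size of a raw audio clip with a condensed feature summary of it.
seconds, fs, bytes_per_sample = 5, 16000, 2
raw_bytes = seconds * fs * bytes_per_sample     # 160,000 bytes of raw 16-bit audio
feature_bytes = 100 * 24 * 4                    # 100 frames x 24 bands x 4-byte floats
print(raw_bytes, feature_bytes)                 # 160000 vs 9600
```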

And it’s smart to get users involved in giving their own feedback on the A/B recommendations back to Widex. That is the most valuable feedback information they can get from users. It also keeps their AI training behind the scenes, where it should be. The other advantage of this setup is that they continue to get real-time feedback from users to keep improving their decisions about how to process the sound scene → basically, how best to tweak the knobs on the sound-processing parameters to get the best results according to the users themselves, because they’re the ultimate judges of how things sound to them. It’s almost as if there’s an HCP live behind the scenes who can continuously and remotely adjust the hearing aids’ parameters to the users’ satisfaction.

3 Likes

In my mind, hearing aids in the future will have enough built-in artificial intelligence that phone apps will be obsolete. You would simply tell them what you would like them to do, similar to Google Assistant, and direct them to recognize some situations automatically and respond appropriately.

I chose driving my car as an example because the background noise there seems pretty consistent. A future hearing aid with AI should be able to recognize the typical engine and tire noise automatically when I begin driving, and also recognize my radio or my wife speaking as sounds to be emphasized over the background noise.
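Conceptually, that kind of automatic behavior would be a scene classifier driving per-class gains, something like this hypothetical sketch (the class labels, gains, and placeholder classifier are mine, not a description of any shipping aid):

```python
# Hypothetical per-class gain routing driven by a scene classifier.
import numpy as np

CLASS_GAIN_DB = {"engine_tire_noise": -12.0, "speech": 4.0, "radio": 2.0}

def classify_frame(frame):
    """Placeholder for a trained scene classifier; would return one of the labels above."""
    return "speech"

def process_frame(frame):
    gain_db = CLASS_GAIN_DB.get(classify_frame(frame), 0.0)
    return frame * 10 ** (gain_db / 20)            # apply the class-specific gain

out = process_frame(np.zeros(128))                  # stand-in for one block of samples
```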

2 Likes

For me, it would be to just put my aids on, forget about them, and let them do what needs to be done.

1 Like

We are all looking forward to the day when hearing aids are that smart, but in reality they aren’t yet.

To me, the ultimate hearing aid is one that lets me hear almost like a normal person, not one that is subject to my wishes and commands. That’s because a wish may apply in one situation but not in others.

For example, say I tell the hearing aids to suppress all road noise. Then one time I’m not driving but walking along the sidewalk next to a busy road, or maybe riding a bike, and I forget to cancel the road-noise-suppression wish; now I’m no longer aware of road noise when I should be, to watch out for traffic.

Of course that’s just one small example. I might be in a very noisy restaurant and tell the hearing aids to only pick up the voice in front of me. But then my waiter sneaks up behind me and starts asking me questions, and I would be oblivious to the fact that the waiter is behind me, talking to me.

Nor do I wish to be in a cocoon all the time and only have selective hearing from the hearing aids. I don’t think I’d be doing my brain hearing any favors by artificially keeping myself in a world of sound isolation like that. I’d rather hear everything well, like a normal-hearing person, and use my brain hearing to tune in to what I want to hear and tune out what I don’t. Otherwise, I’m enabling the hearing aids to handicap my brain hearing into total dependency on an artificial world of sounds instead of the real world of sounds.

Environmental sound awareness is much more important to me than selective awareness, because to me that’s the essence of life in the world of hearing: to be able to, and to want to, hear everything around me.

2 Likes

I agree. I want aids that I don’t have to think about at all, all day long.

Me too. As it is, I’ve fallen down a rabbit hole.

1 Like