Reckless Speculation: AI in Hearing Aid Technology

I believe the most objective way to compare Phonak’s directionality-only approach with Oticon’s DNN-based noise reduction is to listen to the Hear Advisor recordings and scores. For Speech in Quiet, Phonak scores slightly better, 4.8 versus 4.6 for Oticon. For Speech in Noise, however, Phonak only scores 2.3 compared to 3.7 for the Oticon Intent. I believe this shows that DNN-based noise reduction improves the SNR, at least for the situation used in this comparison. Obviously real-world situations vary considerably, and there will be situations where this could come out differently.
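For reference, here are those scores side by side:

| Condition | Phonak | Oticon Intent |
|---|---|---|
| Speech in Quiet | 4.8 | 4.6 |
| Speech in Noise | 2.3 | 3.7 |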

Oticon’s white paper on 4D Sensor technology and DNN states that “When the 4D Sensor technology is deactivated, an improvement of up to 1.5 dB (comparing the dotted blue curve and dashed grey curve in Figure 4) is mainly attributed to the updated DNN 2.0 on the new Sirius platform.” Figure 4 shows a maximum SNR improvement of approximately 8 dB. With the 4D Sensor technology included, an additional improvement of approximately 2.5 dB is shown, which I would attribute to beamforming, but I can’t be certain. So why does this graph only show a total of approximately 10 dB SNR improvement? I believe it is because Oticon’s default setting is a 10 dB maximum, but I have no information to verify this.

I guess we read what Phonak said differently then. You read it as “Phonak is saying THEY are not able to use DNN inside the hearing aid”, but I read it as Phonak saying NOBODY (including themselves) has been able to bring DNN denoising inside the hearing aid to market. That’s why I think our interpretations of the semantics differ.

Below is an excerpt from the Phonak paper at the very end of their conclusion, for reference.

The excerpt below, from earlier in their paper (before the conclusion, of course), is where Phonak shows the potential of a hypothetical application of DNN for advanced denoising in hearing aids. Note that it says it “offers a potential that CURRENT hearing aids (not ‘our’ hearing aids) do not have access to yet.”

Here is some information from a company that is working on DNN-based noise reduction technology for hearables (hearing aids, earbuds, and headphones) to significantly improve speech intelligibility in noisy environments. While they do not have any commercial products available yet, it appears that some may be released this year. They have a demonstration of road-noise reduction (a fairly easy case), but it is quite an impressive improvement.

The implications of this technology are game-changing for the hearing device industry. People with mild to moderate hearing loss will be able to improve speech understanding using simple OTC-type devices that should be relatively inexpensive and will not require an audiological exam or professional assistance. Anyway, go check out their site - I think you will find it impressive!

FEMTOSENSE

I believe that you’re correct in this case. The graph is showing an example of a user setting a maximum of 8 dB attenuation in the difficult environment. So the dotted blue line shows that if the 4D Sensor is disabled, attenuation will be around 2 dB in the easy environment, and the most difficult part of the difficult environment will push it up to the 8 dB maximum specified by the user. But with the 4D Sensor enabled, it can be anywhere between 6.5 and 10 dB, depending on the 4D Sensor’s interpretation of intent.

The 2.5 dB above the specified 8 dB max does not necessarily have to come from any beamforming, however, because remember that the Neural Noise Suppression can go up to 12 dB by itself, regardless of beamforming. Let’s say there is no head movement, and the 4D Sensor interprets this as concentrated listening, increases focus to the front, and raises the SNR of sounds in the front. This “directionality” doesn’t have to be front beamforming, and is likely not front beamforming, because front beamforming would tend to block out sounds from the sides and rear very aggressively, which goes against the Oticon open paradigm.

This directionality can easily be achieved through rebalancing: giving slightly more volume to the front sounds and less volume to the rear sounds, which is made possible by how the DNN has control of all sound components in the sound scene. This can automatically increase the SNR of the front sounds by 2.5 dB as a front-directionality focus without touching beamforming per se. The rebalancing results in a lessened presence of the side and rear sounds, but at least they are not aggressively blocked out as they would have been if real front beamforming had been used. So the open paradigm is still preserved: side and rear sounds can still be heard (instead of being blocked out by beamforming), albeit at a lower volume than the front sounds.
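To illustrate what I mean by rebalancing improving SNR without beamforming, here is a minimal sketch with made-up levels (my own toy numbers, not anything from Oticon’s paper). Since SNR in dB is just the level difference between the target and the competing sounds, boosting the front component by 2.5 dB relative to the side/rear components buys 2.5 dB of front SNR while the side/rear components stay audible.

```python
# Toy illustration only (assumed levels, not Oticon's actual processing):
# SNR in dB is the level difference between target and competing sounds,
# so rebalancing component levels changes SNR without any beamforming.

def snr_db(target_level_db: float, competing_level_db: float) -> float:
    """SNR is simply the dB difference between target and competing sounds."""
    return target_level_db - competing_level_db

front_speech_db = 65.0  # assumed level of the front talker
side_rear_db = 62.0     # assumed combined level of side/rear sounds

before = snr_db(front_speech_db, side_rear_db)        # 3.0 dB
after = snr_db(front_speech_db + 2.5, side_rear_db)   # 5.5 dB

# Turning the side/rear components down by 2.5 dB instead gives the same SNR
# gain while keeping them audible, which is the open-paradigm point above.
print(f"front SNR before: {before} dB, after rebalancing: {after} dB")
```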

To answer your very last question, this graph is only showing a 10 dB improvement with the 4D Sensor because it looks like the user has limited the max Neural Noise Suppression without the 4D Sensor to 8 dB. Had the user set that max to 10 dB, it would have shown 12 dB with the 4D Sensor. And if the user had chosen 12 dB max without the 4D Sensor, then the dotted blue line would approach 12 dB for the most difficult situation and there would have been no wiggle room left for the 4D Sensor to kick it up any further beyond the 12 dB ceiling.
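Here is a minimal sketch of how I’m reading Figure 4, using my own assumed numbers (the ~2.5 dB of 4D Sensor headroom and the behavior at the 12 dB ceiling are my interpretation, not anything Oticon has published):

```python
# Sketch of my interpretation of Figure 4 (assumptions, not Oticon's spec):
# the user picks a max Neural Noise Suppression (NNS) for difficult environments,
# and the 4D Sensor can add extra suppression on top, but never past the 12 dB
# ceiling of the feature itself.

NNS_CEILING_DB = 12.0     # the most the Neural Noise Suppression can ever apply
SENSOR_HEADROOM_DB = 2.5  # assumed extra suppression the 4D Sensor may add

def max_suppression_db(user_max_db: float, sensor_enabled: bool) -> float:
    """Maximum suppression reachable in the most difficult environment."""
    if not sensor_enabled:
        return min(user_max_db, NNS_CEILING_DB)
    return min(user_max_db + SENSOR_HEADROOM_DB, NNS_CEILING_DB)

for user_max in (8.0, 10.0, 12.0):
    without_4d = max_suppression_db(user_max, sensor_enabled=False)
    with_4d = max_suppression_db(user_max, sensor_enabled=True)
    print(f"user max {user_max} dB -> {without_4d} dB without 4D, {with_4d} dB with 4D")

# user max  8 dB ->  8 dB without 4D, 10.5 dB with 4D (roughly the ~10 dB in the graph)
# user max 10 dB -> 10 dB without 4D, 12.0 dB with 4D
# user max 12 dB -> 12 dB without 4D, 12.0 dB with 4D (no headroom left for the sensor)
```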

Hold on, what are you trying to compare?

I think it’s worth keeping in the back of everyone’s mind that there is currently no speech-in-noise ‘benchmark’ for hearing aids, and the noise-reduction numbers coming out of a specific manufacturer’s lab are specific to whatever artificial lab setup they are using to measure their device. As such, direct comparisons are a bit suspect, and translating those values to the real world, to less standardized hearing losses, and to less standardized listening conditions is also a bit suspect.

I applaud Oticon’s application of their DNN and I share the optimism about the future role of this type of AI in hearing aids, yet I also think someone above made a salient point about mature and immature technology. When the Opn came out, the functional difference in my patients’ experience felt pretty clear. Now, comparing the Lumity to the Real or the Intent, it is… less so.

I agree that laboratory results are based on the lab setup and may vary depending on the situation, which is why I indicated that real-world situations may vary. I will also say that I trialed both the Phonak Lumity 90 and the Oticon Intent 1 aids and found that the Intents provided better speech understanding in both quiet and noisy environments. Of course, that is only my perception, and numerous factors are involved that may make it come out differently for other people. I find that each one is better on certain items than the other, and ultimately each person needs to decide what is best for them.

Marketing and white papers aside, I can vouch for the speech-clarity-in-noise improvements in the latest Oticon Intents (which I am now trying out) over both of last year’s models from Oticon and Phonak. My hearing loss is just moderate overall, but I was increasingly struggling in noisy situations. The Intents are a big improvement. Bluetooth streaming hasn’t improved significantly, but that seems to be the state of the HA industry at the moment. However, as someone else pointed out, streaming isn’t primarily why one wears hearing aids, and until better miniaturization arrives, spotty streaming will be a reality. Hearing aids just don’t have the size and power to compete with earbuds built purely for audio streaming.


My general fear with “AI” hearing aids is a move from restoring what one would consider “normal hearing” toward “selective hearing” instead, favoring what the AI ascertains I want to hear. Hearing aids are already doing that to an extent, focusing on speech at the expense of everything else.

That’s an attitude you can afford to have up to a point. Then your hearing gets worse and ‘normal hearing’ recedes in the rear-view mirror.


With regard to Oticon’s brainhearing philosophy of leaving the listener (with hearing aids) to sort out what’s important from what’s not, my hang-up is that the speech-and-hearing part of my brain is damaged after 78 years of use, and I’m not sure it’s up to the sorting task that Oticon expects of it. Assuming that a damaged brain is up to that task is a critical assumption behind Oticon’s decision to give the listener access to all sounds in the soundscape, even after the sound has been cleaned up and rebalanced.

I get your point, @billgem . Hearing losses vary widely, and the brain’s ability to discern between various sounds also varies vastly between people, not just in ability but also in preference for hearing everything versus speech only.

I think when Oticon introduced the open paradigm back in 2016, they made a gamble that it would be welcomed by enough people for their line of aids designed around it to be sufficiently successful. Surely it’s not for everybody, and I don’t think Oticon is under the illusion that it is. But apparently they’re betting that it works for enough people, and that’s why they have kept sticking to this paradigm through the next four generations of aids (OPN S, More, Real, and Intent). If it had been a resounding dud, I’m sure they would have dropped it like a rock after just the first generation.

Also don’t forget that Full Directional is just a click away in any Oticon program if that’s what the user really prefers for their directionality setting. How aggressive the Oticon Full Directional mode is compared to other brands’ aids is another discussion, but at least the option is there at the user’s fingertip (literally, in the MoreSound Booster button available in the Oticon phone app).


Whisper failed. They thought people would be willing to carry the monstrosity called the Brain and use a large device with a 675 battery. They did have some really clever people, but it takes a lot of resources to make good hearing aids. They simply did not have enough money.

They probably sold the IP. Don was a very well regarded and respected audiologist; he probably was able to find someone willing to buy all the IP.

Everybody is talking about AI in hearing aids.

Nobody seems to think of the obvious: when are we going to see it in the fitting software?

When would we see, for example, an audiometer with a fitting assistant that can suggest proper masking, etc.? Yes, there are legal issues if the AI does all the testing.

But a fitting assistant using AI is obvious, yet no one seems to have thought of it.

If you have read the threads about Whisper that were on this forum, you might have come to a different conclusion. I was not under the impression at all that they ran out of money. They probably started the whole thing with the ulterior motive of demonstrating a viable enough technology to get a larger company to buy them out for the IP, a goal they did achieve.

In hindsight, it probably explains some of their more controversial choices, from using a “brain” unit people have to carry around, to a lease model. Many people (myself included) questioned the viability of these choices, but they fit with their ultimate goal of getting bought out.

I’m curious what Donald Schum ends up doing now… Probably retired with a boat load of money and sipping Mai Tai on a beach somewhere… :slight_smile:

That’s not their philosophy. Oticon hearing aids work really hard to re-process sound to remove background noise and improve SNR so that you can hear more clearly. They simply don’t do it with traditional beamforming, and they recognize that some non-speech sounds are still important. But the sounds are highly processed. If you’re looking for low sound processing, you usually want to look at Widex.

“Brainhearing philosophy” is not “let your brain do it”, it’s “use what we know of how the auditory system/brain works to try to create processing strategies for hearing aids that can best support those”. It’s a good philosophy.


Oticon’s Genie 2 has already had a Fitting Assistant feature in it for quite a while now. It gives you a list of issues to choose from and, based on your answers, suggests appropriate actions to remedy your issues, and it can carry out those actions itself if you allow it.

It’s not difficult to put AI into the fitting software, but the biggest aspect of it is the need to get users’ feedback in order to decide what action to take. The bigger obstacle in the fitting process is the prolonged and sometimes repeated in-office visits for adjustments with the HCP. The up-front fitting process, with the audiometry test, choosing the appropriate dome/mold, receiver sizing, etc., unavoidably needs to be done with an HCP’s help. But if the HA industry really wanted to, it could take things a step beyond remote fitting sessions and just put an AI into the phone app to ask users what their issues are, then automatically adjust the aids to try to alleviate or solve those issues. That would remove the need for repeated office visits after the initial REM and fitting.
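To make the idea concrete, here is a purely hypothetical sketch of what such an in-app assistant loop could look like (all names and the issue-to-adjustment mapping are made up for illustration; a real product would presumably rank suggestions with a model trained on fitting outcomes):

```python
# Hypothetical in-app fitting assistant sketch; nothing here comes from any
# real fitting software. The user reports an issue in the phone app, the
# assistant proposes a fine-tuning change, and the aids are updated remotely.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Adjustment:
    parameter: str    # which control to change
    change_db: float  # how much to change it, in dB
    band: str         # which frequency region

# Stand-in lookup table; a real assistant would learn/rank these from outcomes.
ISSUE_TO_ADJUSTMENT = {
    "own voice too boomy": Adjustment("gain", -3.0, "low"),
    "speech sounds sharp": Adjustment("gain", -2.0, "high"),
    "speech too soft":     Adjustment("gain", +2.0, "mid"),
}

def suggest(issue: str) -> Optional[Adjustment]:
    """Map a reported issue to a proposed adjustment, if one is known."""
    return ISSUE_TO_ADJUSTMENT.get(issue.lower().strip())

print(suggest("Speech sounds sharp"))
# -> Adjustment(parameter='gain', change_db=-2.0, band='high')
```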

Right now, a critical component of the success of the whole thing is the competence of the HCP in knowing how to make the proper adjustments to reach their clients’ satisfaction. A good AI could probably fill this gap nicely, so that we can remove what seems to be a HUGE source of variability in determining clients’ success.


They lost a ton of money. I knew someone who worked there who was promised many things and ultimately lost their job.

They probably recovered a small fraction of what was invested. Everybody knew this was going to be a flop from day 1. The instrument was big, to begin with.

The fitting assistant is similar to what they had back in the Epoq… There are many areas where AI could work well, from a masking assistant to analyzing datalogging, HI diagnostics, etc.

What you’re talking about has already been done; Widex has it with Widex MySound.

I think this clarification is very well put. Oticon is definitely NOT saying that they’ll just throw everything at people UNPROCESSED and that people will simply have to use brain hearing to fend for themselves and sort everything out. If that were really the case, Oticon would be the first line of aids to go out of business. But nevertheless, it’s still very helpful to make this clarification so that people don’t have any misunderstanding at all.

However, Oticon DID bring up and promote the concept of brain hearing themselves (they didn’t invent it, just brought it up) to assure people that once they can hear all the processed sounds that Oticon aids deliver to them (in a clear and balanced manner), they should not worry about lacking the ability to sort them out in order to focus on the speech, because they do have it if they want it. In a sense, it’s Oticon’s way of saying, “Don’t worry, we’ll give you a special cake such that you can have your cake and eat it, too. You don’t have to limit yourself to just eating the cream (the speech portion) like before. With our special cake, we make everything in the cake edible.”

The key differentiation to remember is that it’s NOT “you can use brain hearing without our help”, but rather it IS “you can use brain hearing again with our help.”


I have it on good authority from inside Whisper that money was in fact the issue.

Don Schum is apparently now working with Meta.