Oticon More "deep neural network" processing: Does it WORK?

Explain that to your Audiologist; it may be able to be adjusted for. I don’t have a dehumidifier or an air cleaner here. Our AC system is very quiet and I don’t hear it most of the time.

The only things that now disturb my hearing are loud exhaust and loud barking dogs. But I am sure that is also normal for everyone.

1 Like

And yet, despite all those improvements, my audiologist opines that the More 1 “is a slight improvement. Nothing groundbreaking”?!

If I could experience what @cvkemp states that the More 1s do for him, I would upgrade from my OPN 1s in a minute.

I look at your audiogram and it looks like your hearing loss is not too bad at all. So if you’re quite happy with the OPN 1s already, you probably won’t find leaps-and-bounds improvement with the More 1. Many people on this forum have reported as much (no wow factor for them going from the OPN or OPN S to the More). It’s not that the More technology is nothing groundbreaking, as your audi alluded to; it’s just that you may not be able to benefit as much from the More technology because your hearing loss is not that bad in the first place. Chuck (cvkemp), on the other hand, has a more challenging hearing loss, so he has benefited steadily going from the OPN ITE to the OPN S to the More. It’s different for everyone.

But if you’re not quite happy with the OPN 1s for some reason, especially in the area of speech clarity in noise, it may be worth the time to try out the More 1, as long as you can return it before your trial period ends. If nothing else, it will satisfy your curiosity and confirm that you’re not missing out on much.

1 Like

We are all different; your hearing loss is nothing like mine. I have been wearing Oticon aids for over 11 years now. This is my fifth set of Oticon aids, and I have worn the OPN1, OPNS1, and now More1 aids. My Audiologist and I have spent countless hours tuning my aids. We finally got the OPN1 aids set as well as possible for my hearing, only to discover they weren’t powerful enough; they were ITE aids, so the receivers couldn’t be easily changed. The OPNS1 are really good aids, but I did have some feedback issues. My Audiologist got it approved for me to have the More1 aids and to keep the OPNS1 as my backup aids. I am told the only time I should be without my aids is when sleeping and showering. My Audiologist transferred my OPN1 settings to the OPNS1 aids and made the needed fitting changes, then did the same type of transfer from my OPNS1 aids to my More1 aids and made the needed adjustments.
I am going to stick my neck out and say that my Audiologist is the key factor here for my success with the More1 aids.

@cvkemp: Chuck, I don’t think you’re sticking your neck out an inch. You’re exactly right.

Your other mantra about everyone’s hearing being unique is right on, too.

There’s no objective “best” hearing aid. It’s all subjective, IMO.

1 Like

I would like to understand:

A. how the DNN determines what is noise versus speech,
B. how its training data labelers are verified to remove bias,
C. whether the DNN is static or learns further over the lifetime of the hearing aid, and
D. the precision/recall/F1 scores of the DNN model (see the short metric sketch after this list for how those numbers are computed).
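For reference, here is a minimal sketch, with made-up counts, of how precision, recall, and F1 are computed for a hypothetical speech-vs-noise classifier. It is purely illustrative and not based on any published Oticon numbers.

```python
# Toy confusion counts for a hypothetical speech-vs-noise classifier
# (made-up numbers, only to show how the metrics are computed).
true_positives = 80    # speech frames correctly flagged as speech
false_positives = 10   # noise frames wrongly flagged as speech
false_negatives = 20   # speech frames missed

precision = true_positives / (true_positives + false_positives)   # ~0.889
recall = true_positives / (true_positives + false_negatives)      # 0.800
f1 = 2 * precision * recall / (precision + recall)                # ~0.842

print(f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```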

I’m willing to read technical whitepapers on these but I don’t see much information out there.

@gkumar: May I suggest that you create an account with Audiology Online and then subscribe to some of the talks by Dr. Donald Schum of Oticon. Many of your questions will be answered.

As @Volusiano has explained in various posts on the Forum, the More DNN is hardwired onto the DSP chip in the device - it is static.

Oticon’s professional information links on their website explain many aspects about the methodology used to train More’s DNN. The entire process is well-explained in an objective (non-marketing) way.

1 Like

@gkumar: This link is near the bottom of the main page …

Look for this …

Click on the “For Professionals” link.

1 Like

A. Oticon has voice detection technology that knows what speech sounds like. It’s not just the DNN; the technology already existed in the OPN and OPN S, and probably even before that. And it’s not only Oticon: I presume most HA brands can easily detect voices, and they do, because voice/speech has very specific characteristics that can easily be differentiated. As far as noise is concerned, whatever is not speech is considered noise.
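To make the “speech has very specific characteristics” idea concrete, below is a minimal sketch of a crude voice activity detector that flags speech-like frames using short-term energy and zero-crossing rate. This is a generic textbook heuristic, not Oticon’s actual detector; the frame size and thresholds are arbitrary assumptions.

```python
import numpy as np

def crude_vad(samples, sample_rate=16000, frame_ms=20,
              energy_thresh=1e-4, zcr_thresh=0.25):
    """Flag frames as speech-like via short-term energy and
    zero-crossing rate. A toy heuristic, not a production VAD."""
    frame_len = int(sample_rate * frame_ms / 1000)
    flags = []
    for i in range(len(samples) // frame_len):
        frame = samples[i * frame_len:(i + 1) * frame_len]
        energy = np.mean(frame ** 2)                        # loudness of the frame
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2  # fraction of zero crossings
        # Voiced speech tends to be energetic with a low-to-moderate ZCR;
        # broadband noise has a high ZCR, and silence has low energy.
        flags.append(energy > energy_thresh and zcr < zcr_thresh)
    return flags

# Example: white noise vs. a 200 Hz tone standing in for voiced speech
t = np.linspace(0, 1, 16000, endpoint=False)
noise = 0.05 * np.random.randn(16000)
tone = 0.1 * np.sin(2 * np.pi * 200 * t)
print(sum(crude_vad(noise)), "noise frames flagged as speech")
print(sum(crude_vad(tone)), "tone frames flagged as speech")
```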

B. I don’t understand this question. What is a training data labeler? What bias are you talking about here that needs to be removed? And why?

C. Once the DNN has been trained sufficiently and verified to produce good results, and once it gets loaded into the More 1 and released, it no longer gets trained further inside the actual hearing aid. But that doesn’t prevent Oticon from continuing to train and refine the DNN in their labs if they deem it necessary, and at some point shipping a better/newer version of the DNN via a firmware update.
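A rough sketch of that “train in the lab, freeze, ship” workflow is below. The model here is a trivial logistic regression standing in for the DNN, with invented features and labels, purely for illustration.

```python
import numpy as np

# --- "In the lab": train a tiny model offline (illustrative only) ---
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                 # pretend acoustic features
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # pretend "speech present" labels

w, b = rng.normal(size=4), 0.0
for _ in range(500):                          # plain gradient-descent training
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# --- "Firmware image": the weights are frozen and shipped as constants ---
FROZEN_W, FROZEN_B = w.copy(), b              # nothing updates these on the device

def on_device_inference(features):
    """Runs on the hearing aid: uses the frozen weights, never retrains.
    A newer firmware release would simply replace FROZEN_W / FROZEN_B."""
    return 1 / (1 + np.exp(-(features @ FROZEN_W + FROZEN_B)))

print(on_device_inference(np.array([1.0, 1.0, 0.0, 0.0])))  # high = speech-like
```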

D. Oticon publishes a whitepaper (see link below) that shows results for the More versus two other high-end competitor HAs. You can find all the details in there.

E. Below are the Oticon whitepaper on the More DNN and some other tech papers. They can easily be found in a Google search if you type in something like “Oticon More whitepaper”.

5 Likes

:+1:t2::astonished: @gkumar: There you go! @Volusiano has handed it to you on a platter!

It’s fascinating reading - especially for Oticon users.

1 Like

I’ve already posted these in the original thread (one of the first) that announced the Oticon More back in December 2020 or January 2021, but I’ll post these 4 YouTube videos here again in case you’re interested in learning more about how a DNN works.

These videos are generic and not made by Oticon for or about the More. But the principles are the same and fully applicable to how Oticon trained the More. Then if you watch the last video, which is an Oticon video about the More, you’ll recognize them talking about the same principles of training a DNN and back-propagating to improve the results, etc. Then everything will click much better for you.
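If you want to see what “training a DNN and back-propagating to improve the results” boils down to in code, here is a minimal two-layer network trained with explicit back-propagation in NumPy. It is a generic toy example with made-up data and my own hyperparameters, not anything from the Oticon videos or from the More itself.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 3))                                      # toy input features
y = (np.sin(X[:, 0]) + X[:, 1] > 0).astype(float).reshape(-1, 1)   # toy targets

W1, b1 = rng.normal(scale=0.5, size=(3, 8)), np.zeros(8)  # hidden layer
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)  # output layer
lr = 0.1

for step in range(2000):
    # forward pass: compute the network's current guesses
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))
    # backward pass: push the error back through the layers (back-propagation)
    d_out = (p - y) / len(y)
    dW2, db2 = h.T @ d_out, d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)        # derivative through tanh
    dW1, db1 = X.T @ d_h, d_h.sum(0)
    # update: nudge every weight in the direction that reduces the error
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("training accuracy:", np.mean((p > 0.5) == y))
```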

3 Likes

What I hear is so very much better than with any other aids I have tried. That goes for conversations, be it one on one or a number of individuals talking. Also, to me, music hasn’t been this pleasant since my 20s. And I can tell the difference between bird songs and even determine the different species of the birds that I know. And I went and tried something that I failed at at the age of 21 while in Navy boot camp: I can hear the difference between the dots and dashes of Morse code. It has to be slow, but it is better than not at all.

1 Like

Hi Volusiano,

With respect to point A I went to an EUHA exhibition in Germany in 2017 and I met a Phonak rep who told me that they were unable at that point to isolate what was a “voice” in terms of the incoming sound. He said they had 300 scientists working on the problem. Of course it is now 2021 and there may have been some discoveries since then, coupled with AI and an increase in computational power.

Thanks for pointing this out, @glucas. That’s interesting.

When I was doing the detailed review of the Sonic Enchant 100, which is a sister brand of Oticon under the William Demant umbrella, they were talking about detecting and analyzing speech signals as if it were the norm. I’m not talking about signal processing of the speech here, just detection and analysis, although they do process the speech, too, of course.

The new Philips HearLink 9030 also centers its core AI training around removing noise from detected speech.

The Oticon OPN has a Voice Activity Detector that operates in 16 frequency bands, so it can freeze the noise model from being applied in any of those bands in order to preserve speech if that speech is found in the surrounding areas and not just in front.
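As a purely illustrative sketch (my own, not Oticon’s implementation), that kind of per-band “freeze the noise model where speech is detected” behaviour could look something like this. The band count matches the 16 bands mentioned above, but the attenuation value is an assumption.

```python
import numpy as np

N_BANDS = 16
NOISE_ATTENUATION = 0.5          # assumed gain applied to noise-only bands

def apply_band_gating(band_levels, speech_detected):
    """band_levels: signal level in each frequency band.
    speech_detected: per-band flags from a voice activity detector.
    Bands flagged as containing speech keep full gain (the noise model is
    'frozen' there); the remaining bands get the noise attenuation."""
    gains = np.where(speech_detected, 1.0, NOISE_ATTENUATION)
    return band_levels * gains

levels = np.ones(N_BANDS)                    # flat toy spectrum
speech = np.zeros(N_BANDS, dtype=bool)
speech[3:7] = True                           # pretend speech energy in bands 3-6
print(apply_band_gating(levels, speech))
```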

But then all these companies are sister companies under William Demant, so they probably benefit greatly from being able to share any technology on voice activity detection.

I think the More probably takes it even a step further by being able to not just detect but isolate and rebuild the speech into a discrete component so that it can be more easily manipulated amongst other sounds that are also discretized.

2 Likes

What does the Oticon OPN do that any other aid wouldn’t do? I would like to know.

Thanks. I’ll watch and read these (disclaimer: I work in AI, but I’m not an expert like some here. So I’m often skeptical when people say AI is the silver bullet)

And with respect to the training data, you need to measure ground truth. It’s hard to know what ground truth is when humans subjectively want to hear certain sounds over others: do I want to hear a bird chirping or a human speaking? Do I want to hear car engine noise over distant music on the radio? In some cases, some sounds are more important than others. Beamforming is just a poor man’s manual override to help prioritize which sounds we want. I overuse beamforming when in a noisy restaurant or in the back seat of the car.

These subjectivities rely on human labelers of training data to determine which sound is which and which sounds are more important than others. The labelers themselves may have biases about what’s important, while the user may have a different preference. So the baseline or ground truth is loose, like shifting sand, and the DNN will amplify such biases. The hope is that the DNN’s learnings over a period of time overcome such biases.
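To illustrate the labeler point, here is a toy sketch (my own, not from any Oticon material) of how a majority vote over disagreeing labelers becomes the “ground truth” a DNN is trained on, which is exactly how any systematic leaning among the labelers gets baked into the targets.

```python
from collections import Counter

# Hypothetical labels from three labelers for the same five sound clips;
# "speech" vs "noise" is already a judgment call for borderline clips.
labels = {
    "clip1": ["speech", "speech", "speech"],
    "clip2": ["noise",  "noise",  "speech"],   # disagreement
    "clip3": ["speech", "noise",  "noise"],    # disagreement
    "clip4": ["noise",  "noise",  "noise"],
    "clip5": ["speech", "speech", "noise"],    # disagreement
}

def majority_vote(votes):
    """The most common label wins and becomes the training target."""
    return Counter(votes).most_common(1)[0][0]

ground_truth = {clip: majority_vote(v) for clip, v in labels.items()}
disagreement = sum(len(set(v)) > 1 for v in labels.values()) / len(labels)
print(ground_truth)
print(f"labeler disagreement rate: {disagreement:.0%}")
```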

Absolutely agree, @SpudGunner. @Volusiano, I’ve learned so much, and I hope that there’s a real-life, safe Meetup one day.

@gkumar: This is an excellent point, and well taken.

Of course, there are going to be all sorts of biases inherent in the early days. Perhaps, in the not-too-distant future, we’ll get to choose between the “Nature-lover’s More”, “Foodie’s More”, and “Hobby Machinist’s More”.

Perhaps we’ll even get to change learnings via an app!

But - for the time being - let not the perfect become the enemy of the good!

3 Likes

Most of those I have talked to at the VA clinic couldn’t care less about the technology involved in hearing aids. What they want is to be able to hear, but not hear their tinnitus. So many have PTSD, and hearing loss and tinnitus add to their PTSD in a horrible way. Over the years I have talked to close to a hundred, and none will agree on what sounds best for them: some only want to understand speech, and some are such loners they couldn’t care less about hearing anyone. For some it is music that helps them deal with their PTSD; for others it is the tinnitus going away that means the most. Others love family, friends, and talking things out.
What I am getting at is that what people want out of hearing aids is very personal. And as I have said so many times, you can have 4 people with the same audiogram and they will more than likely disagree on what sounds right to them and what aids work for them. And very few of those veterans even have smartphones or computers, or care to have them. I am sure different areas see different results, but so many of the veterans I have talked to just want a very simple life and don’t care about the technology, even nowadays.

2 Likes