Oticon More "deep neural network" processing: Does it WORK?

I really don’t think Oticon uses any human labelers to determine which sound is more important than which. That would be a futile exercise in subjectivity in the first place anyway. For somebody who’s out on the street walking, maybe the sound of approaching cars is much more important for their safety. For another person nearby sitting at a bus stop waiting for a bus, they’d probably rather not hear approaching cars and consider it noise, at least while they’re sitting at the bus stop being glued to their phone doing whatever with it. But while they were walking to the bus stop just minutes before, the same approaching cars noise was probably very important for them to hear.

That is why Oticon attaches the brain hearing concept to go hand in hand with the open paradigm: it’s up to the HA wearer to use their own brain hearing to decide which sound is more important at any given time, so they can focus on it or ignore it. The only universally accepted rule is that speech is more important than other sounds, well, at least most of the time.

So as far as I can tell, the Oticon DNN whitepaper never mentions any training-data labelers or biases toward non-speech sounds. So there’s no worry about sounds being labeled with the wrong biases, except for a bias toward labeling speech. And even then, the user is given full control over their own level of bias toward speech via the controls in Genie 2.

2 Likes

Or, as my audiologist and the Oticon rep to the VA here put it, the paradigm is to enable our brains to once again function the way they were designed to for hearing, and not to work against the brain’s hearing functionality.

The objective function of the DNN can carry human bias too. Deciding what is speech and what is not can be biased; I’m not sure speech detection is so foolproof. So can choosing which variation of sound to elevate over others, and testing the DNN with human subjects to refine it introduces bias as well. And choosing which combination of speech and non-speech sounds listeners prefer can also be biased.

And I am quite sure there are human labelers for the training data, to help establish ground truth and to decide which version of the DNN comes closest to the objective function. It’s an unsexy, non-marketable side of AI, so I’m not surprised it isn’t mentioned.
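To make the bias point concrete, here is a minimal, purely illustrative sketch (not Oticon’s actual pipeline; the sound classes and weights are invented) of how human-chosen class weights in a training loss bake subjective priorities into a DNN. Whoever decides that "speech" costs more to get wrong than "wind" has made exactly the kind of upstream human call being discussed:

```python
# Illustrative only -- not Oticon's actual training code.
import numpy as np

# Hypothetical sound classes with human-assigned importance weights.
CLASS_WEIGHTS = {"speech": 3.0, "traffic": 1.0, "wind": 0.5}

def weighted_cross_entropy(probs, label, weights=CLASS_WEIGHTS):
    """Cross-entropy loss scaled by a per-class weight chosen by humans."""
    classes = list(weights)
    idx = classes.index(label)
    return -weights[label] * np.log(probs[idx])

# The same prediction confidence costs far more on "speech" than on
# "traffic", so training will push the network to prioritize speech.
probs = np.array([0.5, 0.3, 0.2])  # model's predicted distribution
print(weighted_cross_entropy(probs, "speech"))   # ~2.08
print(weighted_cross_entropy(probs, "traffic"))  # ~1.20
```

The point is only that the weights (and the labels themselves) come from people, so bias enters before the network ever sees a sound.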

I agree that we shouldn’t let perfect be the enemy of good. I am merely saying that bias exists somewhere upstream or downstream in the DNN lifecycle. I’m excited to test the Mores once insurance approves it, and I heard from the audiologist that disposable versions are coming out this August.

Well, you’re definitely entitled to your opinion. To me, it’s really irrelevant whether they actually have it or not, because even if they do, they’re not sharing it with anyone anyway. So to pose that question here in the forum (“how its training data labelers are verified to remove bias”) is futile, unless there’s actually an Oticon rep participating on this forum who’s privy to this type of information and willing to share it with us here. I’m not aware of any such person.

3 Likes

You’re both right. If it’s not open to us normies, then yes, it’s not worth debating over information that we lack.

I do feel that in the long term, as other brands release their own AI solutions, it will be harder to judge objectively which is better or worse, because the biases, metrics, and objective functions aren’t known; we normies will have to test, or trust the brand, to determine our future aids of choice.

1 Like

Sounds exactly like what we have to do anyway.

I wish you all would quit bumping this thread. Next thing you know I’ll end up with a pair or More 1s and return my Jabras. :open_mouth:

4 Likes

:joy:Hahaha! Good one, @jay_man2!

One of the problems here is the audiogram. It just doesn’t reveal much about auditory processing. Clearly there’s a lot more involved in speech recognition than the ability to hear high frequencies. Thus getting a hearing aid to compensate for hearing loss (which may involve more than the cochlea) by making the curve as flat as possible (and whatever else is currently being done) is crude and surely flawed.

My audiologist offers a one-month trial for $125 (I imagine to compensate him for his time to perform the REM).

My hearing loss “is not that bad” according to the audiogram. But how accurate is the audiogram in terms of representing an individual’s ability to discern speech? I suggest that it really doesn’t offer much data!

I recently came across an interesting study suggesting the following:

Plenty of people struggle to make sense of a multitude of converging voices in a crowded room. Commonly known as the “cocktail party effect,” people with hearing loss find it’s especially difficult to understand speech in a noisy environment.
New research suggests that, for some listeners, this may have less to do with actually discerning sounds. Instead, it may be a processing problem in which two ears blend different sounds together—a condition known as binaural pitch fusion.

I think you may have a point – it’s what an audiologist working for Cliff Olson (the YouTuber audiologist) told me, i.e. the fitting is much more important than the hardware.

1 Like

If he said this charge is exclusively for the REM, maybe you can suggest that he let you try the More 1 for free without doing any REM first. If you find that the More is not what you had hoped for, then it’s still been a free trial for you. If you like it enough to commit the $125 for REM as part of the trial (maybe two weeks in), then at least you have some grounds to invest the $125 in REM to make sure you’re completely happy with it. Should you decide to purchase it, you can negotiate ahead of time that the $125 you already paid for REM be taken off the purchase price.

As an IT professional, I don’t believe that there’s any DNN in the Mores that is able to learn new tricks. That would require computing power that is (currently) well beyond the capability of any chip able to be contained within a HA. Oticon may well have used an external DNN to sample a large range of sounds in order to refine the sound processing algorithms in their HA chips, but that’s an entirely different matter.
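For what it’s worth, the distinction being drawn here (train a big DNN offline, then ship fixed weights that only run cheap forward passes) can be sketched like this. This is a toy illustration with made-up weights and sound classes, not anything from Oticon’s chip:

```python
# Toy sketch: a DNN trained offline, then deployed with frozen weights.
import numpy as np

rng = np.random.default_rng(0)

# "Factory" stage: weights would come out of offline training on big
# hardware (random stand-ins here).
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)
W2, b2 = rng.standard_normal((3, 8)), np.zeros(3)

def classify(frame):
    """On-device work is a forward pass only: fixed multiply-adds,
    no gradient updates, no learning in the ear."""
    h = np.maximum(0.0, W1 @ frame + b1)   # ReLU hidden layer
    logits = W2 @ h + b2
    return int(np.argmax(logits))          # e.g. 0=speech, 1=noise, 2=wind

# The weights never change after deployment: the network can classify
# brand-new sound frames, but it cannot "learn new tricks" on the chip.
frame = rng.standard_normal(4)
print(classify(frame))
```

Running a frozen forward pass like this is orders of magnitude cheaper than training, which is why it is plausible on a hearing-aid chip while on-device learning is not.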

And I must say that my audiologist, who was willing to sell me Mores if I wanted them, is also sceptical and just regards the claim as further evidence of Oticon’s skilful marketing.

4 Likes

Well, @KiwiJohn, your jury of 2 had better tell that to friends that I’ve known for 20 or 30 years who’ve spontaneously asked me why my hearing is better today than at any other time that they’ve known me.

[I’ve been wearing skillfully-marketed Oticon More HAs since March 2 of this year.]

@SpudGunner I have no doubt about the improvement in your hearing. My doubt (and my audiologist’s) is simply about the More aids incorporating a deep neural network that can dynamically learn.

@KiwiJohn: Of course More cannot dynamically learn! And nothing published or promulgated by Oticon that I have read has even remotely suggested that it could.

This is just BS fabricated by Oticon naysayers to besmirch More and the concept of replacing engineers’ algorithms with sound sample “libraries” (that permit the dynamic comparison of sounds in real time).

[PS: No need for the “DNN for Dummies” link - if you’d care to give Oticon’s professional whitepaper links a fair reading and subscribe to Audiology Online, you’ll have a much better grasp of what the company has done with the technology.]

1 Like

To be honest, I don’t know the justification for the $125 fee.

Does it really make sense to try hearing aids without REM?

The problem isn’t always high frequencies; I lost mostly in the middle range, 1500 to 3000 Hz. This is a very hard range of frequencies to correct for, seeing that hearing aids are mostly designed for high-frequency loss.

There’s a possibility that the REM results may show your actual gain curve is already fairly close to your target curve, in which case very little adjustment would be done. In that case, paying $125 for just a tiny bit of REM adjustment is not going to make things significantly different for your trial anyway. With your hearing loss not terribly bad in the first place, the chance of the REM results requiring little adjustment is probably higher than for somebody with a more significant loss and more complicated fitting issues.

If you knew how much REM adjustment had to be made on your OPN 1 to match the target curve, that may give you an idea of how much REM adjustment you may need for the More 1.

If you trial the More 1 without REM and find some improvement but still find something a little lacking, yet deem the improvement worth investigating further, you can pay the $125 to have the REM done before the trial ends and see whether it gets even better. But if it blows you away even without REM, then you know it can’t get worse if you purchase it and have REM done, unless the actual gain curve overshoots the target curve and REM dials it down (a rarer case). In that case, if you prefer the result without REM, you can just have the REM adjustment undone.
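As a toy illustration of what that measured-versus-target comparison boils down to (all numbers invented, and the ±3 dB tolerance is just a common rule of thumb, not a standard):

```python
# Hypothetical numbers illustrating an REM check: compare the measured
# real-ear gain against the prescriptive target at each frequency.
freqs_hz  = [250, 500, 1000, 2000, 4000]
target_db = [10, 14, 20, 28, 32]   # prescribed insertion gain (made up)
actual_db = [11, 13, 22, 24, 33]   # probe-mic measurement (made up)

deviation = [a - t for a, t in zip(actual_db, target_db)]
# Rule of thumb: deviations within about +/-3 dB need little or no
# fine-tuning; larger gaps are the ones worth paying to adjust.
needs_tweak = [f for f, d in zip(freqs_hz, deviation) if abs(d) > 3]
print(deviation)    # [1, -1, 2, -4, 1]
print(needs_tweak)  # [2000]
```

In this made-up example only the 2000 Hz band is far enough off target to justify an adjustment, which is the "fairly close already" scenario described above.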

1 Like

Perhaps, but I am relying on the prescription of my audiologist. BTW, are you an audiologist?

@SpudGunner Thanks for confirming my doubt that Mores can dynamically learn. My first post was simply in response to gkumar who wondered if they could.