Oticon More "deep neural network" processing: Does it WORK?

So, the jaded realist in me has to ask: anyone here have any REALLY good experience with Oticon More aids? I read here (article dated on my bday, so I feel like it’s relevant to ME, LOL) that the Oticon More’s “deep neural network” has like 12 million+ real-life sounds that it can work around to help a user just UNDERSTAND what the person talking next to them is actually saying?

But after having read these kinds of bluebirds-and-rainbows promises for decades, I just want an honest answer from a current user.

I’m thinking of trading up my Phonak Marvel aids (now 3 yrs old) for the latest and greatest in the SPEECH-in-NOISE kind of hearing aid. Granted, I’d like other basic things that folks with normal hearing take for granted (dynamic range of music, hearing on any kind of phone out there, how about understanding folks while hiking in single file on a windy day?).

Not a fan of rechargeable aids tho, as I’ve been in situations where I’d need to carry backup aids that use a 13 battery.

Any feedback would help a LOT as I’m going to get in to see my audi in the next few weeks. :thinking:

1 Like

I sure believe that, the way Oticon has it set up, it is working great.
The network is preloaded with about 12 million different sounds, and I find the aids to have a lot better clarity than the OPN S1 aids. I have been in a number of noisy locations and can still carry on a conversation with a table full of people. And I can also understand people with masks on as long as they aren’t really soft-spoken. But I do have trouble with fast-talking people.
The aids don’t have the capacity to add to the network; it is preloaded only.

^^^ Hmmm… you’ve piqued my curiosity! I’ve worn both Oticon and Phonak aids exclusively the past 15 yrs, but the Oticon OPN is what DASHED my confidence in that brand for 6 years now. I could simply never adapt to their philosophy of “Let’s utterly overwhelm a STONE DEAF person with a bazillion noises and let their own stoopid, aging brain figger out what’s being said, heh-heh!”

So I switched to Phonak (Audeo, now Marvels) and find that this is a manufacturer of aids that has ME - the stoopid, slow-brained, STONE DEAF person in mind alright! The sound quality is exactly what I like in a pair of aids: rich on the lower freqs, enough crispness on the high end to help process speech, but granted, if I’m sitting in a noisy place at a table of several people, well… that kind of situation is still a Holy Grail for hearing aid performance.

But every 3-4 years, I read up on the latest models, go to my audi, order the aids, await them with the excitement and anticipation of a kid on Christmas… and then when the aids come in, first thing I notice is they sound HORRIBLE. And thus begins the endless process of follow-on visits where I (mere human) tell the MACHINE (smarty-pants hearing aid and the programming software) what kind of listening experience I actually want as opposed to what’s recommended for my loss curve.

1 Like

I have worn Oticon aids for the past 11 years and absolutely love hearing everything that I possibly can. I can now hear my wife and others talking behind me. I can now locate where sounds are coming from. I feel so much better walking the trails, and walking in a store, because I can hear someone coming up behind me. I can be in a meeting or a lecture and don’t have to turn totally around to hear someone behind me. Yes, I still have issues with some words and always will.

Just a clarification that the More DNN is not really preloaded with 12 million different sounds. It’s actually 12 million different sound SCENES. Sound scenes are various sound “environments” which can be simple or complex, indoors or outdoors, with maybe one or two sounds to dozens of sounds going on.

And the sound scenes are not preloaded into the More. There’d be no room to load that kind of data anyway. What they did was take a special “globe” device that has microphones pointing in all different directions in 3 dimensions. Then they recorded a sound scene (not sure for how long), then another, up to 12 million sound scenes.

Then in the lab, they created an initially very crude deep neural network model. This is the secret sauce here. We don’t know what the neurons in this deep network represent, how wide each layer is (how many neurons it has), or how many layers there are in the network. And each neuron is assigned weights and biases that can basically be mathematically manipulated.

The DNN is basically a network where the sound scenes are input into it one at a time and cranked through the initially assigned weights and biases associated with each neuron (which represents “something”, probably some kind of detector and/or analyzer and/or spectral representation that breaks down the input sound scenes at the various levels of the network -> just my guess here). The output is a crude “recreation” of the sound scene that was input, but initially this outcome is probably very different from what it should be (the reference sound scene). So they feed this difference back into the network and mathematically manipulate the weights and biases of each neuron to minimize the difference between this crude outcome and the reference (good) outcome. Through this cycle, the network is trained and improved.
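To put that cycle in plain math (standard machine-learning notation here, nothing Oticon-specific): if $f_\theta$ is the network with all its weights and biases collected into $\theta$, $x$ is the input sound scene, and $x_{\text{ref}}$ is the reference it should reproduce, then each cycle measures an error and nudges the weights downhill:

$$L(\theta) = \lVert f_\theta(x) - x_{\text{ref}} \rVert^2, \qquad \theta \leftarrow \theta - \eta\,\nabla_\theta L(\theta)$$

where $\eta$ is a small step size. “Feeding the difference back” is that gradient step: every weight and bias moves a little in the direction that shrinks the error.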

Then millions of various sound scenes get sequentially fed in, and through each of these cycles the network gets trained, via the process described above, to be more and more accurate over time, such that eventually it gets good enough, or maybe pretty darn good.
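For anyone who’d rather see code than math, here’s a toy sketch of that loop in PyTorch. Everything in it is my invention for illustration (the layer sizes, the random stand-in data, the assumption that each scene serves as its own reference); none of it is Oticon’s actual design:

```python
import torch
from torch import nn

# Toy stand-in for the DNN; the real layer counts/widths are unknown.
model = nn.Sequential(
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 256),
)
loss_fn = nn.MSELoss()                    # measures the "difference"
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

def fake_sound_scenes(n):
    """Placeholder for the recorded scenes: random 256-dim frames."""
    for _ in range(n):
        yield torch.randn(1, 256)

for scene in fake_sound_scenes(10_000):   # one scene at a time
    reference = scene                     # scene doubles as its own target
    output = model(scene)                 # forward pass: crude recreation
    loss = loss_fn(output, reference)     # compare to the reference
    optimizer.zero_grad()
    loss.backward()                       # propagate the difference back
    optimizer.step()                      # adjust weights and biases

torch.save(model.state_dict(), "trained_dnn.pt")  # the "smart" weights
```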

Oticon then loads this final “trained” network, with all the smart weights and biases for each neuron that have been highly fine-tuned through each cycle, into the More chip. Now you can feed it any new real live sound scene (other than the 12 million that were captured), and this smart network knows how to manage that sound scene accurately. In the process, the network can also break down all the sounds of any sound scene into various sound components, which allows the network to manipulate them much more easily to achieve a “balanced” sound scene in accordance with the user settings, and in accordance with the classification of that sound scene by the network (simple or moderate or difficult, etc.).
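And the “load the trained network into the chip” step is, conceptually, just freezing those weights and running new scenes forward through them, with no further learning in the field. Continuing the hypothetical sketch above (the real chip obviously doesn’t run PyTorch; this is only the concept):

```python
import torch
from torch import nn

# Same toy architecture as the training sketch above.
deployed = nn.Sequential(
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 256),
)
deployed.load_state_dict(torch.load("trained_dnn.pt"))  # weights from training
deployed.eval()                       # freeze: inference only from here on

with torch.no_grad():                 # no weight updates in the field
    new_scene = torch.randn(1, 256)   # a scene outside the training set
    processed = deployed(new_scene)
```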

Sorry to be long winded, but hopefully this gives a glimpse into how the nuts and bolts are handled in the More DNN, albeit still at a very high and abstract level at that.

7 Likes

That really is what I meant

2 Likes

YE GODS! That may be an exhaustive explanation, but MUCH appreciated! Now I get it (at least more than half an hour ago). It seems the crude neural networks of sound scenes are tinkered with, put on the chip but … don’t tell me Oticon is going to have the last laugh on ME again, expecting my own STOOPID brain to comprehend simple speech in a sound scene that adds MORE noise and confusion to the reality I’m muddling through!

I dunno. It seems so simple a request to just get aids that would help one distinguish, comprehend, know what a person is saying in perhaps a variety of settings (noisy, windy, not acoustically optimal).

2 Likes

GEEZ. If I’d had your experience with the OPNs I’d have been one happy camper. For the agonizing 9 mos I wore them, I could not understand ONE THING a person said facing me if there was any kind of “sound scene” with more than a whisper. That included stores, doc’s office, outside, inside, well, ANYWHERE. I finally gave up and chalked it up to how I must be processing sound from the world around me. Would I LOVE a pair of aids that make everything as effortless as you describe in all environments! :upside_down_face:

Let me say this: the More aids are great for me because I want all of the sounds in my environment that I can hear. They aren’t for everyone. I do volunteer work at the VA Audiology clinic, and the Audiologists say the More aids are working for everyone who adapted to the OPN/OPNS aids, but a lot who are used to other hearing aid brands aren’t adapting that well to the More aids.

1 Like

^^^ OK. GOT IT! Now I can rest in peace … and perhaps pursue the latest Phonak model.
Much obliged!

If you struggled before with the OPN and found much relief in the Phonak Audeo and then the Marvel, I think you’ll fare better with the Phonak Paradise than with the Oticon More. The More is just a more advanced extension of the OPN in terms of the open paradigm, and the open paradigm won’t necessarily work for everyone. Folks who need a very high SNR to understand speech and benefit more from the blocking of noises (probably like yourself) will find the Phonak line works better than the Oticon line.

With the Oticon line, you need to be able to accept lots of sounds at once, and even with help from the HA to clarify speech for you, your brain hearing still needs to do the work of processing and filtering out what you don’t want to hear and focusing on what you want to hear. With HAs like the Phonak, you get more help from the HA to block out the noise so that your brain hearing doesn’t have to do this work. Yeah, you don’t get to hear the noise like the Oticon folks get to (and want to) hear it, but you probably don’t care to hear it anyway if it tends to overwhelm you most of the time in the first place.

So while the More DNN works, it’s still not for everyone. It just works better than the OPN. For some folks, it’s only marginally better than the OPN. For others it’s leaps and bounds better. Yet for others, it doesn’t work for them just like the OPN didn’t work for them.

5 Likes

I loved my Oticon S-1’s, they were great, but I needed to upgrade due to more loss in my low frequencies. The Oticon More 1 didn’t do it for me, it just didn’t give me a rounder, fuller sound, and I struggle in loud environments, so I am trying the Phonak Naida Marvels - and boy, they make a big difference in how I hear low frequencies. I am a fan and will stick with these even though they cost more than the Oticon More 1’s. That’s just my two cents.

We all have different hearing loss and different needs, it is all about what works for you. The More1 aids work so wonderfully for me, the Phonak Marvel didn’t. I didn’t like the closed directional feeling I got with the Phonak aid. I love the open surround sound that I get with my More1 aids. Like I have said before the open sound isn’t for everyone.

1 Like

I have to agree with you. I think it’s the open soundscape that I couldn’t get used to.

@1Bluejay, in my experience (I had Phonaks, Oticon S, S1 and now More1), the More’s have a more sophisticated way to handle sounds and are able to give you a more realistic soundscape that reproduces more sounds around you instead of blocking out non-speech sounds.

For example, with the S1’s there was no fan noise from the refrigerator (it must have been suppressed, I guess, since it wasn’t speech). With the More 1’s, there is a fridge fan noise! At the beginning, your brain over-focuses on it, but over time it fades away. If you focus on it, it’s still there. Now, you might wonder “why do I want to hear the fridge fan?”… the answer is, maybe you don’t, but you do get a more complete awareness of what is around you….

Now, with the Phonaks there were situations where they would focus on someone’s voice and drop everything else out. That was sometimes good, sometimes bad. In the car, over the road noise, I could pretty much hear well. That was good! If someone spoke to me from behind in a noisy situation… it was like they weren’t there. Bad in social situations. Now, at least, I know someone is talking, and I turn around to either respond or ask them to repeat….

Just to second what @Volusiano said… if you like the focus on voices and that helps more than hearing other things, you might be happier with the Phonaks.

At least that is my experience….

Good luck.

5 Likes

There isn’t a right or wrong answer for everyone; there is only what is right for the individual, you. In my volunteer work at the VA clinic I have noticed patients with almost identical audiograms who totally disagree on what hearing aids work for them. There can be many reasons for it: the amount of time spent without hearing aids when they were needed; the environments they live, work, and just enjoy being in. It also seems to have to do with other things like allergies and sinus issues, in some cases other illnesses, and even other experiences - think about their military service experiences. So there is no need to argue about the best or the worst; only state what works and doesn’t work for you, and let everyone determine what is best for their needs and desires.

4 Likes

This raises the question: what is the neural network being trained to do? (i.e., what is a “reference good outcome” against which the output of the neural network is judged?) Reproduce voices amidst various kinds of noise?

1 Like

Of course this is highly proprietary information for Oticon, and naturally even the whitepaper doesn’t go into this level of detail.

But if I had to make an educated guess: the captured data is 12 million sound scenes, so each sound scene has to be used as the input as well as the golden reference to compare the DNN output against, because after all, that’s really all they have, the 12 million recorded sound scenes.

So my guess is that the DNN is trained to break down the sounds in a sound scene into discrete sound components, then rebuild the sound scene as accurately as possible compared to the original sound scene that was fed in as the input. But this time the “rebuild” is no longer one aggregate like the original; it is reassembled from the discrete sound components instead, recreating the appropriate volume levels, the localization of all the sound components, and whatever else is involved.

Again, this is only a guess on my part; it’s not what Oticon has revealed. They’re naturally very tight-lipped on this. But it seems consistent with the idea that they want to be able to break down, isolate/localize, and identify sound components in a very discrete way, so that they can manipulate and rebalance the sound scene any way they want using these sound components (per the user’s settings in Genie 2): to prioritize speech, yes, but also to ensure that all the sound components can be heard if listened to, in order to fulfill the promise of the open paradigm.
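If that guess is right, the objective looks a lot like what machine-learning folks call an autoencoder: pull the scene apart into components, then rebuild the original from them. Here’s a hypothetical sketch to make the idea concrete; the class name, component count, frame size, and per-component gains are all invented by me, not anything from the whitepaper:

```python
import torch
from torch import nn

FRAME = 256      # made-up audio frame size
COMPONENTS = 8   # made-up number of discrete sound components

class SceneAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # "break down": map the scene into COMPONENTS separate tracks
        self.decompose = nn.Linear(FRAME, COMPONENTS * FRAME)
        # per-component gains: the handle for rebalancing the scene later
        self.gains = nn.Parameter(torch.ones(COMPONENTS))

    def forward(self, scene):
        parts = self.decompose(scene).view(-1, COMPONENTS, FRAME)
        # "rebuild": weighted sum of the components back into one scene
        return (self.gains.view(1, -1, 1) * parts).sum(dim=1)

model = SceneAutoencoder()
scene = torch.randn(1, FRAME)                        # stand-in sound scene
loss = nn.functional.mse_loss(model(scene), scene)   # input is its own reference
```

The appeal of a structure like that is exactly what’s described above: once trained, turning one component’s gain down would rebalance the scene without muting it, which is the kind of handle a fitting program like Genie 2 would want.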

Back to the DNN training: this “rebuilt” sound scene (the output of the DNN) is compared against the original/golden/reference sound scene, and the differences between the two are measured. Then this difference data is propagated back into the DNN, and the neurons’ weights and biases are mathematically and recursively re-adjusted to values that minimize the differences that were back-propagated. Then the whole thing gets propagated forward again to arrive at an outcome with the minimal difference against the golden reference, although in the beginning, without enough training data having gone through yet, the differences are probably still quite large because the DNN is not very fine-tuned yet.

Then the next sound scene is fed in, and the process above is repeated. You do this enough times, and eventually the neurons’ weights and biases get tweaked enough (but hopefully to a lesser and lesser degree each time) to generate an accurate enough outcome in the end for all the training data.

However, if your original DNN structure is not well designed in the first place, the result is still no good in the real world. One version of the DNN design may be trained and proven to be very accurate on the training data, but when fed “unknown” data (which is basically real-world data), it’s still not accurate. Such is the case in the leftmost graph in the screenshot below. This DNN design version is too specific to the training data but still performs poorly on the unknown data, and is therefore not acceptable.

Another version of the DNN design may never become very accurate no matter how much you train it (because it’s a bad design in the first place). In this case (the middle graph in the screenshot below), it ends up not being very accurate on either the training data or the unknown data. So it’s considered a design that’s too ambiguous and is no good either.

The whitepaper mentions that Oticon actually had to play around with several different versions of the DNN design, and went through a testing phase to scrutinize them and find a design that would be adequately accurate on both the training data and the unknown data, and chose that as the final version of the DNN design. This would be the third graph, on the right of the screenshot below.

Needless to say, this implies that not all 12 million sound scenes were used to train the DNN. Part of them were used for training, and the remainder were used for testing as the “unknown” data.
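In code terms, that last point is just the standard train/test split, and it is also how you’d tell the three graphs apart. A toy sketch, reusing the hypothetical SceneAutoencoder class from the sketch above (the split ratio and the thresholds are arbitrary numbers of mine):

```python
import torch

scenes = [torch.randn(1, 256) for _ in range(1000)]   # stand-in dataset
split = int(0.8 * len(scenes))
train_set, test_set = scenes[:split], scenes[split:]  # test = "unknown" data

model = SceneAutoencoder()   # class from the sketch above

def avg_loss(data):
    with torch.no_grad():
        return sum(torch.nn.functional.mse_loss(model(x), x).item()
                   for x in data) / len(data)

train_err, test_err = avg_loss(train_set), avg_loss(test_set)

if test_err > 10 * train_err:
    print("overfit: great on training data, poor on unknown (left graph)")
elif train_err > 0.5:
    print("underfit: poor everywhere, design too crude (middle graph)")
else:
    print("generalizes: acceptable on both, keep it (right graph)")
```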

5 Likes

@Volusiano: This is a really good explanation of how Oticon trained their DNN.

AudiologyOnline has available a good talk by Donald Schum of Oticon entitled: “The Audiology of Oticon More” that goes into greater detail, for those who are interested.

My experience with wearing Oticon More since March 2 is that it’s highly discriminant: it recognizes speech and amplifies those sounds. It recognizes noises, and attenuates those. It allows sounds that are recognized as neither speech nor noise (squirrel chitter, babbling brooks, appliance signal beeps, etc) and permits them to occupy the sonic space between speech and noise. [These are the sounds that most makers have decided to minimize, along with noise.]

The result, according to my perception of sound, is that the speech-to-noise ratio is sufficiently high to permit my brain to decode the damaged signal transmitted to it by my More-aided ears and actually understand what’s being said.

My explanation is not technical: it just represents my subjective experience of the process that @Volusiano has so succinctly described.

As indicated by Chuck @cvkemp, the results may not suit everyone - TMI.

2 Likes

I am very surprised to read what you write about low frequencies. I wear the Opn1, I have a worse loss than yours in the low frequencies, and with the More I understand words in noise much better; I have a custom 105 dB mold. And with Phonak I hear less distinctly. And the audiologist told me that a RIC was enough for my loss. It seems to me that the Naïda is for profound losses; it surprises me that you need to wear one.