Oticon introduces Oticon More

Hmmm, sometimes More is less!
I’ll stick to my OPN S1 R’s.

The power of a deep neural network is quite significant. This technology is already being used to improve speech in noise in a few mobile apps; a prime example is heardthatapp. For a neural network approach to work efficiently, it requires specialized hardware that was previously unavailable in a form factor small enough to fit in hearing aids, so the app uses the phone’s more powerful hardware instead. The amount and precision of the noise filtering, and the resulting speech intelligibility, are impressive. Unfortunately, there is noticeable latency, since the sound is processed by the phone and then sent to the hearing aids via Bluetooth. Also, the neural network seems to be optimized for separating speech from background noise and attenuating the noise, so it can sound isolating.

Oticon’s approach seems to be very similar. Rather than optimizing for just speech in noise, I suspect it considers the entire environment for a more natural experience. What excites me is the size of the data set they trained the neural network with.

Note that both the example app and the Oticon More utilize the power of neural networks. Neural networks are typically trained with supervised machine learning, meaning that a model is prepared through a training process in which it makes predictions on input data and is corrected when those predictions are wrong. This continues until the desired level of accuracy is achieved on the training data. If the training data reflects practical, real-world input, then we can expect the model to perform similarly in a real-world environment.
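
That predict-and-correct loop can be sketched in a few lines of Python. This is a toy single-weight model, nothing like a real hearing-aid DNN, but the idea is the same:

```python
# Toy supervised learning loop: predict, measure the error, correct.
# Purely illustrative; real DNN training applies the same idea at a
# much larger scale (many weights, many layers, back-propagation).

def train(data, lr=0.1, epochs=200):
    """Fit a single weight w so that prediction = w * x."""
    w = 0.0
    for _ in range(epochs):
        for x, target in data:
            prediction = w * x           # make a prediction
            error = prediction - target  # how wrong was it?
            w -= lr * error * x          # correct the model
    return w

# The training data follows the rule target = 2 * x, so the learned
# weight should converge to 2.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
print(round(train(data), 3))  # 2.0
```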

Some common algorithms for supervised machine learning include nearest neighbors, naive Bayes, decision trees, linear regression, singular value decomposition, support vector machines, and neural networks. These are different ways of analyzing data and learning from it. Neural networks attempt to mimic the way the brain learns. A deep neural network is essentially a neural network with many layers used for feature extraction.
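
To make "many layers" concrete, here is a tiny forward pass through a stack of layers. The weights are made up purely for illustration:

```python
import math

# Sketch of the idea that a "deep" network is just many layers
# stacked, with earlier layers extracting simple features and later
# layers combining them into more abstract ones.

def layer(inputs, weights):
    """One fully connected layer with a tanh non-linearity."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

def deep_forward(inputs, all_weights):
    """Pass the input through each layer in turn."""
    for weights in all_weights:
        inputs = layer(inputs, weights)
    return inputs

# Three stacked 2-in/2-out layers: a toy "deep" network.
net = [[[0.5, -0.2], [0.1, 0.8]]] * 3
out = deep_forward([1.0, 0.5], net)
print(len(out))  # 2
```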

The Widex Moment also seems to be based on machine learning, although it’s not clear which method of machine learning Widex is applying or how big a data set they trained it on.

2 Likes

I am very interested in the Oticon More, especially for the deep neural network speech processing.
I have severe deafness and am currently using a Widex Unique Fusion 330 with an HP U-FS receiver. Comparing the data sheets of the two devices (Widex: http://webfiles.widex.com/WebFiles/9%20502%203791%20001%2003.pdf vs. Oticon More 1 miniRITE R 105: https://wdhecomcdn1.blob.core.windows.net/damfilesprod/b07d13bf8d25497fa3f6abde006f0e07/pep/222836UK_TD_Oticon_More_miniRITE_R.pdf), it seems to me that my Widex is more powerful: is that correct?

The greater frequency range of the Oticons is interesting, but I think this is related to the lower power.

It’s not clear to me whether Oticon has shared its full line of More hearing aids yet. What I find odd is that in late November Oticon came out with videos discussing the More but seemed to offer limited information. It’s as if Oticon wanted everyone to know the concept behind the More but not necessarily the nuts and bolts. Usually when a new aid hits the market, advertising-wise, you can pretty much get all the details you want. But that’s not the case with the Oticon More, and I’m not even sure we have a release date. January? February? Later?

Maybe Oticon wants to build some suspense, but I’d rather learn everything about a new aid once a company like Oticon brings it to public attention. It seems rather half-baked to pump a hearing aid platform with an on-board deep neural network while leaving other specific details out.

My impression is that Oticon tends to get ahead of itself. Most memorable was the promise of their ConnectClip which I think took over a year to be released from the time of announcement. However, I think all hearing aid marketing tends to be more hype than real data.

2 Likes

That’s marketing for you. As an engineering tech I saw it all the time: marketing trying to re-engineer the products without knowing what they were even talking about.

1 Like

I think there’s a lot of Dilbert comic strips with this general theme.

2 Likes

Does anyone else have this issue? When trying to click on links that start with www.oticon.global/… (where there’s supposed to be further documentation on the Oticon More), you get redirected to the www.oticon.com website.

Anyway, I was able to find a different, non-Oticon website that has links to some of the Oticon More whitepapers: the Concept brochure, the Polaris platform paper, and the MoreSound Intelligence paper that goes in depth into the Deep Neural Network portion of the More. Below are the links for whoever is interested in reading up on them.

The 2 new technologies I see introduced are MoreSound Intelligence (the Deep Neural Network stuff) and the MoreSound Amplifier (the replacement for Speech Guard LX). There is no separate whitepaper for the MoreSound Amplifier, but the Polaris Platform whitepaper does explain it in some detail.

As expected, the name of the game for this new technology is trying to balance all the sounds in a sound scene the way a normal-hearing person would hear it, with minimal alteration from noise blocking, beamforming, one-size-fits-all compression, etc. The idea is that if you can come up with the best possible balance of sounds that mimics real life while preserving all the sounds, then you can feed it to your brain and let it work its “brain-hearing” magic. The MSI whitepaper doesn’t go into great detail on what the neurons and layers of the DNN consist of (or represent), nor could it, really, because that would probably be too complicated, not to mention proprietary. But it adequately explains the high-level structure and the process of training the DNN.

One striking observation I have after reading the MoreSound Intelligence (MSI, or the DNN) whitepaper is that the DNN approach seems to let Oticon do away with the need to determine what noise is, model it, and then remove it from speech (the approach employed by the OPN and OPN S). The DNN approach instead focuses on coming up with the right amplification balance between all the sounds in a sound scene. While it doesn’t really do noise blocking or noise removal per se in the traditional way, the MSI does have a module Oticon calls the Sound Enhancer that performs what they call “noise suppression”. It looks like a bandpass filter of some sort between 1 and 4 kHz (where most speech cues reside) that lets more sound through this frequency window when Detail is selected and less sound through when Comfort is selected, with Balanced as the middle setting.
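
My guess at the Sound Enhancer behaviour described above, as a sketch: extra gain inside a roughly 1-4 kHz “speech cue” window, with the amount set by the Detail/Balanced/Comfort choice. All numbers here are invented for illustration; Oticon does not publish the actual filter parameters.

```python
# Hypothetical gain values per setting (dB); not Oticon's real numbers.
GAIN_DB = {"Detail": 6.0, "Balanced": 3.0, "Comfort": 0.0}

def enhancer_gain(freq_hz, setting):
    """Extra gain in dB applied at a given frequency."""
    if 1000.0 <= freq_hz <= 4000.0:  # the speech-cue window
        return GAIN_DB[setting]
    return 0.0                       # no extra gain outside the window

print(enhancer_gain(2000, "Detail"))   # 6.0
print(enhancer_gain(500, "Detail"))    # 0.0
```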

The MoreSound Amplifier replaces the old Speech Guard LX that’s been around all the way up to the OPN S. It basically creates 2 amplification paths: a 4-channel path that is good at processing fast-modulating signals like speech, and a 24-channel path that is good at processing stationary, slowly modulating sounds like steady, narrow-band noise with little change in amplitude or frequency. After the 4-channel path gets converted back into 24 channels, the 2 paths are fed into a “Compare & Prioritize” module, which mixes them and gives priority to the dominant sound from one of the 2 paths in the relevant frequency channel(s) of the 24 channels. This 2-path approach means that even in the presence of a narrow-band noise, the noise only gets priority in a narrow frequency band, while overlapping speech (if present) gets priority throughout the rest of the frequency spectrum. In a way, it’s designed to amplify and bring out speech over noise in the frequency channels where they don’t overlap, since speech tends to be wider band and noise is usually narrower band.
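
The “Compare & Prioritize” idea can be sketched as a per-channel merge of the two paths. Here I simply let the stronger estimate win in each band; the real mixing rule is Oticon’s and is not published, and the channel levels are invented:

```python
def compare_and_prioritize(fast_path, slow_path):
    """Per-channel mix that gives priority to the dominant path."""
    assert len(fast_path) == len(slow_path)
    return [max(f, s) for f, s in zip(fast_path, slow_path)]

# 6 channels for readability (the real thing uses 24): broadband
# speech vs. a narrow-band noise that dominates only channel 2.
# Speech keeps priority in every other channel. Levels are in dB.
speech = [40.0] * 6
noise = [10.0, 10.0, 55.0, 10.0, 10.0, 10.0]
print(compare_and_prioritize(speech, noise))
# [40.0, 40.0, 55.0, 40.0, 40.0, 40.0]
```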

6 Likes

Thanks for sharing, and glad to be back in action! Personally, I’m excited to try out these hearing aids when they come to market. I do wonder how the DNN will be programmed for a consistent experience, given the dynamic nature of neural networks, and whether the day-long battery life is sufficient, given that my OPN miniRITE-T aid usually lasts 1.5 days with heavy streaming from all-day Zoom calls.

What happens if the DNN is volatile in similar sounding situations?

What happens if they run low? Can we quickly charge them in 10-15 minutes, like a quick charge on a phone or watch? Is there an option for non-rechargeable aids? The whitepaper says a full charge takes a total of three hours, which is similar to the Apple Watch Series 5 charge time and twice as long as the Apple Watch Series 6.

Hi,

Has anyone already bumped into the brand-new Oticon More hearing aids? I am very curious how these will perform and how they will compare to the Phonak Paradise.

Cheers,

1 Like

The Oticon More hasn’t come to the States yet.

Check. I see that over here the site also states: available soon.

Hmmm… I was about to start testing Paradises in January, but I’m now in doubt whether I should wait for these Oticons. The description is promising.

I have the Oticon OPNS1 rechargeable aids and love them. I will not be able to get new aids for some time. I am an American Veteran and have hearing loss that is due to my military service and the VA system here provides my hearing needs.

It sounds like you’re asking 2 questions here: one about the consistency of the DNN result, and one about the drain of the DNN on battery life.

I’m sure Oticon would have designed their new flagship model to be entirely usable all day on rechargeable batteries, even with heavy streaming. But they don’t seem to mention offering a disposable-battery version. I’m not sure if that’s due to higher drain from the DNN circuitry, or simply because their success with lithium-ion rechargeables so far has prompted them to simplify their offering to just a rechargeable option going forward. Or maybe it’s both.

As for the consistency of the DNN result, the way I understand it, the DNN has already been trained on 12 million sound scenes ahead of time and its neural connections are already “caked in”, so it’s not as if the network is adaptive and keeps changing its connections dynamically in a self-learning fashion every day. In this regard, you should expect the DNN to give the same result consistently if you were in the same sound scene over and over again. The question is how well Oticon has trained its DNN, and the answer depends on how much data Oticon used to train it. Oticon deemed 12 million sound scenes enough data to make the training effective. I guess we’ll just have to see from the public reaction after it gets released.

4 Likes

I wonder if and how Oticon will release DNN KPIs such as precision, recall, AUC-ROC, and F1 scores.
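
For reference, most of those metrics are easy to compute once you have a set of predictions; here is a toy example (the labels are invented, since Oticon hasn’t published any such numbers):

```python
# Precision, recall, and F1 for binary predictions, from scratch.
# AUC-ROC additionally needs ranked scores, so it's omitted here.

def precision_recall_f1(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

y_true = [1, 1, 1, 0, 0, 0]  # invented ground truth
y_pred = [1, 1, 0, 1, 0, 0]  # invented model output
p, r, f = precision_recall_f1(y_true, y_pred)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.667 0.667 0.667
```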

Still waiting for fitting ranges and specs of the Oticon More.

1 Like

I doubt that Oticon will ever publish this level of detail to the public. In the MSI whitepaper, they said they designed several versions of the DNN, each with its own unique attributes, and picked the version that gave the best performance on both the training data and the testing data.

The training data is the set of captured sound scenarios used during training: the differences between the DNN’s output and the captured result are back-propagated through the network, tweaking the weights and biases to minimize those differences.

The testing data is captured sound scenarios that are not used for training, only for testing. Since the testing data is not used to train the DNN, you can use it to judge how well the DNN has been trained: it should perform just as well on the testing data as it does on the training data, provided both give a high enough degree of accuracy.
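
The train/test split described above can be sketched with a toy 1-D model (the real DNN and its 12 million sound scenes are obviously far more complex). We hold out 20% of the data, train only on the rest, and then check the error on both sets:

```python
import random

random.seed(0)
# Toy data following the rule y = 2 * x.
data = [(x, 2 * x) for x in (random.uniform(0, 10) for _ in range(100))]
train_set, test_set = data[:80], data[80:]  # 80/20 split

def fit(samples, lr=0.01, epochs=50):
    """Fit a single weight w so that prediction = w * x."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            w -= lr * (w * x - y) * x
    return w

def mse(w, samples):
    """Mean squared error of the model on a data set."""
    return sum((w * x - y) ** 2 for x, y in samples) / len(samples)

w = fit(train_set)
# Similar (low) error on data the model never saw during training
# suggests the model generalizes rather than memorizes.
print(mse(w, train_set) < 1e-6, mse(w, test_set) < 1e-6)  # True True
```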

The figure below kind of illustrates this idea. The ultimate test will be when the chosen DNN is released in the hearing aids and actually used by hearing aid wearers. Oticon promised to publish the results of this ultimate test in a whitepaper some time in December 2020. I don’t really know how Oticon would be able to quantify the performance of the “ultimate” test, however, since users can only give subjective opinions and won’t be able to quantify anything in terms of DNN accuracy.

Just for reference, the MSI whitepaper was published on 11/9/2020.

You can find the specifications for the More in this document:

4 Likes

Is there any news about the release date?

In this thread on the forum Upgrading my dependable Oticon opn1 needing advice, the OP said that his audi mentioned that the Oticon More is coming next week. I assume that he’s in the US.

1 Like