Oticon Announces Oticon Intent™, the World’s First Hearing Aid with User-Intent Sensors

My HAs are provided by workers’ comp. I am very fortunate that the dispensing audiologist made a case to get them early, because my 2-year-old hearing aids created an unsafe condition at work. I think part of the decision was that rechargeable batteries would save the replacement cost of the Paradise’s disposable batteries. Otherwise, HAs are provided only after 5 years, and they do not normally dispense top-of-the-line HAs. So I’m very grateful.
They used to limit hearing aids to 3 manufacturers. I’m lucky that Phonak was one of the 3, because I’m familiar with them.

You bring up a good question, and before I give my opinion on it, I want to make a distinction between individual speech and babble, especially when a place is very crowded and everyone is talking at the same time. Restaurants are a good example, but an even better one is perhaps the outside waiting hall of a theater before they let people in, or during intermission, where the ceiling is high and everybody is packed together. In this case, the babble is part of the cacophony of everybody speaking at once, creating a loud, droning noise. The people standing next to you may sound more like speech than part of the babble. The babble gets treated as noise because it lacks some of the key characteristics of speech that the voice detector in the Oticon needs to discern clear speech.
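To make that “babble lacks key speech characteristics” point more concrete, here is a toy illustration of one such characteristic (my own sketch, not Oticon’s actual voice detector): a single talker’s envelope fluctuates deeply at syllable rates (roughly 2–8 Hz), while the summed envelope of many simultaneous talkers flattens out.

```python
import numpy as np

# Toy demo (not Oticon's detector): one talker has deep, slow envelope
# modulations at syllable rate; babble from many simultaneous talkers has a
# much flatter combined envelope. A simple modulation index (std/mean of the
# smoothed envelope) already separates the two cases.

def modulation_index(signal: np.ndarray, fs: int) -> float:
    env = np.abs(signal)
    win = int(0.02 * fs)                      # ~20 ms smoothing window
    env = np.convolve(env, np.ones(win) / win, mode="same")
    return env.std() / env.mean()

fs = 8000
t = np.arange(fs) / fs                        # 1 second of audio
# One "talker": a 300 Hz carrier fully modulated at a 4 Hz syllable rate.
one_talker = np.sin(2 * np.pi * 300 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
# "Babble": 12 talkers with different carriers, rates, and phases, summed.
babble = sum(np.sin(2 * np.pi * (250 + 60 * k) * t)
             * (0.5 + 0.5 * np.sin(2 * np.pi * (2.0 + 0.5 * k) * t + k))
             for k in range(12))

print(f"single talker modulation index: {modulation_index(one_talker, fs):.2f}")
print(f"babble modulation index:        {modulation_index(babble, fs):.2f}")
```

The single talker scores noticeably higher, which is one cue a detector can use to treat dense babble as noise rather than speech.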

In the example that you give, let’s say it’s more about multiple talkers around you, already discounting any babble that is treated as noise. If you strongly prefer speech from the front only, or speech from wherever your head is pointing, then in a multiple-talker situation you’re probably better off going with a non-Oticon solution where the front beamforming can be more aggressive about blocking out surrounding speech that isn’t in front. The Oticon aids do have a Full Directional setting to activate this front beamforming mode as well, but it takes away the accessibility of sounds per the Oticon open paradigm, unless you create a new program for it and switch to that program only when you need it. Newer Phonak models like the Lumity now have technologies like StereoZoom and Speech Sensor that can widen the beam from narrow in front to pick up speech from the sides and back when speech is detected there. So it can become more open, but only if speech is detected on the sides or rear; it’s not always “open” like the Oticon open paradigm.

I remember viewing an Oticon technical presentation given by Donald Schum (their ex-VP of Audiology) back when they had just come out with the Oticon OPN and started promoting the open paradigm and the concept of brain hearing. Brain hearing is nothing new or special that Oticon created; it’s just their way of saying “don’t be afraid of the open paradigm, you won’t be drowned in all the sounds we let in and unable to understand speech anymore; your brain hearing will learn to differentiate and focus on what you want to hear and tune out what you don’t, and we’ll also do something special to clean up the diffuse noise embedded in the speech so you can understand it better.”

Anyway, back to this Oticon presentation that I watched on AudiologyOnline.com. The point of the presentation was specifically to address multiple speech cues: a multiple-talker situation is not a bad thing either, and if your brain hearing is already accustomed to the open paradigm, it will be able to discern the differences in the various speech cues, focus on the speech you want to hear, and tune out the speech you don’t want to hear, just like it already does with noise.

I think the introduction of the Intent may give people the misguided notion that Oticon can now read their minds and focus on the speech they want to hear while tuning out the speech they don’t, just from some simple head movement. But a simple head movement is not the same as reading someone’s mind to cherry-pick the speech they want and block out the nearby speech they don’t want. I find it more believable that a certain head movement is a general signal to boost the overall speech SNR contrast, applied to all speech coming from anywhere, while the brain hearing is still left to discern and focus on which of those speech streams to follow.

But the brain hearing only has to do this work if the speech streams happen AT THE SAME TIME. In your described scenario, if the waiter approaches you and starts talking, you’ll hear the waiter’s voice, turn around (thereby maybe boosting the speech SNR contrast even more with this head movement), and hear the waiter better than when he started speaking. But at the same time, your wife will most likely stop talking to you anyway, so your brain hearing wouldn’t have to work to focus on the waiter’s voice over hers. If there were 3 people at the table and your wife kept talking to the other person while the waiter talked to you, then your brain hearing would kick in to focus on the waiter’s voice and tune out your wife’s.

In the case where people at the next table are talking loudly and are picked up as speech, not as babble that gets attenuated, I think you’ll probably have to use your brain hearing more to tune in on your wife and ignore the other voices. If you don’t want your brain hearing to work that hard, you can switch to an Oticon program where you’ve already set the directionality to Full Directional, or use another brand of hearing aids that doesn’t subscribe to the open paradigm.


@Volusiano

Your Oticon response helped me understand why my Paradise P90s worked so poorly in noisy babble. I had just had them set up by a wonderful practitioner almost a week ago. Using AutoSense, I didn’t understand a word, and I received strong criticism later for not realizing she had been talking to me from 6 feet away in a store filled with babble.
I find it difficult to communicate what I need to get the right results. I just want my Paradise P90Rs to work. I want to hear.

Dave

Surely a 1 dB improvement (without the sensor) is inaudible?

A 1 dB improvement in signal-to-noise ratio can yield up to a 10% improvement in speech intelligibility.


Posted the new Philips 50 series whitepaper here: Philips HearLink 9050 to hit Costco soon, similar tech to Oticon Intent?


Thanks for your opinion; you are surely right, and as I already wrote, I need to test them to see if they will do what I want. I don’t want a silent situation where I hear only what the speaker in front of me is saying.
What I don’t want is scraps of words in my ear.

The Nexia was like: “how was your [laughing] day? did you [don’t tell me bullshit] enjoy the movie?” … nah, this is what I don’t want. Of course the laughing or the “don’t tell me bullshit” can be audible, but it should not be louder than the speaker I am listening to.

I had Oticon for 7 years; I know how they work, and they work fine for me. (The Real had some strange moments, and for sure they may not be better than the Genesis, but maybe the Intent are.)

I think 1 dB is generally described as the smallest discernible change in perceived loudness, so it’s not an inaudible difference. Glad to hear from @Um_bongo that it can make a 10% improvement in speech clarity.

3 dB is considered a material, but not dramatic, change in perceived loudness.
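For anyone who wants to sanity-check those numbers, here’s the arithmetic. The ~10%-per-dB figure is only a rough rule of thumb; real intelligibility-vs-SNR curves are S-shaped and depend on the listener and the noise type.

```python
# dB is a logarithmic scale: a change of x dB is a power ratio of 10^(x/10).
# The ~10%-per-dB intelligibility slope quoted above is applied here purely
# as a rule of thumb, not as a measured result.

def db_to_power_ratio(delta_db: float) -> float:
    return 10 ** (delta_db / 10)

for delta_db in (1.0, 3.0):
    print(f"+{delta_db:.0f} dB SNR -> x{db_to_power_ratio(delta_db):.2f} "
          f"signal-to-noise power ratio, ~{10 * delta_db:.0f}% intelligibility "
          f"gain (rule of thumb)")
```

So +1 dB is about a 1.26× power ratio (subtle but audible), and +3 dB is a doubling, which matches the “material but not dramatic” description above.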


I think you misunderstood me. I said the Signia 7IX was worse in this situation, and I don’t think their technology will help a lot. I didn’t mean to say that everything is bullshit.
I think and hope that the Oticon Intent is much better, but of course everyone has to test it first, because it is not like glasses, where only the design is a criterion.
Out of 10 people, maybe 3 are happy with Phonak, 3 with Oticon, 2 with ReSound, and one each with Signia and Widex. You could say Oticon is the best, but you cannot say Oticon is for everyone.


Thanks for your comments again. We all strive for improved performance from our hearing aids.
I used to build Heathkit stereo amplifiers. In those days, if I didn’t understand what was said or sung, I turned up the volume.

I’ve used the “mask program” that Dr. Bailey shared some time ago.

It’s made a real difference for me as I impatiently wait to solve the setup issues and improve my Paradise P90’s performance.

I don’t DIY, so I used the app. The increase in clarity and volume makes a world of difference.

edit: Went in for a tune-up. Fingers crossed. Let’s see how the Paradise HAs work.

So the best way is a front-focused microphone without any ultramodern bells and whistles?

I think the best way is up to the individual to decide. The best way for somebody who doesn’t care to hear anything but the speaker in front is front-focused beamforming. The best way for someone who prefers more environmental awareness, and is willing to let the brain hearing do the work of differentiating, focusing, and filtering, is what Oticon offers.

But putting semantics aside (in the way you ask the question), if you can align the 4 mics on the 2 hearing aids to beamform toward the front, in theory you should also be able to realign those 4 mics to steer the beam to the side or the rear. I think this is exactly what Phonak did when they introduced their Speech Sensor technology. Their SpeechSensor feature locates where the dominant talker is and has AutoSense adjust the mic mode accordingly, realigning the beamforming from wherever it was to the new location. So the beamforming is now focused wherever the speech is, not just to the front. But in the event that all 4 talkers in their example speak AT THE SAME TIME, I assume the beamforming may be disabled altogether to let all the speech (plus the noise) come through.
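As a purely hypothetical sketch of that kind of decision logic (the names, thresholds, and structure below are my own invention, not Phonak’s code):

```python
from dataclasses import dataclass

# Hypothetical sketch of SpeechSensor-style steering as described above:
# point the beam at the dominant detected talker, and give up on a single
# beam (go omni) when several talkers are active at once. Everything here
# is illustrative; it is not Phonak's implementation.

@dataclass
class Talker:
    azimuth_deg: float   # estimated direction of arrival
    level_db: float      # estimated speech level

def choose_mic_mode(talkers: list[Talker], activity_floor_db: float = 50.0) -> str:
    active = [t for t in talkers if t.level_db > activity_floor_db]
    if not active:
        return "omni"                                  # no speech: stay open
    if len(active) >= 3:
        return "omni"                                  # simultaneous talkers all around
    dominant = max(active, key=lambda t: t.level_db)   # loudest active talker
    return f"beam toward {dominant.azimuth_deg:.0f} deg"

print(choose_mic_mode([Talker(180, 62), Talker(0, 55)]))  # -> beam toward 180 deg
```

The design point is simply that once the beam can be steered anywhere, “front only” stops being a hardware limitation and becomes a policy decision.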

I’m sure if Oticon wanted to do this, they probably could, too. But their open paradigm directs them to take a different approach. The MVDR beamforming they use is mainly designed to attenuate the noise sources, not to create a directional beam that focuses on the speech source. It’s kind of the inverse of the frontal-beamforming-for-speech approach, in a way.
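For readers curious what MVDR actually computes, here is a minimal textbook sketch (standard MVDR math, not Oticon’s implementation): it minimizes output noise power while keeping the look direction undistorted, which is what implicitly steers nulls toward the noise sources.

```python
import numpy as np

# Textbook MVDR weights: w = R^-1 d / (d^H R^-1 d), where R is the noise
# covariance across the mics and d is the steering vector for the look
# direction. Minimizing output power under the constraint w^H d = 1 places
# nulls toward strong noise sources. Illustrative only, not Oticon's code.

def mvdr_weights(noise_cov: np.ndarray, steering: np.ndarray) -> np.ndarray:
    r_inv_d = np.linalg.solve(noise_cov, steering)
    return r_inv_d / (steering.conj() @ r_inv_d)

# Toy setup: 4 mics (2 per ear), arbitrary Hermitian positive-definite R.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
R = A @ A.conj().T + np.eye(4)
d = np.ones(4, dtype=complex)          # steering vector for the look direction

w = mvdr_weights(R, d)
print("look-direction gain:", abs(w.conj() @ d))   # = 1.0 (distortionless)
```

Note how the constraint protects the look direction rather than boosting it; the “work” all goes into suppressing whatever the covariance says is noise, which fits the open-paradigm framing above.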

As you can see in the Oticon illustration below, the gray area is their open field where sounds are picked up, and the clear areas are where the noise is attenuated via null directions.

[Image: Oticon illustration of the open sound field with null directions attenuating the noise sources]

Phonak’s StereoZoom illustration, in contrast, shows the gray area as a narrower beam focused on the speech, kind of the inverse of the Oticon MVDR beamforming. In this case, if the detected noise increases, Phonak automatically and gradually switches to the StereoZoom mode, where the front beamforming goes from fairly narrow to ultra narrow. It’s not an abrupt change like going from a regular program to a speech-in-noise program. The user is probably already in the Speech In Noise program in this case, but the beamforming becomes ultra narrow as the noise gets more intense, then relaxes as the noise subsides.
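A hypothetical sketch of that gradual behavior (the numbers are invented for illustration; Phonak doesn’t publish these values):

```python
# Hypothetical sketch of StereoZoom-style gradual narrowing: beam width
# shrinks smoothly from "fairly narrow" to "ultra narrow" as estimated
# noise rises, and relaxes again as it falls. All numbers are invented.

def beam_width_deg(noise_db: float,
                   relaxed_at_db: float = 60.0,   # at/below: widest beam
                   narrow_at_db: float = 80.0,    # at/above: ultra-narrow beam
                   wide_deg: float = 120.0,
                   narrow_deg: float = 30.0) -> float:
    t = (noise_db - relaxed_at_db) / (narrow_at_db - relaxed_at_db)
    t = min(max(t, 0.0), 1.0)                      # clamp to [0, 1]
    return wide_deg + t * (narrow_deg - wide_deg)  # linear interpolation

for noise in (55, 65, 75, 85):
    print(f"{noise} dB noise -> {beam_width_deg(noise):.0f} deg beam")
```

The point is the continuous ramp: the beam tightens and relaxes with the noise estimate instead of snapping between discrete programs.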

Anyway, back to the Oticon open paradigm: the beamforming is never about focusing on speech, but about attenuating noise. Everything that can be heard is made available, but the sound components are rebalanced to favor speech. That’s why, with the introduction of the Intent, I think it gives the false impression that the 4D sensor approach now beamforms onto this speech or that speech. Instead, I think the 4D sensor approach is more about detecting head movement (plus the other 2 already existing sensors) to help decide when to boost all speech further and when to keep the sound field more equally open without boosting speech as much.

I think it would be overthinking to interpret the 4D Sensor technology as some kind of mind reader that knows what the user really wants to hear and tells the aids to suppress one speech stream over another when they occur simultaneously. I believe your brain hearing still has to do that work like before, except that now speech clarity can be even better than with the Real, by as much as 4 to 5 dB depending on the guessed intent. The 4D sensor is more about giving the Oticon Intent additional input from the head to know how to rebalance the sound field (hopefully) even better than before, specifically in regard to speech clarity.


No love for Starkey? :grin:

I already had a post answering this earlier in the thread, but in light of the new info posted by @AbramBaileyAuD about the new Philips 9050 announcement (along with a link to a whitepaper), and now that I’ve had a chance to review that whitepaper, I want to add some more comments on the question.

It seems consistent with my suspicion that only the accelerometer technology was made available to the Philips 9050 (the new 4D Sensor feature in the Oticon Intent, which the Philips 9050 calls SoundGuide).

But as far as I can tell from the Philips whitepaper, the Philips core technology (which they call AI-NR) is the same as before. I don’t even see any mention of an AI-NR 2.0 (compared to Oticon’s mention of DNN 2.0). So there’s no reason to believe that, just because Oticon and Philips share the same new Sirius platform, the Oticon DNN 2.0 is now shared with the Philips 9050 aids. There’s reason to believe the Philips AI-NR core technology remains the same; only the accelerometer technology enabled by the new Sirius platform has been added to the Philips 9050.

However, the Philips whitepaper provides an interesting glimpse into how the accelerometer movement is interpreted and manifested in the sound-processing decisions of the Philips aids. It’s a rather simplistic interpretation, to me: no motion, occasional motion, or steady motion, that’s it. Based on these 3 very crude classifications, the appropriate action is taken to adjust the speech SNR contrast. If the head is still → boost front speech contrast and keep front directionality. If the head moves a little → do nothing different from before. If the head is in steady motion → boost contrast for all surrounding speech, probably by removing the front directionality.
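Read that way, the whole motion layer reduces to a small lookup, roughly like this (the class names and actions follow my reading of the whitepaper above; the accelerometer-variance thresholds are my own invented placeholders):

```python
# Sketch of the three-way motion classification and its actions as read
# from the Philips whitepaper discussion above. The accelerometer-variance
# thresholds are invented placeholders, not Philips' values.

ACTIONS = {
    "no_motion":         "boost front speech contrast, keep front directionality",
    "occasional_motion": "no change from default behavior",
    "steady_motion":     "boost contrast for surrounding speech, relax directionality",
}

def classify_motion(accel_variance_g2: float) -> str:
    if accel_variance_g2 < 0.01:
        return "no_motion"          # sitting still, e.g. facing one talker
    if accel_variance_g2 < 0.10:
        return "occasional_motion"  # small head turns
    return "steady_motion"          # walking, head moving continuously

print(ACTIONS[classify_motion(0.25)])  # -> the surrounding-speech action
```

Which is exactly why it reads as crude: three buckets, three canned responses, nothing resembling per-talker intent.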

Because the Philips aids do not subscribe to the open paradigm like Oticon, they do use more directionality to control noise, besides using the AI to clean up the speech. That’s probably why we don’t see the same kind of chart for Oticon; the corresponding Oticon chart makes no mention of directionality use.


(This is a misunderstanding of what Oticon does. But you knew I’d say that.)

But I agree with you that the “4D sensors” are probably just increasing and decreasing the strength of the “neural noise suppression” depending on whether you are moving or still, rather than doing beamforming.


Haha, yeah, we’ve been over this before. I respect you clarifying your position again. Thanks.

Oticon does not help you do any of the brain hearing; you have to do it yourself. And Oticon didn’t invent brain hearing either; it’s always been around and used by everyone, normal hearing or hard of hearing. Oticon just makes a big deal out of those words.

And my position is that Oticon’s choice of the open paradigm necessitates bringing up the concept of brain hearing (not inventing it or empowering it) to justify why it’s OK to promote the open paradigm: because it can still work for many hearing-challenged folks.


Oticon’s hearing aids are leveraging a huge amount of processing to try to support a damaged system and help your ‘brain hearing’. It is not the case that they are doing less and requiring your brain to do more than other hearing aids. They are not saying “don’t worry, your brain can do it,” they are literally saying “we have analyzed and rebalanced the sound scene for you and increased the signal to noise ratio so that you can hear better because we know that your damaged auditory system can no longer do this on its own”.


I hear your point, @Neville, when I switch back and forth between my P1 and my music program. It is very apparent that my Oticon More does a lot to help my brain while I am on P1. I have a question, though: would a stripped-down music program (with everything possible minimized or toggled OFF) be closer to @Volusiano’s brain-hearing concept, or do even those “naked” programs still have a lot of processing going on?


In the whitepaper, they stated that the AI-NR in the Philips HearLink 50 series was improved/enhanced with higher resolution, as in the new Oticon Intent DNN.


An Oticon Intent wearer who understands how the aids react to head and body movements will be able to purposefully influence what the aids do at a given moment. That could be a good thing and/or it could look weird.