DIY - Self Programming the Oticon Opn – How To

To clarify, the podcasts are from the Oticon website and not from the audiologyonline.com website, so you don’t need to login and register for the course like you do with the audiologyonline.com class (such as the “Complex vs Standard Fittings Part 1” class). So for the podcasts per se, just click on the link and select the relevant podcast(s) you’re interested in and simply watch them.

1 Like

But I have low-frequency loss and I can't hear the podcasts even with headphones/speakers while wearing my HAs. Can someone post the narrated steps in writing, with small tips? I'm moderately familiar with audiology terms. It would be most helpful.

These podcasts are long enough that nobody would have the time or inclination to transcribe them for you unless they get paid to do it. Maybe you can find some speech recognition app to transcribe them for you automatically.

Perhaps you could contact Oticon. There’s a possibility they might have the podcasts transcribed. Sure seems like a reasonable request to make of a company that specializes in dealing with the hearing impaired.

I'm not asking for every sentence. I mean something like: for a reverse-slope loss, whatever gain would normally be allocated to the lows should be rebalanced proportionately toward the highs. That one sentence would capture the main hook of the whole Oticon video on low-frequency loss, like a super-summary of the whole video. I know we're not audiologists here, we're newbies, but if you can summarize each video in 2-5 sentences, that's enough.

Well, since I particularly enjoyed and want to share those 2 podcasts I mentioned, I don’t mind summarizing the key points they made in them below.

1) Why Is Noise So Difficult?

In a noisy environment, what you're trying to do is create a map of the sound-generating activities around you, isolating the different sources of sound so your brain can decide what to focus on and what to ignore. That's the basis of the cognitive task by which a normal-hearing person separates noise from their target sound. The integrity of ALL sounds is crucial for this cognitive function to isolate and focus on the desired target sound.

The outer hair cells provide a sharp tuning for the inner hair cells so that when multiple sounds come in, the inner hair cell can make a clear and easy distinction between the sounds to help facilitate the brain’s cognitive function. For sensorineural loss, the loss of outer hair cells dulls the tuning and makes it harder for the inner hair cells to make clear distinction of sounds like before. The loss of inner hair cells next makes it even harder, blurring the sounds further so that instead of recognizing distinct sounds, it becomes a blurry combo of sounds that all meld together.

You can best describe sensorineural hearing loss as the inability to organize sounds. It’s not so much what the patient doesn’t hear, but rather it’s about what the patient can’t do, which is to take all that sound that comes into the peripheral auditory system and separate it into different sources… You can’t focus on what you want to focus on because the auditory system doesn’t allow you to be able to resolve all the cacophony of sounds that are coming in into separate sources.

2) Hearing Aid Technology and Noise

The traditional directional technology (destination based) can help block out sounds from the back and sides, help focus on sound from the front, and improve the signal-to-noise ratio to a certain degree for a patient, but it doesn't really make the noise go away altogether. The second approach is to give the patient as much information as possible, because as mentioned in the previous podcast, the task of understanding speech in a noisy environment, especially in competition with other talkers, is primarily a cognitive task. The cognitive system likes to get as much information as it can get its hands on. The brain, if it can get as much information as possible from the auditory system, can sort through that competition and try to disentangle it from the speech signal of interest. Sensorineural hearing loss is going to put a major limitation on that function, but if hearing aid technology can't make the noise go away, then the next best thing it can do is provide the auditory system with as much information as possible, as CLEAN as possible, and let the brain do what it does very well.

It turns out that the cognitive system is very good at “glimpsing” signals, or looking for gaps between the competing speech to identify that target speech in between these gaps, with all the other cues (visual cues, linguistic cues, situation cues), and linking and putting it together into one overall stream to follow over time. So any amplification strategy that restricts the bandwidth of the device is actually going to decrease the number of opportunities that the listener has to “glimpse” the signal.

3) Fitting Low Frequency Hearing Loss

I won't go into too much detail here, except that the point of this podcast is that for low frequency hearing loss, they have found it more effective to fit the patient based on what they can still hear well (the high frequencies) instead of doing the conventional thing and trying to compensate for what they can't hear (by adding gain at the low frequencies). It may seem counter-intuitive, but they've found that this strategy actually leads to better speech understanding for these patients, because it focuses on maximizing the patient's residual capabilities instead of compensating for the patient's hearing loss.

5 Likes

Thanks a lot, you've given me exactly what I wanted. It's very clear and short.

The one thing I can't reconcile between the theses of podcasts 1 and 2 above is this -> if the loss of outer and inner hair cells already impedes the patient's ability to organize and separate sounds, then what good does it do for the hearing aid technology to make all the sounds available to the patient? They still can't organize and separate the sounds made available to them in the first place…

The only way I can see to reconcile this is that it depends on the level of severity of the patient's sensorineural loss. If the loss of the outer and inner hair cells is too far gone, the availability of all the sounds is not going to help them. In this case, I think the destination based approach (directional mode) is probably going to be more helpful, because if they can't sort out the different information no matter what, it's probably better to provide them with limited information (only what they want to hear). That's why the OPN open paradigm doesn't work for everybody. I remember one forum member here who gave the OPN her best shot for 9 months and still didn't find it helpful, but after switching over to the Phonak Audeo-B Direct, or even with her old Oticon Alta 2 CIC, her listening in noise has been more successful.

But for those of us whose sensorineural hair cell loss is not too far gone, we are still able to organize the multitude of information for our brain's cognitive function, as long as we get additional help from the hearing aid technology. That is why in the podcast, Don Schum was emphasizing that the information provided to the auditory system should be not only as complete as possible, but also as CLEAN as possible. So I think that while the OPN does not do the noise-sorting job that the brain's cognitive function does, it does do the job of cleaning up the noise from the speech as much as possible, while still presenting the noise information (be it competing speech or ambient/diffuse noise) to the auditory system. This cleaning is the extra help we get from the OPN to offset our (not too far gone) sensorineural hearing loss, so that the auditory system has all the information available, and this information is "clean" enough to help our brain's cognitive function do the sorting.

1 Like

More podcast summaries from those I’ve listened to:

1. What to expect from automatic directionality.

Differentiation between adaptive and automatic directionality: Adaptive directionality refers to the shape of the polar plot (of the microphone); here, the directional pattern of the mic can change shape. The way this system works is that it tries to minimize the responsiveness of the mic in the direction of the sounds that seem to be dominant coming from the back or the sides. Automatic directionality refers to the switching mechanism that determines whether the mic is in omni or in one of the other directional modes. Different companies have different switching mechanisms.

The Oticon switching mechanism looks at the different directional modes (omni, full, or split directional) and selects the mode that gives the best signal-to-noise ratio (SNR) in that listening environment. Oticon's automatic directional system, especially its multi-band adaptive system, has an AI-DI result of 4.5-5 dB SNR improvement in a diffuse noise situation, but actual measured patient performance is more around 2-4 dB SNR improvement. A 3 dB SNR improvement is noticeable in a diffuse noise environment IF the noise level is not too high or too low, but somewhere in between. In a quiet room, the SNR is already high enough that 3 dB more is not necessary. In a very noisy place, a 3 dB improvement is just a drop in the bucket and no help at all.
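The switching mechanism described above can be sketched very simply: given an estimated SNR for each candidate mode, pick the mode with the highest one. This is my own toy illustration, not Oticon's actual algorithm; the SNR numbers below are stand-in values, where a real aid would estimate them continuously from the live microphone signals.

```python
def pick_directional_mode(snr_by_mode):
    """Return the mic mode whose estimated SNR (in dB) is highest."""
    return max(snr_by_mode, key=snr_by_mode.get)

# Hypothetical diffuse-noise environment where split directional wins.
estimates = {"omni": 2.0, "split": 4.5, "full": 3.8}
print(pick_directional_mode(estimates))  # -> split
```

In a quiet room the omni estimate would dominate, so the same rule naturally keeps the aid in omni there, which matches the behavior the podcast describes.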

It doesn't mean that just because there's noise in the environment, the directional system is automatically going to be effective. There are other factors beyond the level of the noise that affect the performance of a directional system. Studies have shown that there are 3 important criteria for a directional system to work well for the patient:

a. The speaker needs to be in front and not too far from the patient (within about 6 feet).
b. The noise has to be in the back or from the sides.
c. And most importantly, there must NOT be a lot of reverberation in the environment.

A Walter Reed study shows that directionality is preferred only 31% of the time across different environments. Omni is preferred 41% of the time, with no preference 28% of the time.

If patients have a better understanding of when directionality works well and when it doesn't, and know how to set themselves up in certain configurations to maximize its effectiveness, then they will be more successful with its use and less disappointed, because they have more realistic expectations about it.

2. What to expect from noise reduction

Speech tends to have high modulation (amplitude variation) and noise tends to have low modulation, so many hearing aid systems on the market use this distinction to determine what is speech and what is noise. Basic noise reduction systems look at individual channels (however many frequency channels a HA has, from a few up to 64) and decide whether to attenuate the signal in each channel based on the modulation level they see in that channel. The challenge, however, is that there may be both speech and noise superimposed on each other in a channel, and the system will have a hard time telling the difference.
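The per-channel modulation test above can be illustrated with a toy sketch (my own, not any manufacturer's actual code): measure how much a channel's amplitude envelope fluctuates relative to its average level, keep strongly modulated (speech-like) channels at full gain, and attenuate flat (noise-like) ones. The threshold and attenuation values are made-up placeholders.

```python
import numpy as np

def modulation_depth(envelope):
    """Normalized envelope fluctuation: std / mean. Speech is high, steady noise low."""
    env = np.asarray(envelope, dtype=float)
    return env.std() / env.mean()

def channel_gain(envelope, threshold=0.3, max_atten_db=-10.0):
    """0 dB (no change) for speech-like channels, attenuation otherwise."""
    return 0.0 if modulation_depth(envelope) >= threshold else max_atten_db

speechlike = [0.1, 0.9, 0.2, 1.0, 0.15, 0.8]    # strongly modulated envelope
steady     = [0.5, 0.52, 0.49, 0.51, 0.5, 0.5]  # nearly flat envelope
print(channel_gain(speechlike))  # 0.0   (kept)
print(channel_gain(steady))      # -10.0 (attenuated)
```

The failure mode the podcast points out is visible here too: mix the two envelopes together and the combined modulation depth lands in between, so a single-cue system can't tell whether the channel holds speech, noise, or both.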

To address this problem, Oticon developed a second analysis function (called Synchrony Analysis) to determine whether or not speech is present in the mixed speech+noise signal. Synchrony Analysis looks at a much shorter time window across the high frequency regions and looks for synchronous activity, which implies evidence of a strong harmonic structure indicative of speech or music. In this case, even though the signal is mostly unmodulated (which would normally indicate a noise signal to be attenuated), Synchrony Analysis says that there's speech inside this mixed speech+noise signal, so attenuation is not carried out at full strength like it would be if no synchronous activity were detected.

The best way to describe the situations where a noise reduction system is expected to have an effect is where there is higher-level noise that's NOT speech-like (steady-state noise such as road traffic, or mechanical noises like AC blowers). But when the competition is somebody else talking, up to several speakers, then noise reduction is not going to be as effective, because unless there is a tremendous number of people talking (like the roar of a restaurant or cafeteria), there's no way for the NR system to know which is the desired target speech and which are the undesired ones. So the NR system is going to try to protect all the speech. That's where the brain has to come in and do its own filtering.

P.S. (not from the podcast anymore, this is my own commentary) -> I believe the podcast above is from pre-OPN times and describes the type of NR Oticon employed before the deployment of the OPN. With the OPN's OpenSound Navigator processing algorithm, I'm sure Oticon leverages its NR know-how from before, but it also does something new and different -> it uses the back-facing cardioid mic to create a noise "model" in the Analyze module that is fed into the Balance module and the Noise Removal module to help with NR. This noise model is basically the sounds at the sides and back of the listener as picked up by the back-facing mic.

In the Noise Removal module, it does employ a similar strategy of looking into a very short time interval (10ms) to be able to detect the differences between the omni signal that contains speech(es) and the noise model’s signal, and attenuates the noise model from the overall signal if it sees a difference.
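A rough analogy for this noise-model idea (my own sketch, not the actual OpenSound Navigator processing) is classic spectral subtraction: estimate a noise magnitude spectrum from the rear-facing mic and subtract it, bin by bin, from the omni spectrum, with a floor so bins never go negative. All names and numbers here are hypothetical.

```python
import numpy as np

def remove_noise(omni_mag, noise_mag, floor=0.05):
    """Subtract the rear-mic noise-model spectrum from the omni spectrum
    per frequency bin, flooring each bin at a fraction of its input level."""
    omni = np.asarray(omni_mag, dtype=float)
    noise = np.asarray(noise_mag, dtype=float)
    return np.maximum(omni - noise, floor * omni)

omni  = np.array([1.0, 0.8, 0.6, 0.4])  # omni magnitudes per frequency bin
noise = np.array([0.2, 0.7, 0.1, 0.5])  # rear-mic noise-model magnitudes
print(remove_noise(omni, noise))  # last bin is noise-dominated, floored at 0.02
```

The floor is the interesting design choice: fully zeroing a bin would punch audible holes in the spectrum, whereas a small residual keeps the output natural, which loosely parallels the podcast's point that the noise information is attenuated but still presented to the auditory system.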

This noise model is a key differentiator of the OPN technology that was not employed in the previous Oticon NR technology discussed in the podcast above.

2 Likes

Was curious about the Noahlink Wireless so i sent Oticon an email and they replied -

“Thanks for your mail. The Noahlink Wireless will be supported from the next version of Genie 2017.2 which is due for release in mid – November. Kind regards”

Sounds good to me.

2 Likes

Thanks @nazlink for this new information! I wonder if Oticon will continue supporting their own proprietary wireless fitting device, namely FittingLINK 3.0 or abandon it like Signia/Siemens did?

I updated the above Signia/Siemens link to add more details about how Connexx Air and ConnexxLink are no longer supported.

That's a good point you make there: there would be no real point to the FittingLINK 3.0, would there, apart from it being much, much smaller and always connected via USB?

1 Like

Noahlink Wireless/Airlink 2 is also connected to your PC via a USB cable.
Noahlink Wireless/Airlink 2 is small (maybe a little over two inches high).

It does connect via USB, but for something like a laptop you can't really leave it in all the time, can you?

I wonder if it will allow firmware updates, because I emailed Oticon a while back and this was their reason for no wireless firmware updates -

"Thanks for your mail.

The firmware upgrade procedure within our Opn hearing aids had been setup as a wired operation as a safety feature. There is less chance of losing a connection when wired and therefore the risk of corrupting the chip is lessened.

If I can be of any further assistance please don’t hesitate in contacting me."

Oh, I think I see what you mean by leaving tiny FittingLINK 3.0 plugged into your laptop all the time. Seems like not much of an issue whether it is always plugged in. Personally I would prefer having it put away somewhere instead of always sticking out of my laptop waiting for an accident to happen.

Noahlink Wireless/Airlink 2 allows updates to itself. Actually, you need the Airlink 2 to take a firmware update in order to become a Noahlink Wireless device for non-GN-ReSound hearing aids.

But I think you mean a firmware update to your Opn hearing aids. That's not left to the programming device. That decision is made by Oticon, and the capability is programmed into the fitting software (Genie 2).

Once a product like the FittingLINK 3.0 is already released, why would a company abandon support for it? If you're saying they may stop producing and selling any more of it, then I can understand. But they can't stop supporting it (like providing new firmware updates for the device itself, not the HAs) because there are thousands of people who already spent the money and bought it.

Is that what Signia/Siemens did, stop selling it? Did they continue to support it?

They could stop supporting it over time, like Apple stopped supporting their iPhone 5 not long ago.

Support, abandon, yada, yada, yada.

You can still buy ConnexxLink. That is, if you’re an audiologist (or lucky). But you cannot use it to program hearing aids from the new Nx platform. Though, you can still program other older hearing aids with ConnexxLink.

If other manufacturers follow suit, then we are witnessing the beginning of the end for individual manufacturers' proprietary wireless fitting devices.

For those interested in Oticon's series of courses by Don Schum on fitting from the perspective of an audiologist: he has released part 2 of his course on "Complex VS Standard Fittings". This one is also an hour long, and he gives some insight into how to best fit three different types of hearing loss. I found it quite interesting, and it has given me more insight into the thinking process of an audiologist when dealing with different types of hearing loss in a given patient.

The course is really aimed at audiologists in general and is not actually specific to Oticon’s products so it should be useful to anyone, regardless of what brand of hearing aid they might have.

Here is the link once again to the series of courses:

2 Likes