More podcast summaries from those I’ve listened to:
1. What to expect from automatic directionality.
Differentiation between adaptive and automatic directionality: adaptive directionality refers to the shape of the polar plot (of the microphone). Here, the directional pattern of the mic can change shape; the system works by minimizing the mic's sensitivity in the direction of whatever sounds seem to be dominant coming in from the back or the sides. Automatic directionality refers to the switching mechanism that determines whether the mic is in omni or in one of the other directional modes. Different companies have different switching mechanisms.
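To make the adaptive part concrete, here's a minimal sketch of the textbook delay-and-subtract technique (my own illustration of the general idea, not Oticon's actual algorithm; the mic spacing, step size, and NLMS update are all my assumptions): two omni mics are combined into a forward- and a backward-facing cardioid, and a single parameter beta is adapted to minimize output power, which steers the polar plot's null toward the dominant sound behind the wearer.

```python
import numpy as np

fs = 16000                    # sample rate (Hz), assumed
c = 343.0                     # speed of sound (m/s)
d = c / fs                    # mic spacing (~21 mm, larger than a real HA, keeps the delay at 1 sample)
delay = 1                     # front-to-back travel time in samples

rng = np.random.default_rng(0)
noise = rng.standard_normal(fs)               # noise source directly behind (180 deg)
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 300 * t)          # stand-in "speech" from the front (0 deg)

front_mic = np.roll(noise, delay) + speech    # rear noise reaches the front mic late
rear_mic = noise + np.roll(speech, delay)     # front speech reaches the rear mic late

# Delay-and-subtract gives two back-to-back cardioids:
F = front_mic - np.roll(rear_mic, delay)      # forward cardioid (null at 180 deg)
B = rear_mic - np.roll(front_mic, delay)      # backward cardioid (null at 0 deg)

# Adapt beta to minimize the power of y = F - beta*B (NLMS); beta in [0, 1]
# sweeps the null of the combined polar pattern around the rear half-plane.
beta, mu = 0.8, 0.05
for n in range(len(F)):
    y = F[n] - beta * B[n]
    beta += mu * y * B[n] / (B[n] ** 2 + 1e-9)
    beta = min(max(beta, 0.0), 1.0)

print(f"beta -> {beta:.2f}")  # drifts toward 0: the null settles on the rear noise
```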
The Oticon switching mechanism looks at the different directional modes (omni, full directional, or split directional) and selects the mode that gives the best signal-to-noise ratio (SNR) in that listening environment. Oticon's automatic directional system, especially its multi-band adaptive system, has an AI-DI result of a 4.5-5 dB SNR improvement in a diffuse-noise situation, but actual measured patient performance is more around a 2-4 dB SNR improvement. A 3 dB SNR improvement is noticeable in a diffuse-noise environment IF the noise level is not too high or too low, but somewhere in between. In a quiet room, the SNR is already high enough that 3 dB more is not necessary. In a very noisy place, a 3 dB improvement is just a drop in the bucket and no help at all.
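As a toy illustration of that switching idea (the mode names, signals, and the crude percentile-based SNR estimator below are all my own stand-ins, not anything Oticon has published), the switcher just estimates the SNR each mode would deliver and picks the winner:

```python
import numpy as np

def estimate_snr_db(x, frame=160):
    """Crude SNR estimate: loudest 10% of 10 ms frames vs. the quietest 10%."""
    frames = x[: len(x) // frame * frame].reshape(-1, frame)
    power = frames.var(axis=1) + 1e-12
    return 10 * np.log10(np.percentile(power, 90) / np.percentile(power, 10))

def pick_mode(mode_outputs):
    """mode_outputs maps a directional-mode name to its output signal."""
    snrs = {mode: estimate_snr_db(x) for mode, x in mode_outputs.items()}
    return max(snrs, key=snrs.get), snrs

# Demo: a gated tone stands in for speech, with a different noise level per mode.
rng = np.random.default_rng(1)
t = np.arange(16000) / 16000
speech = np.sin(2 * np.pi * 220 * t) * (rng.random(100) > 0.5).repeat(160)
outputs = {
    "omni":      speech + 0.50 * rng.standard_normal(t.size),
    "split_dir": speech + 0.30 * rng.standard_normal(t.size),
    "full_dir":  speech + 0.15 * rng.standard_normal(t.size),
}
best, snrs = pick_mode(outputs)
print(best, {m: f"{s:.1f} dB" for m, s in snrs.items()})   # full_dir wins here
```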
Just because there's noise in the environment doesn't mean the directional system is automatically going to be effective. There are factors beyond the level of the noise in the environment that affect the performance of the directional system. Studies have shown three important criteria for a directional system to work well for the patient:
a. The speaker needs to be in front of and not too far from the patient (within about 6 feet).
b. The noise has to come from the back or the sides.
c. And most importantly, there must NOT be a lot of reverberation in the environment.
A Walter Reed study shows that, across different environments, directionality is preferred only 31% of the time, omni is preferred 41% of the time, and there is no preference 28% of the time.
If patients have a better understanding of when directionality works well and when it doesn't, and know how to position themselves in certain configurations to maximize its effectiveness, then they will be more successful with its use and less disappointed, because they have more realistic expectations about it.
2. What to expect from noise reduction
Speech tends to have high modulation (amplitude variation) and noise tends to have low modulation, so many hearing aid systems on the market use this distinction to determine what is speech and what is noise. Basic noise reduction systems look at individual channels (however many frequency channels a HA has, from a few up to 64) and decide whether or not to attenuate the signal in each channel based on the modulation level seen in that channel. The challenge, however, is that there may be both speech and noise superimposed on each other in a channel, and the system will have a hard time telling the difference.
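Here's a bare-bones sketch of that per-channel decision (my own generic illustration of modulation-based NR, not any manufacturer's code; the band edges, the 0.3 depth threshold, and the 10 dB maximum attenuation are arbitrary assumptions): measure the envelope modulation depth in a channel and attenuate the channel when the depth looks too low to be speech.

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfilt

def channel_gain(x, fs, lo, hi, depth_threshold=0.3, max_atten_db=10):
    """NR gain for one frequency channel of signal x."""
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    env = np.abs(hilbert(sosfilt(sos, x)))        # channel amplitude envelope
    depth = env.std() / (env.mean() + 1e-12)      # crude modulation depth
    if depth >= depth_threshold:                  # speech-like modulation: keep
        return 1.0
    return 10 ** (-max_atten_db / 20)             # noise-like: attenuate

fs = 16000
t = np.arange(fs) / fs
speechy = np.sin(2 * np.pi * 1000 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
steady = np.sin(2 * np.pi * 1000 * t)
print(channel_gain(speechy, fs, 800, 1200))   # 1.0  (4 Hz modulation: kept)
print(channel_gain(steady, fs, 800, 1200))    # ~0.32 (unmodulated: -10 dB)
```

With speech and noise mixed in the same channel, the measured depth lands somewhere in the middle, which is exactly the ambiguity the next paragraph's Synchrony Analysis is meant to resolve.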
To address this problem, Oticon developed a second analysis function (called Synchrony Analysis) to determine whether or not speech is present in the mixed speech+noise signal. Synchrony Analysis looks at a much shorter time window across the high-frequency regions and looks for synchronous activity, which implies a strong harmonic structure indicative of speech or music. In this case, even though the signal is mostly unmodulated (which by itself would indicate a noise signal that should be attenuated), Synchrony Analysis says there is speech inside this mixed speech+noise signal, so attenuation is not carried out at full strength the way it would be if no synchronous activity were detected.
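My rough reading of how such a synchrony check could work, sketched below (the 20 ms window, the band layout, and the correlation scoring are purely my guesses; the podcast only says it uses a short window over the high frequencies): envelopes of several high-frequency bands rise and fall together when a common harmonic source, like voiced speech, excites them all at once.

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfilt

def synchrony_score(x, fs, bands=((2000, 3000), (3000, 4000), (4000, 5000)),
                    win_ms=20):
    """Mean pairwise correlation of band envelopes over a short window."""
    seg = x[-int(fs * win_ms / 1000):]            # latest short analysis window
    envs = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        envs.append(np.abs(hilbert(sosfilt(sos, seg))))
    pairs = [(i, j) for i in range(len(envs)) for j in range(i + 1, len(envs))]
    return float(np.mean([np.corrcoef(envs[i], envs[j])[0, 1] for i, j in pairs]))

fs = 16000
pulses = np.zeros(fs)
pulses[:: fs // 150] = 1.0                        # 150 Hz glottal-like pulse train
noise = np.random.default_rng(2).standard_normal(fs)
print(synchrony_score(pulses, fs))   # high (~1): bands pulse together -> harmonic
print(synchrony_score(noise, fs))    # low: band envelopes move independently
```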
The best way to describe the situations where a noise reduction system is expected to have an effect is where there is higher-level noise that's NOT like speech (steady-state noise such as road traffic, or mechanical noises like AC blowers). But when the competition is somebody else talking, even several other speakers, noise reduction is not going to be as effective: unless there is a tremendous number of people talking, like the roar of a restaurant or cafeteria, there's no way for the NR system to know which is the desired target speech and which are the undesired talkers. So the NR system is going to try to protect all of the speech. That's where the brain has to come in and do its own filtering.
P.S. (not from the podcast anymore; this is my own commentary) -> I believe the podcast above is from pre-OPN times and describes the type of NR Oticon employed before the deployment of the OPN. With the OPN's OpenSound Navigator processing algorithm, I'm sure Oticon leverages its NR know-how from before, but it also does something new and different that it didn't do before -> it uses the back-facing cardioid mic to create a noise “model” in the Analyze module that is fed into the Balance module and the Noise Removal module to help with NR. This noise model is basically the sounds to the sides and back of the listener as picked up by the back-facing mic.
In the Noise Removal module, it employs a similar strategy of looking at a very short time interval (10 ms) to detect the differences between the omni signal, which contains the speech, and the noise model's signal, and it attenuates the noise model out of the overall signal where it sees a difference.
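A speculative sketch of how such a compare-and-attenuate step might look (the 10 ms window comes from the description above; the FFT band layout, the Wiener-style gain rule, and the -12 dB floor are entirely my own stand-ins, not OPN internals):

```python
import numpy as np

def nr_gains(omni_win, noise_model_win, n_bands=16, floor_db=-12):
    """Per-band gains for one short window (e.g. 160 samples = 10 ms at 16 kHz)."""
    O = np.abs(np.fft.rfft(omni_win)) ** 2         # omni (speech+noise) band powers
    N = np.abs(np.fft.rfft(noise_model_win)) ** 2  # back-cardioid noise-model powers
    edges = np.linspace(0, len(O), n_bands + 1, dtype=int)
    gains = np.empty(n_bands)
    for b in range(n_bands):
        o = O[edges[b]:edges[b + 1]].sum() + 1e-12
        nm = N[edges[b]:edges[b + 1]].sum() + 1e-12
        # Wiener-like rule: keep bands where the omni has power the noise
        # model can't explain (front speech); pull the rest down to the floor.
        gains[b] = np.clip(1.0 - nm / o, 10 ** (floor_db / 20), 1.0)
    return gains

# Demo: the noise model explains every band except a strong 2 kHz "speech" tone.
fs = 16000
t = np.arange(160) / fs
noise_win = np.random.default_rng(3).standard_normal(160)
omni_win = noise_win + 2.0 * np.sin(2 * np.pi * 2000 * t)
print(np.round(nr_gains(omni_win, noise_win), 2))  # ~0.25 everywhere but the tone's band
```

The idea being that where the back-facing noise model accounts for most of a band's power, that band gets pulled down, and where the omni signal has extra (presumably frontal speech) energy, the band is left alone.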
This noise model is a key differentiation in the OPN technology that was not employed in the previous Oticon NR technology discussed in the podcast above.