Does SpeechRescue make sense for only one ear?

Recent audiogram suggests inaudibility in mid frequencies for right ear. I have the Oticon Real aids, and there’s some perceptible difference in “volume” between left and right aids. Would the right ear benefit from SpeechRescue and can it be applied only to that side (or is it applied to both sides)?

There’s another setting, “Spatial Noise Management”. Is this something to consider turning off for my loss?

I presume this attenuates the volume for the ear with the lower signal-to-noise ratio. My right ear has dead frequency spots, and speech comprehension is a lot worse than in the left ear. So if someone is talking on my right side, the left ear’s output will be attenuated with this setting on? Since I rely on the left ear to understand speech, perhaps I should ask my audi to leave this setting off?

Also, will this completely disable the Spatial Balancer, or is there more to it than just the noise management?

As you can see below, the lowest destination configuration for Speech Rescue is 1.6 to 2.4 kHz. The loss in your right ear already takes a dive right at 1.5 kHz, stays flat until 4 kHz, then goes back up. So even for your right ear, Speech Rescue doesn’t make sense, because your audibility in the lowest destination region isn’t any better than in the subsequent higher regions.

So in general, yes, Speech Rescue does make sense even for just one ear. It’s just that neither of your ears’ hearing losses is a good fit for Speech Rescue.

image
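Just to make the frequency-lowering concept concrete, here’s a toy Python sketch of the general idea: copy spectral energy from a high-frequency source band down into a lower destination band (I’m using 4–8 kHz into 1.6–2.4 kHz to match the lowest destination region discussed above). To be clear, this is purely illustrative and is NOT Oticon’s actual Speech Rescue algorithm; the function name, gain value, and the peak-pooling compression are all my own made-up stand-ins.

```python
import numpy as np

def lower_frequencies(signal, fs, src=(4000.0, 8000.0),
                      dst=(1600.0, 2400.0), gain=0.5):
    """Toy frequency-lowering sketch: move energy from a high-frequency
    source band into a lower destination band. Illustrative only; not
    Oticon's actual Speech Rescue processing."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    src_bins = np.flatnonzero((freqs >= src[0]) & (freqs < src[1]))
    dst_bins = np.flatnonzero((freqs >= dst[0]) & (freqs < dst[1]))
    # Compress the wider source band into the narrower destination band:
    # each destination bin takes the peak magnitude of its group of
    # source bins (a crude stand-in for proper band compression).
    group = len(src_bins) // len(dst_bins)
    pooled = np.abs(
        spec[src_bins[:group * len(dst_bins)]]
    ).reshape(len(dst_bins), group).max(axis=1)
    spec[dst_bins] += gain * pooled  # mix the lowered copy into the destination band
    spec[src_bins] = 0.0             # drop the original high-frequency content
    return np.fft.irfft(spec, n=len(signal))
```

Running this on a pure 6 kHz tone moves its energy down to around 2 kHz, which is the whole point: sounds the damaged cochlear region can’t hear get re-presented where hearing is (hopefully) better.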

If you look at Figure 3 below in Oticon’s 2013 paper on Spatial Sound management, ONLY the noise is attenuated on the ear with the poorer signal-to-noise ratio, not the speech signal. So even if that happens to be your better ear, it’s still helpful to enable Spatial Noise Management to improve the signal-to-noise ratio of the speech in your better ear.

I really can’t think of a situation where you would want to turn Spatial Noise Management off, but if they give people the option, maybe there’s a reason. The only one I can see is that it causes an imbalance in the noise suppression, which people may not like if they want to be able to hear the noise for some reason. One person’s noise may be another’s valuable audio information, I guess.

The second screenshot below mentions how the extent of the spatial noise management is influenced by your personal profile selections. I guess the option simply gives the user a way to turn it off altogether if they want.

image
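To make the “attenuate only the noise, only on the poorer-SNR ear” idea concrete, here’s a minimal sketch of that decision rule. Everything here is an assumption of mine for illustration: the function name, the 10 dB ceiling, and the `strength` parameter (standing in for the personal-profile influence the screenshot mentions) are invented, not Oticon’s actual DSP.

```python
def noise_attenuation_db(snr_left_db, snr_right_db,
                         strength=1.0, max_atten_db=10.0):
    """Toy rule: return (left, right) attenuation in dB applied to the
    NOISE estimate only, never the speech. The ear with the poorer SNR
    gets the attenuation; `strength` in [0, 1] stands in for the
    personal-profile setting (0 = feature effectively off).
    Illustrative numbers, not Oticon's algorithm."""
    atten = strength * max_atten_db
    if snr_left_db < snr_right_db:
        return (atten, 0.0)   # left ear has the poorer SNR
    elif snr_right_db < snr_left_db:
        return (0.0, atten)   # right ear has the poorer SNR
    return (0.0, 0.0)         # equal SNR: no asymmetric suppression
```

The point the sketch makes is the same one as above: even when your better ear is the one being processed, only its noise estimate is turned down, so the net effect is a better speech-to-noise ratio on that side.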

As for the Spatial Balancer feature you mentioned, I don’t really see such a feature called out. But the whitepaper mentions Spatial Sound Premium and Spatial Sound Advanced, and the screenshot below says that Binaural Processing is part of these two features. I assume that your Spatial Balancer is basically the Binaural Processing. If they don’t give you an option to turn it off, then I guess it’s always enabled and is not affected if you turn off Spatial Noise Management.


Frequency lowering would be of benefit to you.

I am not familiar enough with Oticon’s programming software to say how each ear is adjusted.

Think about localization or directionality in hearing. Your brain is very attuned to what it hears on each side. So yes, frequency lowering would help you.

Has your user name been hacked by an AI chatbot?

The audiogram (being reverse-sloped in the mid and high frequencies) is particularly unsuited to frequency lowering.

Also, localisation/directionality is a function of the brain and stereo signalling with preserved time delays between the ears; how does reducing sounds in the better part of the hearing chart and boosting them into lower frequencies improve directionality?

The OP’s right ear might have dead spots that could be helped with frequency lowering. At least that was my take on it. Just trying to help.

It is not my first rodeo but I do understand we hear with our brains. No need to be the way you were about that. I have respected you for many years. You have helped me and many others with your expert advice.

Sorry: I genuinely couldn’t make out what you were saying - it looked like your response/account had been hacked, with lots of relevant words, just not making your normal coherent argument.

Honestly, these losses don’t benefit from futzing around with shifting bits of speech to lower parts of the audiogram. They likely have resolution issues anyway (possibly with the dead spots, as you mention), so dumping a chunk of HF ‘noise’ over the top of the basic mid-range signalling is just going to confuse things. The HF loss is severe but usable; probably best to leave the sound where it should be.

Binaural hearing (with a degree of directionality) will be key to somewhat better speech understanding, but a remote mic (possibly via iPhone) would be a game changer in background noise, IMHO.

Agreed.
It doesn’t make sense to me either at this time. Dyslexia messes me up sometimes if I don’t proofread my writing. That, and things are crazy busy here at the house right now.

Thanks for the explanation about using the few high frequencies the OP still has over frequency lowering.
