Hearing Providers’ attitude towards frequency lowering

@Volusiano, actually the VA didn’t mind; it was the Oticon support rep who was disapproving of my efforts.

Do you have any thoughts on how I should approach the programming of Speech Rescue, based on my loss?
I may just give it a try, now that you’ve given me an incentive.

See, that’s what I mean, streaming is good.
Uh oh, we’re taking this thread in a new direction.
We may be in trouble with the OP.

If I had your hearing loss, I would want to use one of the lowest configurations toward the left, like the 2.4 configuration that has the 1.7 - 2.4 kHz range, because that’s where you still have the better hearing. I’d leave the High Frequency Bands set to ON so the original amplification is left enabled, just as if you didn’t have any frequency lowering. This way you only get the lowered sounds added on top. I’d start out with either the default strength (or a milder strength) first and adjust from there.

If you want to do on-the-fly A/B comparison, and you have enough programs available, you can have the default program without it and a second, otherwise identical program with it turned on.
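If it helps to see that logic spelled out, here’s a rough sketch in Python of how I’d think about picking a configuration from an audiogram. To be clear, this is just my own illustration: the 70 dB HL “still usable” cutoff and every band edge except the 1.7 - 2.4 kHz one mentioned above are placeholders I made up, not Oticon’s fitting rationale.

```python
# Illustrative sketch only -- the usable_db cutoff and the band edges other
# than the 2.4 configuration mentioned above are placeholders I made up,
# not Oticon's actual fitting rule.

# A few Speech Rescue-style destination bands, keyed by configuration name
# (roughly the upper edge of the destination region in kHz).
CONFIGS = {2.4: (1.7, 2.4), 2.9: (2.1, 2.9), 3.5: (2.5, 3.5)}

def pick_config(audiogram, usable_db=70):
    """audiogram: {freq_kHz: threshold_dB_HL}. Pick the lowest configuration
    whose destination band still sits where thresholds are better than usable_db."""
    usable = [f for f, thr in audiogram.items() if thr <= usable_db]
    top = max(usable, default=0)
    candidates = [c for c, (lo, hi) in CONFIGS.items() if hi <= top]
    return min(candidates, default=min(CONFIGS))

# Example: decent hearing up to ~2.5 kHz, steep drop after that -> 2.4 config
print(pick_config({1.0: 40, 2.0: 55, 2.5: 65, 3.0: 85, 4.0: 95}))  # -> 2.4
```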

Thanks, I copied that over to my to-do for the More list.

Since many posters on this thread have shared a recurring complaint that frequency lowering grossly distorts music (perhaps one reason why HCPs don’t like to advocate frequency lowering as much), I think it’s still on-topic as long as we stick to whether Oticon users notice the same music distortion with Oticon’s Speech Rescue technology or not.

Also, Speech Rescue can be selectively enabled in one program and not the others, so it’s easy to have it both ways rather than having to choose either/or. It also makes on-the-fly A/B comparison easy.

2 Likes

I must also add that my HCP never offered or mentioned frequency lowering when she fitted me with the Oticon OPN 1 either, despite my being a very good candidate for it. Since I’m a DIYer, I just added it myself later on. I didn’t bother bringing up the topic with her because she frowns on DIYers.

1 Like

I can’t see why there wouldn’t be music distortion. Since an intrinsic element of musical notes is frequency, when that frequency is lowered, there must necessarily be an alteration of pitch and hence distortion. Surely?
Genuinely looking for enlightenment here!
(I put the same question to my audiologist when discussing Phonak’s SoundRecover and found him at a bit of a loss.)

Yes, any kind of frequency lowering would be distortion in the strictest sense of the word, because you’re adding coloration to the original content.

So if the word distortion is being used in its strictest sense here, then I would rather rephrase it as “unpleasant musical aberration” instead.

If you use compression to lower the frequency, the whole spectrum of the range is compressed, so with everything squished in, the aberration may become more noticeable.

If you minimize the compression and instead copy a chunk of the higher range and transpose it to a lower range, but leave the original higher range intact and amplified just as it was before, then you’re only adding on top, not altering what was there before. If what you’re adding on top doesn’t disturb the harmonics as badly as squishing everything does, the whole musical sound may still be acceptable, especially if you can control the volume of what you’re adding on top.
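To make the squish-versus-layer distinction concrete, here’s a toy sketch. It’s purely my own illustration, not any manufacturer’s algorithm; the knee point, ratio, source band, shift, and mix level are made-up numbers.

```python
# Toy illustration of the two approaches described above, operating on a list
# of spectral components (frequency in Hz, relative level). Numbers are made up.

def compress(components, knee=1500, ratio=2.0):
    """Frequency compression: squeeze everything above the knee into a
    narrower range, which alters the original components."""
    return [(f if f <= knee else knee + (f - knee) / ratio, lvl)
            for f, lvl in components]

def transpose_and_layer(components, src=(5000, 8000), shift=-4000, mix=0.5):
    """Transposition with the original left intact: copy the source band down
    and ADD it on top; nothing that was there before is moved."""
    copies = [(f + shift, lvl * mix) for f, lvl in components
              if src[0] <= f <= src[1]]
    return components + copies

notes = [(440, 1.0), (880, 0.6), (5280, 0.2)]   # fundamental plus harmonics
print(compress(notes))             # the 5280 Hz component is moved to 3390 Hz
print(transpose_and_layer(notes))  # 5280 Hz stays put; a quieter copy lands at 1280 Hz
```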

For purists who listen to music only, of course you don’t want to add anything to it. But if you’re in a mixed environment where you need frequency lowering for better speech intelligibility and there’s also background music going on, or if you’re watching a movie with speech and music combined, and you don’t find the musical aberration unacceptable, then it’s a good compromise to have in that mixed environment, just so you can also hear the dialog in the movie better.

That was my original intention when I added Speech Rescue to my default program, and I didn’t have it in my music program. But then I noticed that I didn’t really mind how the music sounded when I watched movies. And because I’m not a purist when it comes to music and would rather hear some of the highs (which I have lost entirely due to my severe high-frequency loss) than hear duller music, I added Speech Rescue to my Music program as well, and I’ve been happy with it.

Thanks for responding in detail, Volusiano.
My point is that, rather than “adding coloration to the original content”, frequency lowering would seem to actually change the original content: the pitch of a note at, say, 11 kHz is lowered to, say, 8 kHz, so it necessarily becomes a different note and hence introduces distortion. Not so?

Yes, I understand your point. But you don’t lose the original note; you can still hear it fully, just the same as before, so the pitch of the original note is never lost or altered into a different note as you suggest.

Now how about the copy of that note that gets lowered? Does it have the same pitch or not? That’s not clear, but remember that the highs that get transposed down are mostly timbres and high-end harmonics, not pure notes. So if they are not pure tones, and you still hear the pure tones loud and clear, then your musical perception is still intact and not greatly altered. You end up hearing the high-end timbres and harmonics that you would otherwise miss anyway. The question is whether those high-end timbres and harmonics blend well with the original musical content or not. To me, they seem to blend well.

I’m including below a screenshot of the Speech Rescue whitepaper that talks about how they use the ERB (the width of the cochlear bandpass filters) to make the frequency selection follow the natural perceptual arrangement and minimize distortion. I know that’s a lot of mumbo-jumbo that I don’t fully understand myself. But intuitively, I interpret it to mean that they’re using some kind of knowledge about how the cochlea works to make the lowered sounds blend in well with the original sounds.

[Screenshot: Speech Rescue whitepaper excerpt on ERB-based frequency selection]
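For anyone curious what ERB actually means there: it’s the equivalent rectangular bandwidth of the auditory filter at a given frequency. Whether Oticon uses exactly this formula I can’t say, but the commonly cited Glasberg & Moore approximation gives a feel for how the cochlear filters widen toward the highs:

```python
# Glasberg & Moore approximation of the ERB (equivalent rectangular bandwidth)
# of the auditory filter. Shown only to illustrate what "ERB" refers to; I don't
# know whether the Speech Rescue whitepaper uses this exact formula.

def erb_hz(f_hz):
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

for f in (1000, 2000, 4000):
    print(f, round(erb_hz(f)))   # filters get wider as frequency goes up
# 1000 -> ~133 Hz, 2000 -> ~241 Hz, 4000 -> ~456 Hz
```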

Very helpful explanation and just what I was hoping for! This makes a lot of sense and elucidates the process more clearly than anything I’ve so far come across. Thanks!

I use Speech Rescue on my Oticon aids and have found it extremely helpful for speech recognition due to my high-frequency loss.

1 Like

I guess one reason for providers not promoting frequency lowering so much might be the fact that this is one feature that really noticeably differs among manufacturers. It’s hard to understand and remember the different concepts and their peculiarities, and even harder to grasp how they will sound in reality in different situations. For me, having a steep ski-slope loss similar to MDB’s, frequency lowering is key. I have tried several brands, and I must say that the fitter was challenged. For some brands, the proposed setting was really useless. My current choice, on the other hand, works wonders for me.
In fact, the functionality and performance of frequency lowering is the decisive factor in my choice of preferred HA brand, as it really makes a huge difference to me. All other features seem more similar to me across the different brands, especially when using open ear molds, where many of the sophisticated features don’t show their full potential anyhow.
My advice/wish to hearing providers: try the different frequency lowering methods yourself. I guess this is not easy with good hearing, but I still believe that with suitable methods a lot can be learned and understood through self-testing, at least to get a feeling for the fundamental differences between the methods.

1 Like

I did turn on the Audibility Extender in a couple of programs last night. I had it turned way down volume-wise, but it was still present, which was good for trying it out.
This is going to sound weird, but for some reason it was still bleeding into the other programs I didn’t turn it on for. I have the Widex Evokes. It was on every program even though I only turned it on for two of them, so I turned it off. The last time I tried it, it was on the default Universal program and didn’t bleed over like that to the others. There was a recent update to the Compass software, and I’m wondering if there’s a software bug that turns it on to some degree everywhere under some conditions. I might try it again on some different programs. I’d like to use it, but not if it has to be on every program.

I don’t know much about Widex’s frequency lowering other than that it’s different from most of the others. On Audiology Online, there’s a five-hour master class on frequency lowering, and I’m pretty sure there’s a section that deals only with Widex.

Yes, different manufacturers use different ways to do it. I had done some research on it a while back. They keep calling it compression for some reason, though, which threw me off at first. I know they mean they’re compressing the sound into a smaller spectrum, but compression has a totally different meaning with regard to audio leveling. Compressors don’t move frequencies; they adjust their levels. It’s confusing to take a universal term that anyone who’s worked with audio knows and give it a totally different meaning.
Anyway, knowing that they use slightly different jargon may help others who look into this.

1 Like

Widex does not use frequency compression (Phonak, Signia, and Resound do). Widex’s method is dynamic transposition. I have no experience with it other than reading, but it’s supposed to lower frequencies on an octave basis to keep things “in tune.” Regarding “compression”: I think it was a mistake to use the term frequency “compression,” as it’s confusing, since compression is already a frequently used term in hearing aids. Hopefully for clarification: “compression” with regard to hearing aids compresses the dynamic range (different amounts of gain are applied depending on the loudness of the sound), whereas frequency compression compresses the frequencies and then moves them to lower frequencies.
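To underline the difference, here’s a minimal sketch of the two unrelated things that both get called “compression.” The knee points, ratios, and gains are made up for illustration; real fitting formulas are more involved.

```python
# Two unrelated things both called "compression" in hearing aids.
# Knee points, ratios, and gains below are made-up illustrative numbers.

def wdrc_gain_db(input_db, knee_db=50, ratio=2.0, base_gain_db=20):
    """Dynamic range compression (WDRC): softer sounds get more gain than loud ones."""
    if input_db <= knee_db:
        return base_gain_db
    return base_gain_db - (input_db - knee_db) * (1 - 1 / ratio)

def frequency_compression_hz(f_hz, knee_hz=2000, ratio=2.0):
    """Frequency compression: frequencies above the knee are remapped lower."""
    if f_hz <= knee_hz:
        return f_hz
    return knee_hz + (f_hz - knee_hz) / ratio

print(wdrc_gain_db(40), wdrc_gain_db(80))   # 20 dB of gain vs 5 dB of gain
print(frequency_compression_hz(6000))       # a 6000 Hz input ends up at 4000 Hz
```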

3 Likes

Phonak uses Frequency Compression which ‘does’ move the frequencies. Your observation is correct in other respects though.

They’ve done it for years to manage feedback too.

There’s an adjustable lower fixed point, and above this you basically ‘accordion squeeze’ the overlapping channels into a narrower final output frequency range.

1 Like

OK, so I finally got it working to where it does let me hear frequencies I couldn’t hear before, and I left one program for music that doesn’t use it at all.
It took a lot of tweaking and several tries to get it to where it was useful rather than distracting and annoying. Going back to the original subject of this thread, I can understand why audiologists are reluctant to use this. It would take very long sessions, or multiple sessions, to get it right, and there’s the communication issue where the patient knows what they mean, but the audiologist has a hard time figuring that out from the terms and descriptions the patient uses. That’s hard enough for getting a basic fitting right, let alone a feature like this.

I agree that using the term “compression” for both gain (volume level) compression (such as WDRC, Wide Dynamic Range Compression) and frequency compression (for frequency lowering) can easily get people confused. However, it’s still a technically correct and accurate use of the word, and that’s probably why they keep using it anyway: within the right context, the technical experts don’t get confused by the word as easily as laypeople do.

Below are a couple of slides on the various frequency lowering technologies offered by the HA manufacturers, taken from that Audiology Online course that MDB mentioned, plus some specifics on frequency transposition. However, I think the second slide may give the incorrect impression that the transposed content replaces the original content of the destination region, at least in the case of Oticon. In reality, the original content of the destination area is fully preserved, and the transposed content is only layered/added on top of it.

The other misleading thing about this graph is that the inaudible area seems to be gone after the transposition. At least for Oticon, I know you have the option to keep amplifying the inaudible region (in case it’s still somewhat audible to you, though not as well) and just make a copy of it, transpose it, and add it on top of the audible region.


3 Likes