Hearing providers’ attitude towards frequency lowering

Yes, any kind of frequency lowering would be distortion in the strictest sense of the word, because you’re adding coloration to the original content.

So if the word distortion is being used in its strictest sense here, then I would rather rephrase it as “unpleasant musical aberration” instead.

If you use compression to lower the frequencies, the whole upper part of the spectrum gets squeezed into a narrower range, so with everything squished in, the aberration may become more noticeable.
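
To make the “squishing” concrete, here is a minimal sketch of the kind of piecewise frequency mapping that compression-based lowering applies. The knee point and ratio values here are made up purely for illustration, not taken from any manufacturer’s fitting software:

```python
def compress_frequency(f_hz, knee_hz=2000.0, ratio=2.5):
    """Map an input frequency to its lowered output frequency.

    Below the knee point, frequencies pass through unchanged;
    above it, the remaining range is squeezed by `ratio`, which is
    what pushes harmonics closer together than they naturally are.
    """
    if f_hz <= knee_hz:
        return f_hz
    return knee_hz + (f_hz - knee_hz) / ratio

# A 6 kHz harmonic lands at 2000 + 4000/2.5 = 3600 Hz,
# while a 1 kHz fundamental passes through untouched.
print(compress_frequency(6000.0))  # 3600.0
print(compress_frequency(1000.0))  # 1000.0
```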

If you minimize the compression and instead copy a chunk of the higher range and transpose it to a lower range, BUT leave the original higher range intact and amplified just as before, then you’re only adding on top of what was there, not altering it. If what you’re adding on top doesn’t disturb the harmonics as badly as squishing everything does, the overall musical sound may still be acceptable, especially if you can control the volume of what you’re adding on top.
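
By contrast, here is an equally rough sketch of the copy-and-layer idea: a source band is copied down into a lower destination region at an adjustable level while the original spectrum is left untouched. This is only an illustration of the principle, not Oticon’s actual Speech Rescue implementation (which runs frame by frame in real time); the band edges, shift, and mix level are arbitrary examples.

```python
import numpy as np

def transpose_and_layer(signal, fs, src_lo, src_hi, shift_hz, mix=0.5):
    """Copy the src_lo..src_hi band down by shift_hz and layer it onto
    the unaltered original (one-shot FFT sketch of the idea).
    Assumes shift_hz <= src_lo so the copy stays in range."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    bin_shift = int(round(shift_hz * len(signal) / fs))
    idx = np.nonzero((freqs >= src_lo) & (freqs <= src_hi))[0]
    layered = spec.copy()
    layered[idx - bin_shift] += mix * spec[idx]  # add the copy lower down
    return np.fft.irfft(layered, n=len(signal))

# Example: copy the 6-9 kHz band down by 4 kHz at half level. The
# original 7 kHz tone is still there; a new 3 kHz copy sits on top.
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 7000 * t)
out = transpose_and_layer(tone, fs, 6000, 9000, 4000)
```

The key property is in the `+=` line: the original bins are never removed, only summed with a scaled copy, which is the “layering on top” described above.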

For purists who listen to music only, of course you don’t want to add anything to it. But suppose you’re in a mixed environment where you need frequency lowering for better speech intelligibility while background music is playing, or you’re watching a movie with both speech and music combined. If you don’t find the musical aberration unacceptable, then it’s a good compromise to have in that mixed environment, just so you can also hear the dialog in the movie better.

That was my original intention when I added Speech Rescue to my default program; I didn’t have it in my music program. But then I noticed that I didn’t really mind how the music sounded when I watched movies. And because I’m not a purist when it comes to music, and would rather hear some of the highs in music (which I have lost entirely due to my severe high frequency loss) than hear duller music, I added Speech Rescue to my Music program as well, and I’ve been happy with it.

Thanks for responding in detail, Volusiano.
My point is that, rather than “adding coloration to the original content”, frequency lowering would seem to actually change the original content, so that, for example, the pitch of a note at 11 kHz is lowered to, say, 8 kHz, and so necessarily becomes a different note, and hence introduces distortion. Not so?

Yes, I understand your point. But you don’t lose the original note; you can still hear it fully, just the same as before, so the pitch of the note is never lost or altered into a different note as you suggest.

Now what about the copy of that note that gets lowered? Does it have the same pitch or not? That’s not clear, but remember that the transposed highs are mostly timbres and high-end harmonics, not pure notes. So if they are not pure tones, and you still hear the pure tones loud and clear, then your musical perception is still intact and not greatly altered. You end up hearing the high-end timbres and harmonics that you would otherwise miss anyway. The question is whether those lowered timbres and harmonics blend well with the original musical content or not. To me, they seem to blend well.

I’m including below a screenshot of the Speech Rescue whitepaper that talks about how they use the ERB (the width of the cochlear bandpass filters) to make the frequency selection follow the natural perceptual arrangement and minimize distortion. I know that’s a lot of mumbo-jumbo that I don’t fully understand myself. But intuitively, I interpret it to mean that they’re using some knowledge of how the cochlea works to make the lowered sounds blend in well with the original sounds.

[screenshot from the Speech Rescue whitepaper]
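
For anyone curious, the ERB mentioned in that whitepaper is a standard psychoacoustic measure of the cochlear filter bandwidths. Here’s a minimal sketch using the well-known Glasberg & Moore (1990) approximation; the example frequencies printed are arbitrary:

```python
def erb_hz(f_hz):
    """Equivalent Rectangular Bandwidth of the auditory filter
    centered at f_hz, per the Glasberg & Moore (1990) formula:
    ERB = 24.7 * (4.37 * f_kHz + 1)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

# The cochlea's filters widen with frequency, so spacing the lowered
# bands on the ERB scale follows the ear's own resolution.
for f in (500, 2000, 6000):
    print(f, "Hz ->", round(erb_hz(f), 1), "Hz wide")
# 500 Hz -> 78.7, 2000 Hz -> 240.6, 6000 Hz -> 672.3
```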

Very helpful explanation and just what I was hoping for! This makes a lot of sense and elucidates the process more clearly than anything I’ve so far come across. Thanks!

I use Speech Rescue on Oticon and have found it extremely helpful for speech recognition due to my high frequency loss.


I guess one reason for providers not promoting frequency lowering so much might be the fact that this is one feature that really noticeably differs among manufacturers. It’s hard to understand and remember the different concepts and their peculiarities, and even harder to grasp how each will sound in reality in different situations. For me, having a steep ski slope loss similar to MDB’s, frequency lowering is key. I have tried several brands, and I must say that the fitter was challenged. For some brands, the proposed setting was really useless. My current choice, on the other hand, works wonders for me.
In fact, the functionality and performance of frequency lowering is the decisive factor in my choice of preferred HA brand, as it really makes a huge difference to me. All the other features seem fairly similar across brands, especially when using open ear molds, where many of the sophisticated features cannot show their full potential anyhow.
My advice/wish to hearing providers: try the different frequency lowering methods yourself. I guess this is not easy with good hearing, but I still believe that with suitable methods a lot can be learned and understood through self-testing, at least to get a feeling for the fundamental differences between the methods.


I did turn the Audibility Extender on in a couple of programs last night. I had it turned way down volume-wise, but it was still present, which was good for trying it out.
This is going to sound weird, but for some reason it was still bleeding into the other programs I didn’t turn it on for. I have the Widex Evokes. It was on every program even though I only turned it on for two of them, so I turned it off. The last time I tried it, it was on the default Universal program and didn’t bleed over to the others like that. There was a recent update to the Compass software, and I’m wondering if there’s a software bug that turns it on to some degree everywhere under some conditions. I might try it again on some different programs. I’d like to use it, but not if it has to be on every program.

I don’t know much about Widex’s frequency lowering other than that it is different from most of the others. On Audiology Online, there’s a 5-hour master class on frequency lowering, and I’m pretty sure there’s a section that deals only with Widex.

Yes, different manufacturers use different ways to do it. I had done some research on it a while back. They keep calling it compression for some reason, though, which threw me off at first. I know they mean they’re compressing the sound into a smaller spectrum, but compression has a totally different meaning with regard to audio leveling. Compressors don’t move frequencies; they adjust their levels. It’s confusing to take a universal term that anyone who’s worked with audio knows and give it a totally different meaning.
Anyway, knowing that they use slightly different jargon may help others who look into this.


Widex does not use frequency compression (Phonak, Signia, and ReSound do); Widex’s method is dynamic transposition. I have no experience with it other than reading, but it’s supposed to lower frequencies on an octave basis to keep things “in tune.” Regarding “compression”: I think it was a mistake to use the term frequency “compression,” as it’s confusing since compression is a frequently used term in hearing aids. Hopefully for clarification: “compression” with regard to hearing aids compresses the dynamic range (different amounts of gain are applied depending on the loudness of the sound), while frequency compression squeezes a range of frequencies into a narrower, lower range.
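
A tiny sketch of that distinction, with made-up knee point, ratio, and gain values purely for illustration: WDRC changes the level of a sound as a function of its input level, while frequency lowering changes where in the spectrum the sound ends up (as in the mapping sketch earlier in the thread).

```python
def wdrc_output_level_db(input_db, knee_db=50.0, ratio=2.0, gain_db=20.0):
    """Dynamic range compression (WDRC): full gain below the knee;
    above it, each extra dB of input yields only 1/ratio dB of output.
    Frequencies are untouched -- only levels change."""
    if input_db <= knee_db:
        return input_db + gain_db
    return knee_db + gain_db + (input_db - knee_db) / ratio

# Soft sounds get more net gain than loud ones:
print(wdrc_output_level_db(40))  # 60.0 dB out (full 20 dB of gain)
print(wdrc_output_level_db(80))  # 85.0 dB out (only 5 dB of net gain)
```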


Phonak uses frequency compression, which does move the frequencies. Your observation is correct in other respects, though.

They’ve done it for years to manage feedback too.

There’s an adjustable lower fixed point, and above this you basically “accordion-squeeze” the overlapping channels into a narrower final output frequency range.


OK, so I finally got it working to where it lets me hear frequencies I couldn’t before, and I left one program for music that doesn’t use it at all.
It took a lot of tweaking and several tries to get it to where it was useful rather than distracting and annoying. Going back to the original subject of this thread, I can understand why audiologists are reluctant to use this. It would take very long sessions, or multiple sessions, to get it right, and there’s the communication issue where the patient knows what they mean, but the audiologist has a hard time figuring that out from the terms and descriptions the patient uses. That’s hard enough for getting a basic fitting right, let alone a feature like this.

I agree that using the term “compression” for both gain (volume level) compression (as in WDRC, Wide Dynamic Range Compression) and frequency compression (for frequency lowering) can easily get people confused. However, it is still a technically correct and accurate use of the word, and that’s probably why they keep using it: the technical experts don’t get as easily confused as laypeople do over that word in the right context.

Below are a couple of slides on the various frequency lowering technologies offered by the HA mfgs, taken from that Audiology Online course that MDB mentioned, along with some specifics on frequency transposition. However, I think the second slide may give the incorrect impression that the transposed content replaces the original content of the destination region, at least in the case of Oticon. In reality, the original content of the destination area is fully preserved, and the transposed content is only layered/added on top of it.

The other misleading thing about this graph is that the inaudible area seems to be gone after the transposition. At least for Oticon, I know that you have the option to keep amplifying the (otherwise) inaudible region (in case it’s still somewhat audible to you), and to just make a copy of it, transpose it, and add it on top of the audible region.

[slides from the Audiology Online course on frequency lowering technologies]

I fully agree with this. It seems like something the DIY crowd may have a better success rate with, because they can experiment and adjust things on the fly and converge on a satisfactory outcome MUCH MORE quickly. The same result might take a patient a dozen visits or more with their HCP to achieve, assuming they have a very good communication rapport with each other in the first place.

Even then, what the patient hears and describes would probably lose a lot in translation by the time the HCP interprets it. I can easily see how what takes a DIY person 5 minutes to try out could require half a dozen trips, spread over a few months, for a patient and an HCP to collaborate on.


I have mixed feelings about this. I get that audiologists are time-stressed and fitting frequency lowering isn’t simple, but if I ask a medical professional about something, I don’t expect to get blown off with “it doesn’t work” or “I don’t use it much.” People who do use it have developed ways to fit it fairly efficiently. Perhaps there need to be specialists that people can get referred to. It just seems odd that there’s this feature that hearing aid manufacturers think is important enough to include, often turning it on by default depending on the audiogram, and that there’s a plethora of educational material on how to use it, yet many (most?) choose to ignore it. I guess it’s not surprising when one considers how few use Real Ear Measurement.

This is exactly correct. The physics and neuroscience of it are complex and amazing. You don’t perceive it in the context of the music.

Read this book to catch a glimpse of the miracle:

https://www.nwcbooks.com/download/music-the-brain-and-ecstasy/

I have to admit that, when adjusted correctly for my hearing profile and at a lower volume, the frequency transposition used by Widex isn’t that bad on background music and music in general, because it transposes by octaves. There is still a very slight time delay on the transposed sounds, but it’s small enough that it wouldn’t be noticeable or bothersome to most people.
Musicians are quite a bit more sensitive to hearing delays, and music with faster attacks, like hits or plucks, makes the delay more noticeable than slower attacks (like bowing or horns). So it’s a good idea to leave one program for music without the transposition turned on, if or when this is an issue and your HAs support multiple programs.
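
A rough sketch of why octave-based transposition tends to stay musical (my own illustration of the idea described above, not Widex’s actual algorithm; the 4 kHz cutoff is an arbitrary example): halving a frequency drops it exactly one octave, so the lowered copy keeps the same pitch class as the original.

```python
def transpose_by_octaves(f_hz, cutoff_hz=4000.0):
    """Drop a frequency one octave at a time (halving it) until it
    falls below the cutoff. Octave shifts preserve pitch class, so
    the lowered copy stays 'in tune' with the original note."""
    octaves = 0
    while f_hz >= cutoff_hz:
        f_hz /= 2.0
        octaves += 1
    return f_hz, octaves

# An 11 kHz partial lands at 2750 Hz, two octaves down but the same
# note name; an arbitrary linear shift would generally change it.
print(transpose_by_octaves(11000.0))  # (2750.0, 2)
```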

First of all, let me say I am using Oticon More 1 R aids with the miniRite 85 and open domes. The prescription in the fitting software says I should be using the miniRite 100 with power (closed) domes.

I also use DSL v5 in all my programs, after some years of comparing all the other fitting formulas.

I set one of my programs to the MyMusic setting with Speech Rescue turned ON, to try out @Volusiano’s conjecture that it doesn’t seem to affect the music that noticeably.

I agree with @Volusiano, as far as my hearing will allow me! But I bow to the opinions of those more musically talented, and especially those with less hearing loss than me.

I looked at both white papers, covering Speech Rescue and the MyMusic program development.

In the Speech Rescue White Paper, they said patients may extract additional information from the high frequency music:

[screenshot of the relevant Speech Rescue whitepaper passage]

In the Oticon MyMusic Program White Paper, they said Speech Rescue is OFF by default because it might cause distortion from moving sounds to different frequencies:

[screenshot of the relevant MyMusic whitepaper passage]

I can’t detect any distortion, but with my hearing, that is not saying a lot!

I notice that you added the word “additional” (shown in bold above) when the actual quote doesn’t have that word. I just want to clarify that there’s no “additional” information to be extracted per se. The whitepaper only means that the full output bandwidth can be retained if the patient can still hear all of the amplified sound in the source region; that is, the source region does not get compressed the way it does with compression-type frequency lowering, where the original sound in the source region is lost. So in the case of Speech Rescue, you can have both: the original amplified sound as-is, and the lowered sound added on top.

As for the MyMusic whitepaper saying that Speech Rescue is turned off in MyMusic to avoid distortion, there’s nothing wrong with that, because it’s the standard approach for a music program. But as I said before, if your high frequency loss is so severe that without Speech Rescue you can’t hear any of the amplified high sounds anyway, then you have to decide for yourself whether you’d rather hear none of the highs to avoid “distortion”, or hear the lowered “distorted” sound so that at least you’re getting something from the highs. People whose high frequency hearing loss is only moderate to severe or better can do OK with the amplified sounds in the highs. But for people with severe to profound loss in the highs like mine, that range is long gone, and no amount of amplification can restore it; lowering is the only option, unless maybe you go to high-power HAs.


@Volusiano thank you for your expert advice as always!

I am now at the stage where I struggle not only to hear speech but also to comprehend and understand what has been said.

But the MyMusic program has brightened my day with wonderful streamed music; I hadn’t previously considered it. Thank you for alerting me to this program.

Cheers!
