Can someone please explain how voices work?


I have mild-to-moderate cookie bite hearing loss, and I’m hearing pretty well with my HAs. There are some people, though, that I just cannot understand. It’s not how loud they are, as they sound perfectly loud. And it doesn’t seem to be how high or low their voices are. It seems almost as if it’s people with a sort of flattened affect to how they talk. They’ll say an entire sentence, and it just sounds like one long mumble. I’ll ask them to repeat themselves, and I still won’t understand a single word. Everyone else, I can understand most words.

I was told that different letters map to different frequencies. Is this true? Is this independent of how high or low someone’s voice is? Does anyone know of something fairly simple that I can read that would explain how voices work?

I’m wondering if it is worth trying to adjust the aids to these few people’s voices. Is it plausible to try to record a voice and take it to the audiologist? Or are there just some voices that are hard to understand?

My audiogram:
250 R30 L35
500 R30 L45
1K R45 L45
2K R50 L50
3K R45 L45
4K R35 L35
6K R20 L25
8K R15 L20


If you Google “speech banana” and then check out the “images” for the search, you will see bunches of audiograms where they place not only specific letter frequencies, but also common sounds, into their “typical” frequency and decibel ranges.
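If it helps make that concrete, here is a rough sketch in Python that compares the audiogram posted above to approximate speech banana frequency bands. The phoneme-to-frequency mapping is ballpark, read off typical speech banana charts, not a clinical reference.

```python
# Rough sketch: compare the posted audiogram to approximate speech banana
# bands. Phoneme frequencies are approximate values from typical charts.

# Audiogram thresholds in dB HL: frequency (Hz) -> (right ear, left ear)
audiogram = {
    250: (30, 35), 500: (30, 45), 1000: (45, 45), 2000: (50, 50),
    3000: (45, 45), 4000: (35, 35), 6000: (20, 25), 8000: (15, 20),
}

# Very approximate centre frequencies for a few speech sounds (Hz).
phoneme_freqs = {
    "m/n (nasals)": 250, "vowels (a, o, u)": 500, "vowels (e, i)": 1000,
    "sh/ch": 2000, "k/f": 3000, "s": 4000, "th": 6000,
}

def worst_threshold(freq):
    """Threshold (worse ear) at the audiogram frequency closest to freq."""
    nearest = min(audiogram, key=lambda f: abs(f - freq))
    return max(audiogram[nearest])

# Sounds landing in the cookie bite (>= 45 dB HL here) are the ones most
# likely to drop out of speech.
hard = [p for p, f in phoneme_freqs.items() if worst_threshold(f) >= 45]
print(hard)
```

With those numbers, the mid-frequency vowels and sounds like “sh” and “k” land right in the cookie bite, which matches the trouble described above.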

About the muddled speech, I sometimes have that issue, as well, with or without hearing aids. Like you, it won’t necessarily have to do with it being a high or low frequency voice, although, in general, men are harder for me to understand. (I have a cookie-bite loss, too.) But, sometimes, I will have trouble understanding a female or child, as well, while others I understand quite well.

When I researched this issue, I found information online about how, for cookie-biters, it is best not to over-amplify 500 and lower in the frequency range, even if there is a loss there. (I’m taking that to mean that, yes, go ahead and amplify them a little, but not very much.) Apparently, boosting 500 and below can amplify environmental sounds that will muddle the sound of speech. I just asked my audiologist about this very issue today, and she agreed that was probably happening. So I am having the 1K to 4K boosted a little (even though I have good high frequency hearing). I am going to use that as a “Speech” program while still keeping my current program, which sounds great for the general environment and most speakers. I’ll use the new “Speech” program with the amplified 1K and up for speakers who sound muddled to me.

I also want to say that when I initially received my HAs, the sound was definitely on and off muddled for certain speech from certain speakers. When the audiologist reduced the amplification on the lower frequencies, it really cleaned up the speech sounds. (So my newest “speech” program is just even more clarified.)

I don’t know if this will help, but there it is! I am having success with it, so far. Good luck to you!

Good post.

I help with a deaf-mute person, and the audiologist said about the same as you found: program out the cookie-bite curve by clipping the lower end.

Thanks for the info! I looked up the speech banana. The thing I still don’t understand, though, is how that map relates to how people actually speak. Like, is an “s” an “s” no matter if it’s a man with a low voice or a small child with a high voice?

The speech banana just depicts “typical” ranges for specific letter sounds. The whole gray banana shape is there to show the wider range of those sounds.

I would hazard a guess, based on what I know about singing, that the consonants don’t vary so much from their ranges, but the vowels can vary quite a lot, from low to high. When singers pitch their voices, it’s primarily with the vowels. Just guessing, though. However, this would explain why different people vary so much in their speaking pitches. There are monotone speakers (same volume and pitch) and melodic speakers who pitch their voices up and down quite a lot (so both the volume and the pitch change more). If you are trying to listen to a soft, monotone speaker who primarily speaks to you in the mid-frequency ranges and you have a mid-frequency loss, then you would have the most trouble with them… even if certain sounds pop into the high frequency ranges.
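To make the guess above concrete: in phonetics, a vowel’s identity is carried by its formants (resonant peaks of the vocal tract), which stay roughly put even as the pitch moves, while a sound like “s” is frication noise up around 4–8 kHz for just about any speaker, because it comes from turbulence at the teeth rather than the vocal cords. The sketch below uses made-up speakers and ballpark formant numbers to show how the loudest harmonic of a vowel drifts with pitch while the “s” band stays fixed.

```python
# Toy illustration: formants stay roughly fixed, but the harmonic of the
# speaker's pitch that carries the energy moves. All numbers are ballpark.

F1_AH_HZ = 730            # approximate first formant of the vowel /a/ ("ah")
S_BAND_HZ = (4000, 8000)  # rough frication band of /s/ for any speaker

def emphasized_harmonic(f0_hz, formant_hz):
    """Harmonic of the pitch f0 that lands nearest the formant peak."""
    n = max(1, round(formant_hz / f0_hz))
    return n * f0_hz

for name, f0 in [("low-voiced man", 100), ("woman", 210), ("child", 300)]:
    peak = emphasized_harmonic(f0, F1_AH_HZ)
    print(f"{name}: pitch {f0} Hz, /a/ energy peak near {peak} Hz, "
          f"/s/ still around {S_BAND_HZ[0]}-{S_BAND_HZ[1]} Hz")
```

So, roughly, an “s” is an “s” regardless of the speaker, but the vowel energy drifts with pitch, which is one reason speaker-to-speaker differences matter most in the mid frequencies.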

As for myself, I have an easier time understanding people who do NOT speak in a monotone style. I need to hear that wider range of pitch and volume to piece together the comprehension. Besides that, more colorful speakers have a lot more to work with, speechreading-wise. You may not realize just how much you rely on speechreading and looking at the whole big picture of the speaker-- context, mouth and facial expressions, emotion of the voice, etc. :slight_smile:

I think you should keep in mind that articulation is more likely the key to understanding speech. Many American speakers tend to slur their words; that is, they tend not to separate each of the individual sounds that make up a meaningful word.

Ask the undecipherable speakers to slow down and pronounce each syllable carefully. Good luck getting them to do this, LOL. Ed

Apparently, 400 and below can create environmental sounds that will muddle the sound of speech.

More likely that USM (Upward Spread Of Masking) kicks in.

The boosted low frequencies wash over the inner ear mechanism where they can temporarily disable the operation of the high frequency sensors.

Boosted low frequencies are NOT a Good Thing!
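For anyone curious what that upward asymmetry looks like, here is a toy model of upward spread of masking: a loud low-frequency sound raises the audibility threshold of higher frequencies much more than the reverse. The slopes and offsets below are illustrative only, not real psychoacoustic data.

```python
# Toy model of upward spread of masking. The 40 dB offset and the
# 10/30 dB-per-octave slopes are made-up illustrative values.
import math

def masking_shift_db(masker_hz, masker_db, probe_hz):
    """Very rough threshold shift a masker causes at probe_hz."""
    octaves = math.log2(probe_hz / masker_hz)
    if octaves >= 0:  # probe ABOVE the masker: shallow decay going up
        return max(0.0, masker_db - 40 - 10 * octaves)
    # probe BELOW the masker: steep decay going down
    return max(0.0, masker_db - 40 + 30 * octaves)

# A boosted 250 Hz rumble at 90 dB still masks speech cues two octaves up:
print(masking_shift_db(250, 90, 1000))   # -> 30.0 dB shift at 1 kHz
print(masking_shift_db(1000, 90, 250))   # -> 0.0, downward masking dies fast
```

The point of the asymmetry is exactly the post above: boosted lows reach up into the speech frequencies, but boosted highs barely reach down.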

I have posted a question myself on speech perception… that is something I would like to know too. Great questions, thank you :slight_smile:
Your forum is useful reading.

Amy, you’re not for real. You plagiarized my previous post, big time. (Anyone can scroll up and read my first post from which you got most of your words, changing 500 to 400, which makes no sense since they don’t screen for 400). Are you just trying to advertise here?

I was wondering about Amy also. In one post she recommended that someone consult a urologist, and in another, an oncologist. I don’t know about her, but I don’t pee out my ear…

Re:Can hearing aids be re-fitted to another person?

Hi Towermax,

Yes,It may be possible that the aids refitted and used for her ears. Take the advice of good urologist as you have said that the cost of operation is nearly about $400. It sounds not…

I reported the suspicious post and I see that it has been deleted by the forum administrators. Maybe all of them have been deleted, I don’t know.

I could not agree more. Since having the gain reduced in my lower frequencies (500 and 1K), and also in the higher frequencies (above 6K), along with using a solid SAV plug, my speech recognition in noise has improved a lot. I also had the gain in the middle frequencies, i.e., the speech banana area, increased a little, only about 4 to 6 dB SPL or so. It did help, especially with hearing speech in noise. The occlusion effect I can live with, because the speech recognition in noisy backgrounds is improved so much. It did not take long for my brain to “correct” how people sounded to me.

But on the speech banana it looks like there are a lot of letters that are in the “400 and under” pitches. If those frequencies aren’t boosted, won’t those letters be hard to hear?

This is a well-known fact.

If you program the instruments using the speech banana, you can fine-tune the gain of the instrument to maximize speech intel…

Here is a web site that might help. I’ve suggested it before, and most people seem to think it helps clear up the questions you have. There are two parts to it. Hope this helps.

Why I can Understand Some People… and Not Others.

Shi-Ku Chishiki

Nice post, it is really helpful to me.
Your post is very interesting, and I learned something new here. Nice information.
My heartiest thanks for sharing.

If you program the instruments using the speech banana, you can fine-tune the gain of the instrument to maximize speech intel…

The fitting software does this optimisation for you.

The speech banana is a great way to visualize… thanks for sharing.

When I plotted my audiogram against it, it became pretty clear why I have trouble with speech.

Thanks for the link, very helpful.