Is poor word discrimination a physical deficiency?

I am trying to figure out how much brain exercising can improve word discrimination. Is poor word discrimination a physical deficiency, or is it just a lack of brain exercise?

I notice that if I listen to a sentence on the telephone, I cannot discriminate much of it; however, if I read the same sentence first (expecting what the words should be) and then listen to the same sentence, I can hear every word clearly. If poor word discrimination is purely physical (for example, loss of hair cells in the cochlea), then why can the sentence be heard clearly after I know what it says? If the brain cannot decode words because the nerves cannot pass the sound information along accurately, why can the nerves deliver the sounds correctly after I know the sentence?

Would you please share your word discrimination experiences with me? Do you hear a sentence much more clearly after you know what it says? On a daily basis, do you use captioning more than listening, and do you feel any side effects? Do you do any brain exercises to improve your word discrimination? If so, how effective are they?

This is an extremely interesting post. When I use good closed captioning, I feel that I am hearing the words and voice clearly. Oftentimes, if I know the topic of conversation, I experience the same, normal-like hearing. It's almost as though I need these clues in order to hear. I have had very limited success with top-of-the-line hearing aids. I've wondered if my reliance on closed captioning has re-wired my brain and is the cause of the continual decline of my word recognition scores while my pure tone average has stayed fairly stable. Maybe there is a type of therapy that would help? :cool:

Thank you for sharing your speech discrimination experience with me. Yes, this is exactly what I was thinking. Perhaps after relying on captioning for some time, the brain re-wires the task of translating words to a different area of the brain: the section that requires reading first in order to understand. The longer it relies on captioning, the worse the speech discrimination becomes.

However, I discussed this with my audiologist and he believes otherwise. He thinks that when the nerve gets damaged, it won't signal the brain, and therefore the brain won't be able to decode and understand the sound. But when we read a sentence, we actually bypass that specific task of the brain: the words are already decoded from reading, and all we do is listen.

Now I am wondering whether poor speech discrimination is a physical deficiency or not. If it is, then we have very limited capability to improve it; but if it is not, it should be reversible through constant brain exercising.

How can we find out which of these concepts is true?

Has anyone in this group been able to improve his/her speech discrimination?

I've always wondered what causes two people with the same (or nearly the same) hearing loss to have such different speech discrim scores. I've had hearing loss for fifteen years (only aided for eleven) and still wonder how I'm scoring one hundred percent speech discrimination at 55 dB. I also don't have any problems "hearing" in situations with lots of background noise…with my aids in, of course. I've always wondered if my getting aids relatively quickly helped preserve my discrimination. I don't know…but I notice a lot of people (my Mom included) have better hearing than I do but still struggle in noisy situations.

I wonder if there’s an actual answer?

You are right, there is definitely not a one-to-one correlation between hearing loss and speech discrimination, although both start with damage to the hearing system. It is interesting that your brain can filter out the noise and discriminate speech well. In my case, I have both poor discrimination and trouble hearing in noisy places. However, I have great speech discrimination if I look at the person, even though I am not doing any lip reading.

Would you please tell me a little more about how often you use a captioning system? What type of hearing loss do you have? Is it sensory loss? Did your hearing aids have good noise reduction during those 11 years, or did you just rely on yourself to filter the noise? Were you always in noisy environments? I am wondering if you trained your brain to filter the noise unconsciously.

When my hearing loss was near 55 dB, I had almost 100% speech discrimination. I was using the phone all the time; then one day I had to answer a very important call at work and I could not hear it clearly. I was embarrassed and decided to use a relay service and captioning thereafter. I gave up on my hearing, and after that my speech discrimination declined significantly. I cannot discriminate anything on the phone now.

By the way, my hearing loss was gradual, declining 10 dB every few years since the age of 20. This is my audiogram
(both ears almost the same)
Word discrimination: 1%

I did some research on the underlying processes of hearing versus speech recognition. These are the medical explanations of each; they should give us a few more clues.

Hearing process:
As everyone already knows, the Organ of Corti in the cochlea contains sensory cells, each with a little hair capable of picking up vibrations in the cochlear fluid. Different hair cells are specialized to detect sounds at various frequencies and turn them into nerve signals to be sent to the brain.

Speech discrimination process:
Research in speech perception seeks to understand how human listeners recognize speech sounds and use this information to understand spoken language. After the initial auditory signal is processed, speech sounds are further processed to extract acoustic cues and phonetic information. This speech information can then be used for higher-level processes, such as word recognition.
A perceptual normalization process has been proposed in which listeners filter out noise and adjust for the acoustic characteristics of a particular speaker, such as the speed of speech, in order to arrive at the underlying word category. In other words, a listener has to adapt his/her perceptual system to the acoustic characteristics of a particular speaker or noise environment.
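That normalization idea can be pictured with a toy calculation. The sketch below is only a rough illustration: the vowel "prototypes" are approximate textbook formant frequencies, and the 30%-higher speaker and the `classify` function are my own assumptions, not anything from the research described above.

```python
import math

# Toy sketch of perceptual normalization: a listener classifies a vowel by
# its first two formant frequencies (Hz), after dividing out a
# speaker-specific scale factor. All numbers here are rough illustrative
# values, not measurements.
PROTOTYPES = {
    "/a/": (730, 1090),   # roughly the vowel in "father"
    "/o/": (570, 840),    # roughly the open-o vowel in "thought"
    "/i/": (270, 2290),   # roughly the vowel in "see"
}

def classify(f1, f2, scale=1.0):
    """Pick the nearest vowel prototype after normalizing the token by a
    speaker scale factor (a crude stand-in for adapting to a speaker)."""
    norm_f1, norm_f2 = f1 / scale, f2 / scale
    return min(
        PROTOTYPES,
        key=lambda v: math.hypot(norm_f1 - PROTOTYPES[v][0],
                                 norm_f2 - PROTOTYPES[v][1]),
    )

# A speaker whose formants run ~30% higher than the reference voice says
# the /o/ vowel; without adaptation, the raw token lands nearest /a/.
token = (741, 1092)  # 570 * 1.3, 840 * 1.3
print(classify(*token))             # without normalization -> "/a/"
print(classify(*token, scale=1.3))  # after adapting to the speaker -> "/o/"
```

The point of the toy example is only that the *same* acoustic token can map to different words depending on how well the listener has adapted to the speaker, which is why damage to this adjustment step could hurt discrimination even with an intact cochlea.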

After reading these medical explanations, what is your opinion? Would declining speech discrimination be:

  1. Damage to the nerve, so that it cannot signal the brain?
  2. Dysfunction of the brain area that performs normalization processing?

Again, if the answer is 2, then there is hope that brain exercising could recover it; however, if it is 1, then there is not much that can be done.

Please share your thoughts and perceptions of these concepts with me.

I just spoke to a doctor and he gave me an interesting answer that explains some of my previous questions. Anyone who is not born deaf has a memory of sounds. When we read a sentence and then listen to it, the brain decodes the sounds from memory, not from the auditory signals.

I think that many of us, particularly those with HF or mid-frequency loss, can hear the vowels, around which we attempt to construct a meaningful sentence. What I actually hear may be "Id a ey ee or" - in context, or when I have read the phrase from captions, I can easily make it into "interesting thought," as if I had heard all the consonant information too. What I actually translated it into was "Empress in a Box," because I lacked context information and captions.

Because we are always on this constant quest to turn semi-heard information into words, we actually feel like we hear pretty well in an on-topic conversation with a familiar speaker. What can be interesting is to sit down with someone you know, have them chat to you about one thing for a while, and then have them suddenly slightly obscure their lip patterns and say something unrelated. If you are talking away about the World Series and they say "And the cats rained down in leaves," you will not hear it, because it's unrelated and it's nonsense.

Single-word speech discrimination scores, which are out of context, give us the best information about what is actually heard, as opposed to filling in the blanks Wheel of Fortune style. If someone talks about "the rising um," you are automatically going to change this word to "sun," as it's unlikely to be "um" or "bum" or "run"; it's going to be "sun." But alone, as a single word, you have no clues to go on, and that "um" could be just about anything. You didn't hear "sun"; you filled it in.

My mis-hearings usually have some basis in similar sounds, though my husband, as a normal-hearing person, doesn't see how these things are similar!! I heard "Off your machine" as "The fuhrer routine," so you can see where I've heard some information and filled in the surroundings with real words, no matter how silly those words are!

Yes, definitely, not hearing vowels makes a big difference in our speech discrimination.

Today I was practicing some of the words that I have the most difficulty hearing. I just kept replaying them over and over and listening to them. I noticed that after 10 listens, the unheard vowels started to be heard. If my brain is filling those in based on my past memory, then there should be a way for the brain to permanently fill in those vowels thereafter, whenever we hear that word again. Shouldn't there?

Summarizing my understanding: speech discrimination loss is most likely a physical deficiency, but depending on which part of the nerve or hair cells is affected, and the degree of damage to them, the result for speech discrimination differs. And although it is a physical deficiency, depending on how well we can utilize our sound memory to fill in the gaps and guess the word, we can have better or worse speech discrimination during conversation.

I always tell people that my hearing would be almost perfect if I had a device that could tell me the topic of conversation. On my iPhone I have an app called Dragon Dictation. It basically converts what you say to text. Also, in the Google search window there is a microphone symbol, and if you click it, it will convert what you say into text for the search. Neither of these requires you to "train" the software… So, the technology is there to have an app running that simply lists words that it hears from all nearby voices. It doesn't have to be like closed captioning, and it wouldn't be necessary to form sentences, which would take too long. I wish I had a software/app development background so that I could write such an app. I believe it would be a great new tool and assistive device for those with hearing loss. Deaf individuals would not benefit from this, though, and I believe they are resistant to its development, based on closed captioning discussions on this board.

There kind of is - this is why we hear people we know better than those we don't: we know in exactly what pitch and accent those people are likely to say words we have heard them say before. You cannot transfer that information to unfamiliar speakers, as they may say the words slightly differently.

Hey CryMeARiver, what a brilliant idea. I always wished that captioning (especially on the phone) was faster, and your idea would definitely accomplish that. Ideally, captioning would have two levels (full or summary) depending on our needs, and it would be available on our iPhones. I have been a software developer (computer programmer) for 30 years now. The complexity is not the programming part but recognizing or formulating the meaning of the sentence in order to derive the topic of discussion. If it were as simple as picking out just the verb of a sentence, it would be very easy to do. Have you tried pitching your idea to the "Dragon Dictation" or "CapTel" company?
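To show roughly what the easy half of this might look like, here is a minimal sketch that pulls candidate topic words out of an already-recognized transcript, simply by counting how often each non-filler word occurs. The `topic_keywords` function and the tiny stopword list are my own illustrative assumptions, not part of any real product; the genuinely hard parts (live speech recognition and real topic understanding) are not attempted here.

```python
import re
from collections import Counter

# A tiny stopword list for illustration only; a real app would need a much
# fuller one (or a proper keyword-extraction method).
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "it",
             "that", "was", "for", "on", "with", "as", "at", "this"}

def topic_keywords(transcript, top_n=3):
    """Return the most frequent non-stopword words in a transcript,
    as a crude stand-in for 'the topic of conversation'."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

print(topic_keywords(
    "The pitcher threw a strike and the crowd cheered. "
    "Another strike! The pitcher is having a great game."
))
```

Even something this crude would list "pitcher" and "strike" first for the sample above, which is exactly the kind of one-glance topic hint being described, without having to caption full sentences.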

RoseRodent, you have a good point. This means that practicing hard with words, hoping our brain will fill in the vowels thereafter, may not necessarily work, because I did not put other factors like accent and tone of voice into my practice. The brain will only learn to fill in the missing vowels based on the voice we practiced with. To make this more permanent, we would need to practice with different accents and types of voices, hoping that the brain would adjust itself to various situations.

Wow… It sounds like I need more than this lifetime to accomplish training my brain to get good speech discrimination :–(

One of the reasons phones work fairly well w/o technology is that the phone compresses the voice signal into a narrow band of roughly 300-3400 Hz (about 4 kHz of bandwidth). This technology was developed by Bell to work within the bandwidth limitations of the early 50's. Sometimes, simple amplification can overcome the high-frequency loss limitations enough to understand voice over the phone.
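As a rough illustration of why that narrow band hurts some speech sounds more than others, the sketch below checks a few speech frequencies against the telephone passband. The frequency figures are approximate textbook values I've assumed for illustration, not measurements:

```python
# Classic narrowband telephone channel, roughly 300-3400 Hz. Vowel energy
# mostly sits inside this band, while the frication noise of consonants
# like /s/ and /f/ largely sits above it, so those cues never reach the ear.
PHONE_BAND = (300, 3400)  # Hz

SPEECH_SOUNDS = {
    "vowel /a/": 800,       # approx. first-formant region
    "vowel /i/": 2300,      # approx. second-formant region
    "fricative /f/": 4500,  # approx. center of frication noise
    "fricative /s/": 5500,
}

def passes_phone(freq_hz, band=PHONE_BAND):
    """True if a frequency falls inside the telephone passband."""
    low, high = band
    return low <= freq_hz <= high

for sound, freq in SPEECH_SOUNDS.items():
    status = "passes" if passes_phone(freq) else "is cut off"
    print(f"{sound} (~{freq} Hz) {status} on the phone")
```

This lines up with the earlier posts in the thread: on the phone you mostly get the vowels and have to reconstruct the high-frequency consonants from context.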

You can try CapTel w/o signing up for the service. I think you could then hear the voice and read the captioning on the computer. One such service that needs no special equipment is sprintcapteldotcom. Once I have 15 posts, I can post the URL w/o the "dot."

Yes, I have been using CapTel and almost every similar available tool for more than 10 years now. The problem is that my last speech discrimination score dropped to 1%. It panicked me that overusing those tools has made my brain too lazy to discriminate anything now.

Just some ideas for people who want to use tools for brain practice:
"LACE" is wonderful, but it won't give you practice with various accents and voices. It has practice for voices in noisy environments, but it has only one male and one female voice with perfect accents. I am thinking of using a tool that gives me more real-life samples. Currently all my voice mails are being converted to text and email via "phonetage". I have saved messages from the various people who call me daily. I think those messages should be a wonderful practice tool, especially if a message is from someone whose voice is very difficult to discriminate.

By the way, Richard, you have such great speech discrimination: 84% with a 90 dB HF hearing loss. Do you use captioning or relay services at all? Or do you rely on just your hearing aids for all aspects of listening?

From reading the discussion here, I now know much more than how to replace the hearing aid I lost - in fact, much more about hearing, understanding, LACE, practicing with different speakers, and so on. Many thanks to all of you, especially WishtoHear. I usually avoid icons but can't resist. :slight_smile: