Well, since I particularly enjoyed those 2 podcasts I mentioned and want to share them, I don’t mind summarizing their key points below.
1) Why Is Noise So Difficult?
In a noisy environment, what you’re trying to do is create a map of the sound-generating activities around you, isolating the different sources of sound so your brain can decide what to focus on and what to ignore. That’s the basis of the cognitive task by which a normal-hearing person separates noise from their target sound. The integrity of ALL sounds is crucial for this cognitive function to isolate and focus on the desired target sound.
The outer hair cells provide sharp tuning for the inner hair cells, so that when multiple sounds come in, the inner hair cells can make a clear and easy distinction between them to help facilitate the brain’s cognitive function. In sensorineural loss, the loss of outer hair cells dulls that tuning and makes it harder for the inner hair cells to distinguish sounds as clearly as before. The loss of inner hair cells then makes it even harder, blurring the sounds further so that instead of distinct, recognizable sounds, you get a blurry combination of sounds that all meld together.
You can best describe sensorineural hearing loss as the inability to organize sounds. It’s not so much what the patient doesn’t hear, but rather what the patient can’t do, which is to take all the sound that comes into the peripheral auditory system and separate it into different sources… You can’t focus on what you want to focus on because the auditory system doesn’t let you resolve the cacophony of incoming sounds into separate sources.
2) Hearing Aid Technology and Noise
Traditional directional technology (direction-based) can help block out sounds from the back and sides, focus on sound in front, and improve the signal-to-noise ratio to a certain degree, but it doesn’t really make the noise go away altogether. The second approach is to give the patient as much information as possible, because as mentioned in the previous podcast, understanding speech in a noisy environment, especially in competition with other talkers, is primarily a cognitive task. The cognitive system likes to get as much information as it can get its hands on. If the brain can get enough information from the auditory system, it can sort through that competition and try to disentangle it from the speech signal of interest. Sensorineural hearing loss puts a major limitation on that function, but if hearing aid technology can’t make the noise go away, then the next best thing it can do is provide the auditory system with as much information as possible, and as CLEAN as possible, and let the brain do what it does very well.
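To make the directional idea a bit more concrete, here’s a minimal sketch (my own illustration, not from the podcast) of the classic first-order delay-and-subtract scheme that many directional microphones are built on: two closely spaced microphones plus a small internal delay produce a cardioid pattern, so a plane wave arriving from the rear cancels out while sound from the front passes through. The spacing and frequency values are just example numbers.

```python
import numpy as np

def cardioid_response(theta_deg, freq_hz, d=0.01, c=343.0):
    """Magnitude response of a two-mic delay-and-subtract array.

    theta_deg: arrival angle (0 = front, 180 = rear)
    d: mic spacing in meters, c: speed of sound in m/s.
    Setting the internal delay equal to d/c gives a cardioid
    with a null at the rear.
    """
    theta = np.deg2rad(theta_deg)
    tau_ext = d * np.cos(theta) / c   # acoustic travel time between mics
    tau_int = d / c                   # internal electronic delay
    omega = 2 * np.pi * freq_hz
    # Output = front mic minus delayed rear mic, in the frequency domain:
    return abs(1 - np.exp(-1j * omega * (tau_ext + tau_int)))

front = cardioid_response(0, 1000)    # passes through
rear = cardioid_response(180, 1000)   # delays cancel: near-perfect null
```

This is exactly the trade-off the podcast describes: sound from behind is attenuated, which improves the signal-to-noise ratio, but it only helps when the noise actually sits behind the listener.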
It turns out that the cognitive system is very good at “glimpsing” signals, or looking for gaps in the competing speech to identify the target speech within those gaps, then using all the other cues (visual cues, linguistic cues, situational cues) to link everything together into one overall stream to follow over time. So any amplification strategy that restricts the bandwidth of the device actually decreases the number of opportunities the listener has to “glimpse” the signal.
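A toy sketch of that last point, under my own simplifying assumptions (random made-up levels, and a common working definition of a glimpse as a time-frequency cell where the target exceeds the masker by a few dB): counting glimpses over a full set of frequency channels versus only the lower half shows how cutting bandwidth removes glimpse opportunities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy time-frequency grid: rows = frequency channels, cols = time frames.
# Target-talker and competing-masker levels in dB (synthetic data).
n_channels, n_frames = 32, 200
target = rng.normal(60, 8, (n_channels, n_frames))
masker = rng.normal(60, 8, (n_channels, n_frames))

def count_glimpses(target_db, masker_db, margin_db=3.0):
    """Count time-frequency cells where the target stands out of the
    masker by at least margin_db -- the listener's 'glimpse' chances."""
    return int(np.sum(target_db - masker_db > margin_db))

full_band = count_glimpses(target, masker)
# Restricting device bandwidth = discarding the upper half of the channels.
narrow_band = count_glimpses(target[:n_channels // 2],
                             masker[:n_channels // 2])
```

With fewer channels delivered to the ear, `narrow_band` necessarily comes out smaller than `full_band`: every high-frequency cell that is discarded is a glimpse the brain never gets to use.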
3) Fitting Low Frequency Hearing Loss
I won’t go into too much detail here, except that the point of this podcast is that for low frequency hearing loss, they have found it more effective to fit the patient around what they can still hear well (the high frequencies) instead of doing the conventional thing and fitting the patient by compensating for what they can’t hear (adding gain at the low frequencies). It may seem counter-intuitive, but they’ve found that this strategy actually works better for speech understanding in these patients, by maximizing the patient’s residual capabilities instead of compensating for the hearing loss.
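To illustrate the contrast between the two fitting philosophies, here’s a hypothetical sketch. The audiogram thresholds are invented, and I’m using the simple half-gain rule purely as a stand-in prescription, not as what the podcast actually prescribes; the point is only the shape of the two gain curves.

```python
# Hypothetical reverse-slope (low-frequency) loss: worse in the lows.
freqs_hz = [250, 500, 1000, 2000, 4000]
thresholds_db = [70, 60, 40, 20, 15]

# Conventional approach: compensate the loss at every band
# (half-gain rule used here as an illustrative prescription).
conventional_gain = [0.5 * t for t in thresholds_db]

# Residual-focused approach: leave the damaged low-frequency region
# mostly alone and fit only where hearing is still good (<= 25 dB HL).
residual_gain = [0.5 * t if t <= 25 else 0.0 for t in thresholds_db]
```

The conventional curve pushes most of its gain into the damaged low frequencies, while the residual-focused curve concentrates on keeping the healthy high-frequency region audible, which is the counter-intuitive strategy the podcast reports working better.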