Terrible Hearing in Noise



That’s the diametric opposite of virtually all hearing aid design since the 1960s, and probably before as well.

Both directionality and multi-band filtering are trying to take out the steady-state elements of the input as well as the babble sources.

What a significant group of users forget is that hearing loss is more like macular degeneration than simply an adjustment to your focal length. The underlying sensory mechanism is atrophying back to the nerve endings, which consequently diminishes your central processing function. Broadly speaking, it becomes more difficult to hit one key on the piano, so the hearing system has to hit three or five keys harder. Without cleaning up the signal initially, the garbled input becomes a garbled amplified output.

So - all systems do it, but the fact that the Opn maintains the directional field by selectively restoring the voices in multiple directions while selectively shaping the nulls between them is clever, as it presents a sound field that seems ‘natural’ to the wearer - inherently, it isn’t. Acoustically, all the Opn has done is pass the JPEG threshold: the point where you don’t see the interpolation values (steps) any more, because the delivered resolution is high enough to fool your brain that the picture is realistic.


Superb explanation.

One thing you mentioned, the nerve atrophy: do you think this is inevitable for everyone with a sensorineural loss?

Let’s say someone has had a loss since childhood but did not use HAs, yet listened to music and had a job involving regular communication - e.g. sales. Would this provide enough stimulation to avoid atrophy, i.e. use it or lose it?


And that is the best explanation I have read. Thanks!


Unfortunately, where the underlying pathology has declined and killed the nerve fibre endings, the damage is already done - the degree to which it becomes an issue is a function of the extent of the decline and your inherent mental plasticity. The one thing I can say is that although you might be able to reassign overlapping nerves over time, when you have N - 100 pathways there has to be at least a minor reduction in resolution; when that becomes N - 1,000 or N - 10,000 it begins to have a far greater effect on the signal.


So it’s settled, we will all come to San Antonio and sample all the restaurant noise situations on the Riverwalk and the Market.


I think “use it or lose it” comes into play, but in a way opposite to what you’re thinking. If the brain isn’t getting signals that it can interpret as speech, it will lose the ability to interpret speech, or if it never had the ability, it will never gain it unless something like a cochlear implant is used.


I absolutely understand this.

What I’m trying to establish, however, is the degree to which the atrophy occurs for someone with a sensorineural loss who does not obtain HAs but maintains communication skills through their job and, for example, listens to audio books and music with headphones.

Would this be sufficient to stave off further decline?

Furthermore, how reversible is this once treated with HAs? We know that it is at least partially reversible in the sense that one becomes accustomed to HAs and everything sounds ‘dull’ without them.

Maybe I should go back to uni and do a PhD on this… :slight_smile:

I’m sure funding would be secured if I mentioned that we need to take everyone on Hearing Tracker to Texas for ‘research purposes’. :rofl:


Thank you all for your experienced opinions regarding hearing in noise. It is a difficult, challenging, complex problem. Great discussion!

Clarity improves when objective data are collected and many hypotheses are tested.

We have only to note the vast improvement in earth-based telescope resolution achieved by correcting the distortion caused by atmospheric turbulence. Lasers detect the distortion, and the shape of a deformable mirror is adjusted to compensate.

A good object lesson for hearing aid designers. Church echo noise requires different correction than restaurant clatter. My hearing loss is different from yours.
Noise is complex. Hearing is complex. Both are better described as vectors than as scalar quantities. Fortunately, relational databases are good at describing and studying these quantities. Difficult problems can be solved by extraordinary engineering.
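To make the vector point concrete, here is a minimal Python sketch: a hypothetical audiogram stored as per-frequency thresholds, versus the scalar summary that throws the shape away. All threshold values are invented for illustration.

```python
# A hypothetical audiogram: hearing loss as a vector of per-frequency
# thresholds in dB HL. All values are invented for illustration.
audiogram_db_hl = {250: 20, 500: 30, 1000: 45, 2000: 60, 4000: 70, 8000: 75}

# A scalar summary (here a pure-tone average over 500/1000/2000 Hz)
# discards the slope of the loss that a fitting actually needs.
pta = sum(audiogram_db_hl[f] for f in (500, 1000, 2000)) / 3
print(pta)  # 45.0
```

Two people with the same pure-tone average can have completely different vectors, which is exactly why one scalar number cannot drive a good fitting.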

As an “elder” person,:grinning: I can testify that my internal computer (brain) does not work as well as it once did, so I need HAs that detect noise, analyze it, and then correct the sound that enters my ears. Do not assume that aging brains can compensate.


Up in the brain, everything is pretty much working by upregulating connections that are receiving input and downregulating connections that are not. (As the great Canadian Donald O. Hebb taught us, neurons that fire together wire together :smile:.) If you jump way up into the auditory cortex, the brain is still maintaining frequency mapping from down in the cochlea. If a particular frequency area is not getting input, then the other greedy areas around it will start to encroach. How much input that particular frequency area is getting, and thus how well it can maintain its mapping, is going to depend on the level of loss at that frequency out on the periphery. If you trust my memory, there is some evidence using reversible lesions in cats that if you allow those areas to atrophy and then reverse the lesion causing the reduced input, original mapping will sort of come back, but not precisely back to what it was. It’s a bit more disorganized, for whatever that means for function. Now in those experiments there was no actual damage to the auditory periphery, so reversal of the lesion restored normal peripheral function. Restoration of activation of those areas with amplification is probably going to be imperfect.

Now that I look back, though, are you talking about listening to audio over loud headphones akin to, say, part-time hearing aid use? I.e., how much part-time use of amplification does one need to stave off any atrophy? (Or, rather than say atrophy, something like ‘neural reorganization’ seems a bit less loaded, as the brain is just doing what it does best.) As far as I know that research area is still wide open. If you look at visual research - monocular deprivation in cats - you only need a much smaller percentage of normal binocular exposure to stave off the effects of monocular deprivation. Off the top of my head, I think it was something like one hour of binocular experience per eight hours of monocular experience. Nature + Nurture: we are what we experience, but the brain is set up to ‘expect’ a certain ‘normal’ type of input and will weight it more heavily. One might suppose that something similar would happen in the auditory system, though what the absolute numbers are is anyone’s guess. My gut feeling is that the auditory system is a bit more plastic than the visual system, and therefore might need a higher percentage of normal input to maintain things.

Does this approach the sort of answer you were looking for?


“Hearing loss is more like macular degeneration than simply an adjustment to your focal length.”

Um_bongo’s simple but excellent analogy should be used to dispel the common belief that getting hearing aids will restore hearing just like getting glasses restores eyesight.

And thank you Neville and Um-bongo for your excellent and authoritative posts.


Great Post. Thanks very much indeed!


I’m of the opinion that HAs should reproduce all noise faithfully. It’s up to the brain to sort it out. Otherwise you end up in a situation where you’re experiencing the environment through a constantly changing graphic equaliser. :slight_smile:

I don’t normally post on forums but this is a core point for me. Please forgive my lengthy post.

For those who can process it, I would agree this should be an OPTION. For the rest of us, I couldn’t disagree more. I never realized how much sound is actually noise pollution until about 2 yrs ago, when I was struck with SNHL.

For context, my max loss is ~70 dB and my word recognition is in the low 40s. I demoed many different aids and settled on a pair of Phonak Audeo v90s and Starkey Halo 2400s. I’ve tried custom domes and was disappointed to find little to no difference.

One of the initial side effects of my SNHL was that sounds I could register were suddenly very different, and I had to relearn how to interpret them. Airplanes sounded like flutes and slide whistles, dogs like alien creatures, and what I could hear of female voices sounded like a cross between the chipmunks and Charlie Brown’s teacher. Interestingly, over little more than a year my brain has remapped sounds and most things sound much more like I remember they used to, but it is always like listening through an audio effects chain including a tunnel, a baffle and a fuzzbox, among other filters.

It is important to keep in mind that there are two parts to hearing: volume and clarity. Hearing aids can address volume but can’t do anything for clarity other than reducing the volume of less desirable sounds. It even appears that their noise reduction is simply a lack of amplification rather than actual noise cancellation. Any industry programmers on this forum?
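To illustrate the distinction I mean, here is a toy Python sketch of noise reduction by attenuation: estimate a noise spectrum from a noise-only clip, then pull down the gain of any frequency band where the signal barely exceeds it. Nothing is phase-inverted or cancelled; low-SNR bands are simply amplified less. All parameter values are invented, and this is a sketch, not any manufacturer’s actual algorithm.

```python
import numpy as np

def band_attenuate(signal, noise_est, floor_db=-12.0, frame=512):
    """Toy noise reduction by attenuation: lower the gain of FFT bands
    where the signal barely exceeds an estimated noise spectrum.
    This is gain shaping only - no phase-inverting cancellation."""
    win = np.hanning(frame)
    hop = frame // 2                              # 50% overlap-add
    floor_gain = 10 ** (floor_db / 20)            # never cut below this gain
    noise_mag = np.abs(np.fft.rfft(noise_est[:frame] * win))
    out = np.zeros(len(signal))
    for start in range(0, len(signal) - frame, hop):
        spec = np.fft.rfft(signal[start:start + frame] * win)
        snr = np.abs(spec) / (noise_mag + 1e-12)  # per-band signal/noise ratio
        gain = np.clip((snr - 1.0) / (snr + 1e-12), floor_gain, 1.0)
        out[start:start + frame] += np.fft.irfft(spec * gain, n=frame)
    return out
```

A speech-dominated band (high SNR) gets a gain near 1, while a band at the noise floor gets cut toward `floor_db` - which is why this approach can only de-emphasize noise, never remove it the way active cancellation in headphones does.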

I have found no algorithms for audio “sharpening” or an inverse fuzzbox effect.

Noise is a HUGE issue for me. With hearing aids I still miss many more words than normal in conversations under the best scenarios. Telephone conversations are problematic under the best conditions. I can’t watch TV, as I can’t understand enough of the conversations, and soundtracks tend to sound like howling banshees. I avoid restaurants, as with aids they are way too loud, and loud restaurants are such horrible noise at high volumes even without them. On the rare occasions when I do go out, I take earplugs with me.

When driving I cannot make out what song is playing on the radio or understand conversation, with or without aids, which brings me to my point: most sound I do not want amplified! If the aids were to amplify conversation without amplifying the road/car noise, I might be able to actually hear someone in a car. Most of the time my hearing aids are turned off no matter where I am.

Road noise is loud enough. The toilet is loud enough. Coughs and sneezes are loud enough. Machinery, fans (even a simple table fan), and white noise generators (one of my favorites, which I ran into at work) are loud enough, and all interfere with my ability to hear spoken words when amplified. A conversation in a car or listening to AM radio traffic reports simply isn’t going to happen with current technology/methodology.

Thankfully, there are some (minor) efforts underway to address this. For example, Sonova last year conducted two clinical trials on noise reduction and is currently recruiting for another. And as I found out in a discussion with the manufacturing manager at Advanced Bionics, the industry is finally moving from the time domain to the frequency domain (used in analog form in radars and other applications since the 1950s, and in digital environments since the 1970s).

As for a variable graphic equalizer: PLEASE, I NEED ONE, preferably with many memory slots for presets (I am in fact working on making something along these lines). One-setting-fits-all does not work well for different circumstances, or even for different people under the same circumstance. I find the different “programs” (an abuse of the word, BTW) to be a blunt tool yielding little real help. This is the opposite of the industry mindset that hearing aids can and should self-adjust, providing “effortless hearing.”
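A crude Python sketch of the kind of thing I’m working toward: per-band gains stored as named presets and applied in the frequency domain. All band edges and gain values here are invented for illustration.

```python
import numpy as np

# Named presets: per-band gains in dB. Band edges and values are invented.
PRESETS = {
    "flat":       [0, 0, 0, 0, 0],
    "restaurant": [-6, -3, 0, 3, 0],   # cut low-frequency clatter, lift speech
    "car":        [-9, -6, 0, 3, 0],   # cut road rumble harder
}
BAND_EDGES_HZ = [0, 250, 500, 1000, 4000, 8000]  # five bands

def equalize(signal, fs, preset):
    """Apply a preset's per-band gains via one FFT over the whole clip."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    for lo, hi, g in zip(BAND_EDGES_HZ[:-1], BAND_EDGES_HZ[1:], PRESETS[preset]):
        spec[(freqs >= lo) & (freqs < hi)] *= 10 ** (g / 20)
    return np.fft.irfft(spec, n=len(signal))
```

The point is the preset dictionary: switching environments is a table lookup, adjustable on the fly, instead of a trip back to the fitting software.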

I find my noise canceling headphones and graphic equalizer to be far more effective than my hearing aids for watching TED talks, training courses, and YouTube videos. Being adjustable “on the fly” to minimize undesirable sounds and correct for audio differences is key.

One hearing aid manufacturer described the challenge they faced as like fitting an elephant into a suitcase. But I find I don’t want the whole elephant; I just want its ears.


It seems information here is sadly lacking, +1 (or more) to Neville and others who have addressed aspects of this including Um_Bongo. It appears there has been some research mostly focusing on the brain rather than the auditory pathways.

One of the most referenced is a study done in 2014.

Hearing loss is a real danger to us all in more ways than one.


My hearing loss is not (yet) severe. But how you’ve explained what is needed is what I already feel, just 4 months into this hearing aid “journey.” What I’m feeling now is that, in noise–my car and in restaurants primarily–I have far less speech understanding than before. And though the programs in my ReSound App change the soundscape some, they just don’t work to improve my speech understanding. What I wish is, that instead of research to have my hearing aids call 911 if they think I fell, or translate Spanish to English, or turn on my coffeepot, that I could understand speech better. If hearing aids could be programmed to recognize the voices of my family and friends and to find and amplify those voices in noise, THAT seems like a technological advance I’d embrace. Maybe I still couldn’t order the special of the day because I still won’t understand the server, but I’d at least be able to hear and participate in the table conversations better.

I don’t yet understand most of the technical discussions on the forum. But I think I know that no manufacturer has this yet. I just hope they’re working on it.


Noreen, what has your pro tried so far? Usually if I have problems I ask for the noise reduction program (speech in noise or noise/party program) to aggressively lower background noise. Sometimes it seems the pros do not want to stray too much from the initial fit but if I want to hear in noise then they have to crank up the noise reduction and that makes voices much more prominent. It would need to be the separate program, not part of the everything program.


Thank you for your concern. I’ve had a lot of problems with domes backing out of my ears (I have had custom silicone moulds since the end of November) and those problems have probably overshadowed my issues with noise in terms of interacting with my audiologist. She did tell me that she increased the noise reduction because, in quiet, I was hearing a constant sound like a faucet running water (which she thought, as I understand it, was me hearing the sounds the HAs make). She’s a solo practitioner, on maternity leave for 6 weeks; 6 more weeks to go. Then I am going to stop whining about my dome/mould issues and ask her to address my speech-in-noise issues. I like the idea of asking her to create a special program that will aggressively lower background noise. I also don’t think I ever had speech-in-noise testing. I’d like her to do that too, if she can, because I think it’s the hearing aids and not (exactly) my ears that are making understanding so difficult. I don’t know exactly why, but I feel like that might make a difference in how she treats the problems.


I’d see if she has your global profile as “First-Time User.” You might not like it but I have switched my global profile to “Experienced (Non-Linear),” updating both gain and target curves. The option is in the Patient Profile and it would be very easy for your Audi to switch you back and forth. I think that I hear better overall after the switch. More amplification in mid tones for my loss at moderate volumes in All-Around program.

You might ask the Audi to fit you in a difficult listening situation using the Remote Assist feature. If your audi is unfamiliar with this, ReSound has a number of online courses at Audiology Online through which she could bring herself up to speed. Or if she’s too busy with her new baby, maybe she could temporarily refer you to a colleague?


I plan to make time to watch those video links you posted on the Quattros in another thread. This audiologist works with 2 brands, one being ReSound. But I am her first Quattro patient. (We did use the remote assist feature for one tweak–the process worked well.) I didn’t know about the global profile difference. Interesting.


I’m pretty ignorant about fitting, but I also noticed the option in the fitting software, when a specific environment containing speech is detected, to favor the better ear. I have no idea what all the ramifications of doing so might be, or (not remembering all previous posts) whether the difference between your ears could be part of the problem, but your degree of hearing loss seems relatively mild to moderate, with not a terribly extreme difference between ears. Maybe your audi could advise you whether favoring the better ear in speech situations might help, or whether that’s going to create more problems than it might solve.

I find in listening to audio podcasts that I get the best speech recognition using the Outdoor program with the speech clarity quick setting. I find it helpful to dial the volume back a bit to avoid any tinny sound or danger of feedback (for me, the Outdoor program has the most amplification and brings me closest to the amplification zones where I am in danger of feedback, but also matches my prescribed fit, according to ReSound software, most closely).

In certain noisy restaurant situations, I like to use the Restaurant program instead, where I can dial in both directionality and noise reduction to my liking, and in very bad noise situations, I have a Multi Mic remote microphone. But in some very bare wall, bare floor, bare ceiling, bare table restaurants it seems like sound gets reflected off the ceiling or nearby walls right into the Multi Mic so just going back to a program works better for me sometimes in terrible reverberating noise situations. Apologies if you’ve already read this in other posts of mine. Just wanted to repeat my experience in case it can help you.


I only have Resound Live 9, Resound linx 3d 7 and Oticon Opn 1 Hearing Aids to compare.

Resound Live 9 - I always struggled in noisy environments as everything was too loud, but I could manage to follow the dialogue to a large extent. But I know what you mean by noisy environments being a challenge for HA wearers.

Resound Linx 3d 7 - they were absolutely awful. It felt like I could hear everything except what I was trying to hear! I could not hear properly in noisy environments with these hearing aids. All I could hear was the background noise and I struggled to hear what people were saying. They were awful. My old Resound was far better.

Oticon Opn 1 - was like a dream come true. I love wearing these in noisy environments. For the first time in years of wearing hearing aids, I don’t feel overwhelmed with background noise and I find it so easy to hear what people near me are saying or whichever direction I am looking in. I fully agree with many people here - Oticon really shines for me in noisy environments and for me is vastly superior to Resound.

Resound also seems to pick up things like wind noise and other background sound that I don’t want. I never had wind noise issues with my Oticons.