A few years ago, I was fitted with Phonak Nathos (the NHS equivalent of Phonak Naida). Terrible hearing aids. But they were the main reason I got into DIY hearing aid fitting. The NHS upgraded me to Phonak Naida after I was able to demonstrate that the Nathos were of poor sound quality compared to the Naida, despite Phonak telling the NHS that both were exactly the same aids, just under different names. But then why call them by a different name…? Anyway, with both sets of aids, the NHS whacked on all the default settings, SoundRelax, WindBlock, etc. etc. I had to get them to turn some of the features off.
Anyway, roll forward a few years. As of last week I'm now sporting Signia Motion C&G 5X from the NHS. Again, they have whacked on the bog-standard features, the equivalents of SoundRelax, WindBlock, etc. I'm presently waiting for the NHS to roll out updates to my aids remotely before I start wearing them properly.
It made me think.
Consider children who are fitted with hearing aids for the first time. They will have those default features turned on, SoundRelax for example. That means sudden noises get suppressed: a door shutting, coughing, clapping hands. Yes, "useless" sounds, but useful for awareness, and basically the facts of life and normality, what we would hear if we weren't hard of hearing.
There is even a feature that amplifies sound from whichever direction the aid judges to be most important. Unfortunately it usually gets that wrong, so I have it turned off (set to omni-directional in all environments, which are usually quiet); directional mode only kicks in for me in noisy environments. I would consider this feature quite life-threatening if directional were always on: crossing a road, an approaching car would get quieter!!!
Turning to children's brain development, the early years are crucial. If their hearing is attuned to the wrong settings, they will assume those settings are normal. Doesn't that set them up badly for life?
In my case, I was originally fitted with analogue hearing aids. Sound comes in, gets amplified, and goes out, without being messed with. That's all they do. My brain can learn to process the rest.
I can switch to a program that enables the features I described to help me if I'm struggling, which is of course a fantastic thing to have. But in reality, hearing aids usually get it wrong about what is important and what is not. The only way to truly solve this would be AI linked to the brain, telling the hearing aid that the sound it just cut out IS important. At the moment, hearing aids work independently of the brain's sound processing, so it is impossible for the two to work together effectively unless the end-user has direct control, like those of us lucky enough to DIY and fine-tune the settings to meet our own requirements and needs.
I just find it concerning that audiologists have naturally never worn hearing aids "in normal environments" and so can't truly understand the features and the loss of quality we experience, and we would never know…
Do we have deaf programmers involved in the hearing aid testing process, rather than hearing aid salesmen telling a pool of selectively picked end-users what they should hear? I sincerely hope so… I wouldn't want my child to grow up not knowing what the natural sounds of everyday life actually sound like.