That is great! But it tells me that there is still room for improvement in the fitting of your aids.
My audiologist said that to me about three years ago, when we were working toward the best possible fitting for my needs.
This is what I found. AutoSense drove me mad!
You’re right on that. The hospital did perform REM, so I’m not too sure what else they can do. I shall ask them in an email.
I am not sure how to explain this: my audiologist uses REM to verify that my aids perform as designed. My aids are pushed to the limit. According to Genie 2, my hearing loss dictates that the 85 dB receivers should work with properly fitted ear molds. I wear 105 dB receivers with semi-skeleton ear molds that have very small vents, and my aids are set to 100%-plus of my audiogram. I don’t have feedback, wind noise, or issues understanding speech.

I can even go to concerts, though I have to lower the volume of my aids to -2. I love to wear over-the-ear headphones over my aids, again lowering the volume to -2. I go to lectures, meetings, and church and hear it all very well with only my default program; over the last couple of years I have had all the extra programs removed from my aids. I can even watch TV with my wife and let her set the volume to her requirements. She is in her upper 70s, and I believe she has superhuman hearing. I keep pushing myself to understand speech by reading along while listening to audiobooks. I can now even sound out words to spell them, something I wasn’t able to do for decades.
My audiologist is a professor of audiology at the state medical college, and I have audited most of the classes he teaches. I have learned a lot and can explain my needs to an audiologist. The VA clinic I go to now has three audiologists besides my audiologist, all of them his former students. I feel comfortable having any of them work with me, but my audiologist always double-checks their adjustments.
I think you’re right that it’s partly semantics, with one exception: as we increasingly use machine learning, rather than engineers, to decide how sound is managed, knowing exactly what is happening at any specific moment is going to become more difficult, because it won’t be based on parameters that humans set; it will be based on parameters that “AI” set. Phonak presents theirs as “programs,” but the programs are smoothed and blended together, so how different is that from other hearing aids hiding the ways their automatic features are set to adjust? The issue with doing it Sonova’s way is that it gives clinicians more power to adjust things, but it also gives them more power to make things weird (and pumpy).
I know. The setup is key.
It’s the person. I rode my e-bike today. So much wind noise. Terrible.