@jim_lewis (also, check correction at the end)
ReSound works from a single base gain curve and then modifies it with features (noise cancelling, etc.).
Phonak can have multiple bases (gain curves) and modify them with features. But they all still depend on the chosen fitting formula, e.g. you cannot have one program fitted with NAL-NL1 and another with DSL v5. The dependency shows in how adjacent frequencies behave when you change gain at one frequency handle: the formula determines the shape of the curve. You cannot enter arbitrary values directly, like 10 at 250 Hz, then 100 at 500 Hz, then 20 at 1000 Hz.
Multiple bases for Phonak means you can have, e.g., Calm where the gain at 250 Hz is 20, and Music where it is 40 (everything else left the same or adjusted automatically, since you cannot enter discrete values).
You cannot do that with aids that work from a single base, like ReSound's.
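To make the "coupled handles" idea concrete, here is a toy sketch in Python. This is not any vendor's actual algorithm, and the `spread` factor is a made-up stand-in for whatever smoothing the real formula applies; it just illustrates why you can't end up with arbitrary discrete values per band.

```python
# Toy illustration only (not Phonak's or anyone's real fitting math):
# when handles are coupled by a formula, adjusting one frequency drags
# its neighbours along by some formula-determined fraction.

FREQS = [250, 500, 1000, 2000, 4000]  # handle frequencies in Hz

def adjust_coupled(gains, freq, delta, spread=0.5):
    """Apply `delta` dB at `freq`; adjacent handles follow with a
    fraction (`spread`) of the change. `spread` is a hypothetical knob
    standing in for the formula's curve-shaping behaviour."""
    i = FREQS.index(freq)
    new = list(gains)
    new[i] += delta
    if i > 0:
        new[i - 1] += delta * spread
    if i < len(FREQS) - 1:
        new[i + 1] += delta * spread
    return new

base = [20, 25, 30, 35, 30]            # base gain curve in dB
curve = adjust_coupled(base, 500, 10)  # ask for +10 dB at 500 Hz only
print(curve)  # 250 Hz and 1000 Hz moved too: [25.0, 35, 35.0, 35, 30]
```

The point: you asked for a change at 500 Hz, but 250 Hz and 1000 Hz moved as well, so a jagged target like 10/100/20 across adjacent bands is simply not reachable.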
But even with Phonak, when you do verification with REM (speech mapping), the aids are in a special mode for it; you are not programming an individual program but the aids themselves, the base.
Maybe Oticon really can run several completely independent bases. I can't recall hearing about that, though.
OK, correction: the single-base vs multi-base distinction I picked up was about automatic switching.
Still, the base formula for Phonak is the same. You can change gains, but how the curve moves will follow the formula.
For ReSound I concluded that I don't have the ability to change gain per frequency, but granted, I didn't stare at the programming software for days. I concluded they don't have automatic switching like Phonak, which I like, so I closed it.
The easiest way to compare two formulas would be to fit two aids and swap them for comparison in different situations.
@julieMK
If you haven't already, I highly recommend this guy, and also the Value Hearing channel (and their blog), for gathering information.
In my experience, Paradise and Marvel are the same in terms of hearing performance. But I have good high-frequency hearing, so I wouldn't notice if they did something different there.
One thing I've noticed is that we don't use the same words to describe the same sounds, e.g. tinny, distorted, sharp, and so on.
If you aim for best speech comprehension, then chase that and ignore it if people sound like Mickey Mouse. After a while it'll all be your new normal anyway.
Earlens uses light to make your eardrum vibrate; hearing aids use sound.
But for you to hear, your eardrum has to send vibrations through the middle-ear bones, which excite the cochlea, which sends signals to the brain.
If only the cochlea is damaged (no matter at which frequency), it really doesn't matter how the eardrum was excited in the first place.
One use case where Earlens could shine is if their optical device is tiny: people with a huge loss but tiny ear canals who couldn't fit big receivers. Or, more generally, better dome/mold options for people with really small canals.
But I have no clue how big that part of the Earlens is, so it might be quite the opposite.
Or if someone has some really problematic ear canal (I can't even imagine what that would be) where sound gets messed up but light wouldn't over the short distance between the output point and the eardrum.
Or if, independent of that tech, their mics and software are insanely better for, e.g., speech-in-noise situations. But I highly doubt that. If they had done something so revolutionary, it would be in the headlines already.