So, Volusiano, was all this sound processing and AI razzmatazz in the OPN model I’d tried out way back in 2018? Cuz I could not for the life of me understand any speech in any kind of noise (Costco checkout, doc’s office with the HVAC roaring, restaurant with a few patrons, shopping malls). I simply could NOT comprehend speech even when the person was facing me.
I sure wish you knew Phonak Marvel sound processing like you do Oticon’s. Cuz that is also by no means perfect for discriminating SPEECH in any kind of noise (echo/reverb like the house I live in, busy restaurants, etc.).
I’m on the brink of just getting a Roger accessory and pointing the dang pen at everyone like Bob Dole! Speech in noise continues to be my Holy Grail, and even my new audi says the Phonak Paradise is not really that much better than what I’ve got right now. So much for the nuances of improvement in HA tech over time.
Any insights here are most welcome. Granted, I have cinderblock ears, and even the audi was A S T O U N D E D that I do as well as I do with these aids, given my dismal hearing.
But we were sitting in a quiet exam room, so hey, I can do that!