The difference is that Oticon isn't just using the training offline, like Phonak did; they supposedly have a DNN running in the aids themselves, which does recognition based on the training. But so far I haven't seen a paper that covers exactly what they do.
What process Phonak uses to put that training into practice, no clue. My guess is parametrization or something. I remember reading about how they trained it, but I forget whether it was mentioned how exactly the HA works with it. It certainly wasn't a DNN, though, otherwise they'd have been shouting about it for a long time already.
So for AutoSense 5 we might expect to see some improvements in scene detection…
And one degradation in the 3>4 jump is that you can no longer tweak soft noise per frequency; there's now just one slider for all soft noise in general. I liked the previous approach better, since I'd cut lows and highs differently, and wouldn't cut mids at all.
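Just to illustrate what's lost with that change, here's a tiny sketch of per-band vs. single-slider attenuation. The band names, dB values, and function names are made up for illustration, not taken from any actual fitting software:

```python
# Hypothetical illustration: per-band soft-noise cuts vs. one global slider.
# All values in dB; bands and numbers are invented, not from Target or Genie.

def per_band_cut(levels_db, cuts_db):
    """Subtract a per-band cut (dB) from each band's soft-noise level."""
    return {band: levels_db[band] - cuts_db.get(band, 0) for band in levels_db}

def global_cut(levels_db, cut_db):
    """Subtract one global cut (dB) from every band."""
    return {band: level - cut_db for band, level in levels_db.items()}

soft_noise = {"lows": 40, "mids": 45, "highs": 50}

# Old style (what the post describes): cut lows and highs differently,
# leave mids untouched.
print(per_band_cut(soft_noise, {"lows": 6, "highs": 3}))
# → {'lows': 34, 'mids': 45, 'highs': 47}

# New style: one slider, the same cut everywhere, mids included.
print(global_cut(soft_noise, 4))
# → {'lows': 36, 'mids': 41, 'highs': 46}
```

With the single slider you can't express "cut lows 6 dB, highs 3 dB, mids 0 dB" anymore; whatever you pick hits every band equally.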
LOL about "first", don't they always open with "we're the first" and "leading manufacturer"… they're funny.