My Whisper AI trial vs. Oticon More

I’ve already given a detailed explanation of my interpretation of how Whisper implements their DNN versus how Oticon implements theirs. I won’t repeat it here, but I’ll give a link below so anyone interested can read it again. I’m not going to argue the details again here; we can agree to disagree on whether my analysis is correct. Other readers can judge for themselves.

But I’ll make a few observations below:

It’s common sense to deduce that a young, small start-up like Whisper probably doesn’t have the time and resources to fully and thoroughly train their DNN offline with as much data as their heart’s content, the way Oticon did with 12 million sound scenes’ worth of training that probably took years to collect and process. So they take a different approach and make the Brain their “online” development platform (versus Oticon’s offline development platform). They invest in building a library of a few thousand hours of unique sounds (based on what I read in their whitepaper) rather than collecting tens of millions of sound scenes up front like Oticon did, then continue to collect data from users via the Brain’s storage to keep training their DNN.

So while Oticon did all the HUGE amount of training up front so they could afford to have a finalized version set in silicon, Whisper doesn’t have the resources to take that approach. Instead they chose a “continuous” approach: start out with less training data, but keep collecting more and more user data over time to perfect their DNN. In other words, Oticon more or less had their DNN learn as much as possible up front and then “graduate” into the silicon, while Whisper, lacking the resources to do this, developed a system of learning as they go, eventually “graduating” years later once enough user data has been collected. A rough sketch of what I mean by these two approaches is below.
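Just to illustrate the contrast I’m describing, here’s a toy sketch of “train fully offline then freeze” versus “train a bit, then keep fine-tuning on field data.” Everything in it (the model shape, the batch counts, the `quarterly_update` name) is made up by me for illustration; it’s not either company’s actual pipeline or code.

```python
# Toy illustration of the two training strategies as I understand them.
# All names, sizes, and iteration counts are made up for illustration only.
import torch
import torch.nn as nn

def make_model():
    # Stand-in for a small noise/scene-classification DNN
    return nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 4))

def train_step(model, optimizer, features, labels):
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(features), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# --- "Oticon-style": train heavily offline on a huge fixed library, then freeze ---
offline_model = make_model()
opt = torch.optim.Adam(offline_model.parameters())
for _ in range(200):                      # stands in for many passes over millions of sound scenes
    x, y = torch.randn(128, 64), torch.randint(0, 4, (128,))   # placeholder batch
    train_step(offline_model, opt, x, y)
offline_model.eval()                      # "graduates" into silicon; no further learning

# --- "Whisper-style": smaller initial training run, then keep updating from field data ---
online_model = make_model()
opt2 = torch.optim.Adam(online_model.parameters())
for _ in range(20):                       # smaller up-front library
    x, y = torch.randn(128, 64), torch.randint(0, 4, (128,))
    train_step(online_model, opt2, x, y)

def quarterly_update(model, optimizer, new_user_batches):
    # Hypothetical: fine-tune on data collected from users, then push as a software update
    for x, y in new_user_batches:
        train_step(model, optimizer, x, y)

quarterly_update(online_model, opt2,
                 [(torch.randn(128, 64), torch.randint(0, 4, (128,))) for _ in range(10)])
```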

So while Whisper spins all this as a positive, with the Brain enabling major breakthrough updates every 3 months and the continual collection of real-life user data, to me it seems more like a beta version released prematurely, needing an update every 3 months to fix bugs and fill in missing basic features rather than to introduce major breakthroughs. And the user-data-collection bit seems to me more like a DNN that wasn’t fully and thoroughly trained up front, so the only way to make up for the lack of resources to train and “graduate” it up front is to continually collect data and train as you go.