@x475aws: Reading @Volusiano’s detailed posts (mostly about Oticon hearing aids) is what enticed me to join the Forum in the first place (so you can blame him!), and I dare say that I’ve probably read every post of his available through the Search function since I began lurking early this year.
I’m sure one could dredge up the odd exception, but, speaking in general, I don’t believe that @Volusiano offers what could be construed as “advice about hearing aids”, other than in the most general terms of explaining, often in great detail, how they work, what certain features do, and what they don’t do. He never*, to the best of my knowledge, says: “You should get this make over that one”, except when a member is on the verge of making a big mistake. @Volusiano’s caveats and disclaimers are many; his recommendations, few.
Once again, the conflation of your assertions leads the uninitiated reader to a false conclusion.
I don’t believe that a technical expert commenting on objective, technical “truths” or “accepted principles” has any ethical obligation other than to tell the Truth as best he understands it.
The day after Thanksgiving I spent most of the day talking alone to my sister in the same echoey kitchen where the loud, cacophonous family gathering had taken place the previous evening. I wore the Whispers for most of the day, switching between dynamic and custom (noise), and changing the volume up and down.
In that situation the mid-rangey tone of the Whispers did not sound that good. I was able to hear my sister’s words pretty well, but her voice sounded a bit unnatural, and there was just a bit of that effect where, if someone drops a pan into the sink, it sounds like an explosion.
Toward the end of the afternoon I switched to the Mores. I liked the general sound of the Mores more than that of the Whispers. The Mores had none of the artificial sound of the Whispers. I thought my word recognition with the Mores in that situation was at least as good as, and maybe a bit better than, with the Whispers. The Mores sounded more “comfortable” to wear.
In the evening I attended a smaller gathering at a different sibling’s in-laws’ house. I left the Mores on. This later gathering was six people in a much less echoey environment. The Mores performed well in this situation. I could hear everyone’s words pretty well.
I stayed the night there. In the morning I started with the Whispers. Again the Whispers sounded artificial. I felt my word recognition with the Whispers was not quite as good as it had been with the Mores the previous night. I went for a hike with my brother and his wife. The Mores sounded good outside.
My takeaway from this weekend is that, so far, I like the Mores more than the Whispers, EXCEPT for that cacophonous Thanksgiving gathering where the Whispers were much better. I’m glad I still have another four weeks of the trial left.
So… Am I going to get more accustomed to the Whispers’ mid-rangey sound? Could the Whisper audi make them sound more natural, like the Mores? Would adjusting the Whispers to sound more natural reduce their SIN performance? Why do the Whispers sound so mid-rangey and artificial when at least some of Whisper’s people were recruited from Oticon? Will the next Whisper software update improve or fix this artificial sound? Are the Mores adjusted too aggressively? I doubt I will get answers to any of these questions about the Whispers before the trial ends.
A lever system is fine if you have plenty of time. But if there’ll be another rock coming along, in 4 ms or whatever, then you need more power to get the rock out of the way faster.
For a truly irregular shape, cutting it into a thousand rectangles is probably the fastest way to compute the area. You need a formula for the sides in order to use calculus.
Investing work to come up with the formula for a shape, or doing the work to compute the area inside it, isn’t going to help when you’re confronted with a different irregular shape.
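For the curious, here’s a minimal sketch of that rectangle idea (essentially a Riemann sum) in Python. Everything in it is made up for illustration; it assumes the shape’s boundary is known only as sampled heights, not as a formula, which is exactly the case where calculus isn’t available:

```python
# Approximate the area of an irregular shape by summing thin rectangles
# (a Riemann sum). The boundary is known only as sampled points, not as
# a formula, so calculus isn't an option, but rectangles still work.

def area_by_rectangles(top, bottom, width):
    """top/bottom: y-values of the upper/lower boundary, sampled at equal x-steps."""
    return sum((t - b) * width for t, b in zip(top, bottom))

# Example: a rough blob sampled at 5 x-positions, each slice 1.0 wide.
top    = [2.0, 3.1, 3.8, 3.2, 2.4]
bottom = [0.5, 0.2, 0.0, 0.3, 0.6]
print(area_by_rectangles(top, bottom, 1.0))  # approximate area of the blob
```

And per the point above, none of that work transfers: hand the function a different blob and you start from scratch with new samples.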
This is where you lose me. You invest more resources upfront to more fully characterize the sounds in the millions of sound scenes, and train a DNN with it. Wouldn’t you then need more resources, not less, to analyze a live sound scene in the detail needed to make use of the info in the DNN? The fact that you’re undoubtedly sampling the live scene at higher than the Nyquist frequency has nothing to do with it.
Well, I think everyone who’s reported on Whisper has described its SIN improvement as more than “marginal”. And the current, actual price, without the Brain Trust discount, is $139/month, though I don’t know if anyone is paying that now. And the Brain, unlike an assistive device, doesn’t have to leave your pocket, and it may help with hearing someone whom you can’t hand it to.
So, say there’s a discussion here where someone trialing More aids complains that they can’t understand speech at a work lunch, or a bar, or in a factory. Given the four Whisper users’ experience here, do you answer:
1. Try this and this and this Oticon setting, and/or
2. Get a different brand that’s “beamforming”, and there’s nothing wrong with that, and/or
3. Some people have reported great SIN with Whisper, but it has these disadvantages?
I’ve already given a detailed explanation of my interpretation of how Whisper implements their DNN vs how Oticon implements theirs. I don’t care to repeat it here, but I’ll give a link below so anyone interested can click on it and read up on it again. I’m not going to argue the details again here; we can agree to disagree on whether my analysis is correct. Other readers will be the judge of what they read.
But I’ll make a few observations below:
It’s common sense to deduce that a young, small start-up like Whisper probably doesn’t have the time and resources to fully and thoroughly train its DNN offline, on a development platform that can be as big as anyone’s heart desires (à la the Oticon More’s 12 million sound scenes’ worth of training, which probably took years to collect and process). So they take a different approach and make the Brain their “online” development platform (vs. Oticon’s offline development platform): they invest in building a library of a few thousand hours of unique sounds (based on what I read in their whitepaper) rather than collecting tens of millions of sound scenes up front like Oticon did, then continue to collect data from users via the Brain’s storage to keep training their DNN.
So while Oticon did the HUGE amount of training up front, and can therefore afford to have a finalized version set in silicon, Whisper doesn’t have the resources to take this approach. Instead they chose a “continuous” approach: start out with less training data, but keep collecting more and more user data over time to perfect their DNN. Oticon more or less had their DNN learn as much as possible up front and then “graduate” into silicon, while Whisper, lacking the resources to do this, built a system that learns as it goes along and will eventually graduate years later, once enough user data has been collected.
So while Whisper spins all this as a positive, the Brain enabling major breakthrough updates every 3 months and continual collection of real-life user data, to me it seems more like a beta version released prematurely, needing an update every 3 months to fix bugs and add basic features that were lacking, rather than introducing major breakthroughs every few months. And the user-data-collection bit seems to me like a DNN that wasn’t fully and thoroughly trained up front, so the only way to make up for the lack of resources to train it and graduate it up front is to keep collecting data and training as you go.
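To make the contrast concrete, here’s a toy sketch in Python of the two strategies as I understand them. All names and numbers are hypothetical; neither company’s actual pipeline is public at this level of detail:

```python
import random

class ToyDNN:
    """Stand-in for a real network: just counts the scenes it has trained on."""
    def __init__(self):
        self.scenes_seen = 0
    def update(self, scene):
        self.scenes_seen += 1   # stand-in for one gradient step

def train(model, scenes):
    for scene in scenes:
        model.update(scene)
    return model

# "Offline" strategy: train on a huge corpus up front, then freeze into silicon.
offline = train(ToyDNN(), range(12_000_000))   # a la "12 million sound scenes"
frozen_weights = offline.scenes_seen           # fixed at ship time; no field learning

# "Online" strategy: ship early with a smaller library, keep collecting and updating.
online = train(ToyDNN(), range(5_000))         # a few thousand hours of sounds
for release in range(4):                       # e.g. one update every 3 months
    new_scenes = [random.random() for _ in range(1_000)]   # data from users' Brains
    train(online, new_scenes)

print(frozen_weights, online.scenes_seen)
```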
Not really. But I guess you guys have no clue how a DNN works, so I’m not sure I’d bother arguing with you about it here. The whole point of training the DNN to near perfection up front is that you don’t need as many resources to achieve good results during execution.
I’ll give another analogy. A tennis player spends years training his “DNN”, ingraining into his brain how to execute perfectly when he plays. When he plays in a tournament, you see him make amazing, unbelievable shots EFFORTLESSLY. He does NOT need to invest more RESOURCES to execute those perfect shots. He does it effortlessly because his DNN is already trained to perform gracefully and beautifully with minimal effort and minimal resources.
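If it helps, here’s the same point in toy Python form: however long the training took, executing the trained network on one new input is a small, fixed amount of arithmetic. The weights below are invented; imagine they fell out of a long training run:

```python
# Training/inference asymmetry in miniature: training may consume millions
# of examples, but one forward pass through the finished network is just a
# fixed handful of multiply-adds, however the weights were obtained.

W = [0.4, -0.2, 0.7]   # hypothetical weights from a long training run
b = 0.1

def forward(x):
    """One forward pass; its cost doesn't depend on how much training produced W."""
    s = sum(w * xi for w, xi in zip(W, x)) + b
    return max(0.0, s)   # ReLU activation

print(forward([1.0, 2.0, 3.0]))   # cheap to run, no matter how W was learned
```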
I still don’t think you’re right about this. And to use your analogy, as I and others have noted, the Oticon platform does not, in sound terms, make “amazing, unbelievable shots EFFORTLESSLY.”
But regardless (and it’s anecdotal), some people find the Whisper implementation to be more effective. The Oticon sound is natural, to be sure. But in terms of making speech intelligible in a noisy background, it leaves a lot to be desired (IMHO).
By the way, what makes you the expert on how DNNs work? Do you have a degree in computer science, particularly in AI?
Actually, there was such a case recently, and I answered with all of 1, 2, and 3. Below is the link to that thread. If you look at post #2, in which I replied to the OP, you’ll see that at the end I mentioned the Philips HearLink 9030 and the Whisper as other options to try if the OP wishes.
Neville suggested getting the Phonak Paradise with the Roger option after the OP shared that the More wasn’t helpful for him. I replied that it was probably the best suggestion after learning that the OP has asymmetrical hearing loss, which is probably why the More didn’t work out as well for him.
I think you (@x475aws) chimed in to recommend the Whisper, but the OP balked at not wanting to deal with a brain.
So I’m not above doing any of the 3 options you mentioned above. While I have my own opinion about Whisper and am critical of some of the choices it has made (going with the Brain and the subscription model), I’m perfectly OK with telling people about the Whisper option in case they want to try it.
Did I say that I’m an expert on DNNs? You said that I have a flaw in my reasoning about DNNs. Then I said back that you don’t understand DNNs the way I do. So, tit for tat. That doesn’t make me an expert on DNNs any more or less than it makes you one. We just agree to disagree.
When someone makes an analogy, you don’t take the literal wording of the analogy and apply it back to the original case. Of course the More doesn’t make any “shots”, amazing, unbelievable, effortless, or otherwise.
And no, I don’t have a degree in computer science, particularly not in AI. I only have an MS degree in Electrical Engineering. But one doesn’t need a degree in computer science, or in AI in particular, to talk about DNNs in layman’s terms, does one?
Thank you for posting this. It’s a beautiful analogy, but incomplete, and it tells me where your understanding of machine learning in hearing aids goes wrong. I look forward to explaining why it’s wrong, though I may end up using more tortured analogies while doing so. But I want to leave this alone for a while, so @ziploc isn’t drowned out in his own thread.
There was a thread (link below, at the end of post #40 from Neville) where the OP commented that the More has a more natural sound than the Phonak P90, and a debate ensued, with some arguing that an HCP who knows what they’re doing should be able to adjust one HA to make it sound like an HA of another brand/model. Neville (one of our 2 frequent and very respected HCPs on this forum) said that, given a quiet hour, he bets he can make the More sound like the Phonak Pico Forte, but just the basic sound, not counting things like the automatic features or performance in noise.
So maybe there’s hope that your Whisper HCP can tune the Whisper to sound more like the More for you, if that’s what you want. There’s nothing to lose by asking the Whisper HCP to try. But I had a debate with Neville about whether you’d want to keep copying the sound quality of another HA brand and model forever. It just doesn’t seem sustainable if it takes a quiet hour each time a major update erases the previous work and requires the matching to be redone.
I agree with this. I admire @ziploc for his tolerance in allowing non-performance discussions to go on in his thread. Meanwhile, he steadfastly stays on course and continues to share his experience with a lot of objectivity.
D_wooluf created another thread to discuss non-performance Whisper issues, so you can take this discussion there if you wish to continue it. But personally I see that as a futile exercise, because analogies are just that: they’ll never be perfect, and anyone who wants to can find holes to poke in them. They should be viewed in a general sense, so if you still don’t buy it, that’s OK, and it’s OK to agree to disagree.
No matter how much you and I disagree, the fact is that Oticon did build a DNN and were able to implement it on silicon small enough to fit onto the HA, whether or not it takes more resources to execute than you think it should. I’m sure they didn’t lie when they said they implemented a DNN and put it on the HA. Whisper and Oticon just do it differently; let’s agree on that. Whether one way is better than the other, users will be the judge in the end.
I think it all started because you asked me whether I have an ethical obligation to disseminate the claim in the Whisper whitepaper that “nobody but them right now can run a deep learning algorithm of their size and capability”. Maybe instead of a long and contentious answer, I should have simply said: “No, because I don’t buy into it. You can freely disseminate the information if you buy into it, but I have no ethical obligation to do so, because it’s not ethical to disseminate something I don’t believe in in the first place.”
And I’ve already answered the second question, on whether I have an ethical obligation as an influencer to try Whisper myself. My answer is that I’m game if Whisper approaches me for a completely no-strings-attached trial in exchange for my honest opinion. But I’m not going to seek out an HCP to initiate a trial myself when I have no honest intention of using it, even if I find its SIN superior to my OPN 1’s, simply because it has those 3 deal-breakers in my book, and I don’t even struggle with SIN like you guys do in the first place. It wouldn’t be fair to the HCP for me to play that game, and it wouldn’t be fair to me to have to spend time finding a Whisper HCP who may or may not charge me a fitting fee.
Apologies if this has been dealt with and I missed it (lots of verbiage to skim through), but @ziploc’s boxy sound is very fixable, is it not? The audi has all the levers they need.
I support @Volusiano’s responses here. His thoughts tend to be the most logically sound ones, and he also admits fault when others point it out. I have a degree in AI, and this analogy is elegant, although it does not capture AI performance nuances such as precision, recall, and other metrics that the hearing aid companies do not publicize.
All in all, I am supportive of Whisper and hope they do well, collectively boosting AI innovation in this space, which is inherently a matter of silicon and battery-consumption tradeoffs. In the long term, I would not mind an iPhone serving as the Brain, if that becomes possible.
Now, I don’t claim to be an HCP or audiology expert, so take what I say with a grain of salt or whatever you believe in. I only say it based on my layman’s knowledge gained as a user and from what I read on this forum and in audiology papers.
The sound signature of a brand is most likely shaped by its proprietary fitting rationale. If one wants to shape the Whisper sound to sound like the Oticon sound, I assume the HCP would probably need the Oticon Genie 2 software, and would need to run the patient’s audiogram through Genie 2 to see what the prescribed gain target curve looks like. Then, when doing REM on the Whisper for the patient, instead of matching the Whisper gain target curve, the HCP should adjust the REM results to the Oticon gain target curve instead. I think that would achieve the goal of acquiring the Oticon sound signature.
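As a rough illustration of that matching step (strictly a layman’s sketch: the half-gain rule below is a textbook simplification, not Genie 2’s actual proprietary rationale, and all the numbers are invented):

```python
# Toy version of the target-matching idea: prescribe per-frequency gain
# targets from the audiogram, then compute the REM adjustments needed to
# move the measured response onto those targets.

FREQS = [250, 500, 1000, 2000, 4000]      # Hz

def half_gain_targets(audiogram_db):
    """Toy 'prescription': target insertion gain = half the hearing loss."""
    return [loss / 2 for loss in audiogram_db]

def rem_adjustments(measured_gain, target_gain):
    """Per-frequency tweak needed to move measured output onto the target."""
    return [t - m for m, t in zip(measured_gain, target_gain)]

audiogram = [30, 40, 50, 60, 65]          # dB HL at FREQS (made up)
oticon_like_target = half_gain_targets(audiogram)   # stand-in for Genie 2's output
whisper_measured = [12.0, 18.0, 22.0, 27.0, 30.0]   # dB, from the REM probe mic

for f, adj in zip(FREQS, rem_adjustments(whisper_measured, oticon_like_target)):
    print(f"{f} Hz: adjust by {adj:+.1f} dB")
```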
Now, @Neville was saying that, given a quiet hour or so, he could copy the sound of one HA brand/model to another brand/model. That sounds more complicated than just mimicking the target curve of the desired sound signature, so I don’t really know beyond my educated guess above. He’s tagged here so he can chime in.
Do note, however, that whenever there’s a need to re-prescribe, whether due to a change in dome fitting, a change in audiogram, or perhaps an in-situ audiometry run, the duplication process needs to be redone: first on the Genie 2 side to generate a newly prescribed target curve, then on the Whisper side to adjust the REM results toward the Oticon target curve instead of the Whisper target curve.
So in theory I think it can be done. How practical it is depends on how often it needs to be redone due to updates and changes, and whether re-prescriptions are required after those updates and changes. It may also not be perfect: if the fitting dome on the Oticon side isn’t exactly the same as the fitting dome on the Whisper side, the Oticon prescription for one fitting may not match the Whisper fitting perfectly, unless you use the Oticon domes on the Whisper earpieces.