I’ve grown tired of you two, who obviously have too much time on your hands. Get a life.
@happymach:Why not just chop up your soapbox, set fire to it, and enjoy some hot dogs and marshmallows?
Yep. And you’d think a couple of devotees of Oticon products would be favorably inclined towards the product of the company that snagged Oticon’s head audiologist and lead noise reduction engineer. But it seems to be just the opposite.
Yeah, Oticon has planted a mole and pays him (or is it her?) to write anti-Whisper comments on Hearing Tracker, that major player in U.S. and European politics and policy. (eyeroll emoji here)
OR…some folks are simply unimpressed with Whisper, or anyway have reservations. C’est la vie!
This has gone way past disagreement. It’s active trolling and vandalism of the topic. If moderation isn’t going to happen, does anyone know how to block specific users? I can’t seem to find anything. Not the best solution because I’d still see replies to those users, but at this point I’ll try anything.
I wanted to update this by saying that the receiver failure was my fault, caused by using too much Miracell ProEar on the mold and at the entrance to my ear canal. Figured that out after another failure, duh. Both replacements were done on short notice and with no cost to me.
Here’s the thing:
I’m actually an IT professional. I started programming in 1972 and am now a college professor.
Based on current technology, taking a 30-minute recording of voices with background noise and eliminating the background noise completely, leaving perfect, undistorted, natural voices, takes over 40 hours on a fast computer.
Obviously, the Whisper brain has significantly more processing power than any hearing aid, so theoretically and practically, the Whisper brain should be able to improve speech to noise performance, but…
Real-time processing is a far greater challenge than spending 40 hours to produce 30 minutes of clean voices.
I don’t know how effectively Whisper uses that processing power or whether it’s even enough to make a big difference over a Roger microphone much closer to the person speaking.
I’d say that Whisper is a great concept in its infancy. I have no way to evaluate how effectively they’re using the brain, nor what’s actually required. Back to my original example: if it takes 40 hours to get that 30-minute recording perfect, does it take 1 hour or 35 hours to get it 95% perfect? How clean does it actually have to be for a hearing impaired person to get a significant benefit from it?
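To put a number on the gap described above: 40 hours of compute for 30 minutes of audio is an 80x real-time factor, so a live device would need roughly an 80-fold speedup (or a much cheaper algorithm) just to keep up. A minimal sketch of that arithmetic, using only the figures from the post (nothing here is Whisper-specific):

```python
def realtime_factor(compute_hours: float, audio_minutes: float) -> float:
    """Seconds of compute spent per second of audio processed."""
    return (compute_hours * 60.0) / audio_minutes

# The offline-cleanup example from the post: 40 h for a 30 min recording.
offline = realtime_factor(40.0, 30.0)
print(f"Offline cleanup runs at {offline:.0f}x real time")

# To run live, each second of audio must be processed in under one second,
# so the pipeline needs at least this much speedup over the offline approach.
print(f"Required speedup for real-time operation: at least {offline:.0f}x")
```

That 80x figure is why the question of "how clean is clean enough" matters so much: if 95% of the benefit comes from a small fraction of the compute, real-time operation becomes plausible.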
If I were Whisper, I’d consider putting microphones in the brain, to combine the computing power with reducing the processing needed. Keep the brain in your pocket with the microphones off. Pull it out and put it on the table or hold it to use the microphones.
A question:
Does Whisper have a telecoil?
I’d be curious if the Whisper might really knock your socks off if you used an assistive microphone to reduce the amount of processing the brain had to do.
Actual Whisper users (I’m not one) should be able to answer your question about the telecoil. I don’t recall hearing about a telecoil feature in the Whisper earpieces, although that doesn’t mean there isn’t one.
@Volusiano: Just going by Whisper’s promo pictures, you wouldn’t think that lack of space in the earpiece cases would be a constraint.
Can’t say for sure, but when I tried them out as part of a study, no mention of a telecoil was made (they definitely do not have Bluetooth for streaming). I have not seen any mention of a telecoil on their website either.
Whisper has Bluetooth, phone only for now. It’s via the Brain, not directly from earpieces.
Yes, I should have specified that I meant for streaming. I presume it uses Bluetooth for the app? They asked that I not set up the app for the study.
Yep, the app uses Bluetooth.
Answer: No, it does not.
That being the case, how can any hearing aid improve the listener’s ability to hear speech in noise? All premium hearing aids claim to be able to do that to one degree or another. Frankly, how does any human being with perfect hearing deal with speech in noise? No one is able to listen to speech in noise and have their brain eliminate the noise in the way that your fast computer does in 40 hours.
As a Whisper user, I find the real challenge to be hearing speech in noise when the background noise is itself more speech, as is the case in restaurants, bars, sporting events, work parties on clean-up day at my club, etc. My experience is that I can hear the loudest voices the best, but not necessarily the low talker next to me. (Remember the Seinfeld episode and the puffy shirt?) The loudest voice might be 20 feet away; I can hear it and understand every word, but a nearby voice might still be challenging. The AI is designed to separate speech from noise, but separating speech from speech plus background noise is a whole other level of challenge.
What you’re talking about, Bill, is what I think the HA industry calls “babble” (undistinguished speech noise from multiple speakers around you). I saw a video from Don Schum a long time ago on Audiology Online that talks about this. I mentioned it in one of my posts (around post 40 or 41) in this thread here → https://forum.hearingtracker.com/t/oticon-more-my-first-experience/
Below is an excerpt of that post that I wrote, for reference:
"I think spatial acoustics (providing directionality on the voices) are not the only thing that can help with the cocktail party effect. People also rely on other differentiators to help them single out and tune in on a single voice. I remember watching a presentation by Donald Schum (from Oticon at the time) on Audiology Online, and he offered another point of view: getting more speech cues from the surrounding environment that are distinct from the target speech cue can also help the listener differentiate, isolate, and focus better on the targeted speech.
I think what he was saying is that front-focused beam forming helps with isolation toward the front only, but if the babble of surrounding voices is diffused in with the targeted speech in front, it can still be a challenge to separate that diffused babble from the targeted voice. But if the surrounding voices are presented clearly to the listener in the first place, instead of being blocked out by beam forming and reduced to a mix of babble diffused with the targeted speech in front, then the added clarity of those surrounding voices may help brain hearing differentiate and single out the targeted voice more easily.
What I got out of that presentation from Donald Schum was that it’s better to have more acoustical information to present to the brain for it to discern from and separate out, than to hide the acoustical information and starve the brain of the info it needs to sort things out."
Well, I’m no expert. But I would think that it’s really a matter of the mind learning to sort out the stream of sensory impressions that we all encounter in various situations. And we naturally ‘select’ what’s relevant and important, and ignore the rest. Otherwise we’d be overwhelmed.
So the same would hold true with the soundscape as heard through our hearing aids. We learn over time to select the relevant sounds and focus on them. So either an ‘open’ soundscape, or a ‘filtered’ one that tries to emphasize speech that’s near at hand while suppressing background babble, requires a period of adaptation. We have to learn how to hear with our given soundscape.
But we’re told, or have come to believe, that our devices will perform this work for us, so we expect our new HAs to immediately improve our hearing in noisy situations. I don’t think this is possible.
I’m thankful for your description of your experience with Whisper. I’m thrilled that you’ve had improved hearing in noisy environments; music to my ears. I wonder to what extent your positive experience is related to your audiogram. My hearing drops off substantially at higher frequencies, and I’m told this makes key sounds in speech more difficult to pick up. I hope someone with a more sloped audiogram takes the time to make a report as comprehensive and thoughtful as yours.
Are there any more end-of-year performance accounts from the Whisper subscribers?