User Review of Whisper Hearing Aids

Whisper doesn’t use frequency shifting because they don’t rely on frequency data. They believe that their use of artificial intelligence is a superior approach. It’s explained in their white paper, “Beyond Frequencies: Artificial Intelligence, Sound Patterns, and the Whisper Hearing System.”

My take is that they rely on frequency data, but not ONLY frequency data. I’m not saying frequency lowering is the be-all and end-all, but if one cannot hear sounds at certain frequencies and no modification is made to make them audible, one is not going to hear them. That information will simply not be available. It is quite possible that the information would be of minimal use, or even counterproductive (distortion), if it were made available, so I’m not saying that Whisper is wrong. They just don’t seem to address the issue.


With all due respect, I think you totally misunderstood what the Whisper paper is trying to say, Bill. They’re NOT saying they got rid of frequency-based signal processing altogether and replaced it with AI and sound patterns. Signal processing will forever be the foundation on which hearing aids are built, from the old trusted analog HAs to the digital HAs and forward to the modern AI-based HAs and beyond.

Signal processing is to hearing aids what motor-driven wheels are to cars, be it an internal combustion engine, the electric motor in an EV, or a hydrogen-based drivetrain. It is the FOUNDATION: reading in sound signals, processing them, and amplifying them. Perhaps an analogy for AI in hearing aids is the self-driving AI that Tesla and many other companies are trying to develop. It’s built ON TOP of signal processing; it’s not replacing signal processing. That’s why the title of the Whisper paper is “BEYOND frequencies …”, not “REPLACING frequencies”.

Oticon does AI, Philips does AI, Widex does AI, and yet they all still offer frequency lowering just the same. The two are not mutually exclusive, such that if one has AI inside, then frequency lowering is no longer needed, as your statement implies. AI is not here to replace signal processing. AI is here to take hearing aid technology “beyond” signal processing. It’s about the level of abstraction. Instead of manipulating sounds at a lower level of abstraction and dealing exclusively with the fundamentals of signal processing, AI takes things to a higher level of abstraction and deals with sound patterns to improve clarity, that’s all. But afterward, it still has to come back down to earth from cloud nine and use signal processing to deliver the end result in a form that humans can hear.

Let’s say with my quite severe-to-profound ski-slope hearing loss at around 4 kHz and above, no amount of superb, groundbreaking AI is going to suddenly make me able to hear those 4 kHz-plus sounds at 4 kHz-plus. It’s still the signal-processing side, the frequency-lowering technology that moves those sounds from 4 kHz-plus down into the 1–2 kHz range where I still have barely adequate hearing, that will enable me to hear the high-end sounds. To this effect, the Speech Rescue frequency-lowering feature from Oticon will let me hear them. But no amount of AI from Whisper or Oticon or anybody else, WITHOUT any frequency lowering, is going to let me hear those 4 kHz-plus sounds.
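To make the transposition idea concrete, here’s a rough numpy sketch of copying a high source band down into a lower destination band, which is the general shape of transposition-style lowering like Speech Rescue. The band edges and the whole-signal FFT are my own simplifications for illustration, NOT Oticon’s actual algorithm (the real thing works on short frames with level-dependent rules):

```python
import numpy as np

# Toy frequency transposition: copy a high source band down into a
# lower destination band, mixing it with what is already there.
# Band edges are made-up values, not any manufacturer's parameters.
def transpose_band(signal, fs, src=(4000.0, 8000.0), dst_start=1500.0):
    spec = np.fft.rfft(signal)
    bin_hz = fs / len(signal)                # frequency width of one FFT bin
    lo, hi = int(src[0] / bin_hz), int(src[1] / bin_hz)
    dst = int(dst_start / bin_hz)
    out = spec.copy()
    out[dst:dst + (hi - lo)] += spec[lo:hi]  # mix the high band into the low band
    out[lo:hi] = 0                           # silence the original high band
    return np.fft.irfft(out, n=len(signal))

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 5000 * t)          # a 5 kHz tone (inaudible to a steep loss)
lowered = transpose_band(tone, fs)
# the tone was 1 kHz above the 4 kHz band edge, so it now sits
# 1 kHz above the 1.5 kHz destination start, i.e. at 2.5 kHz
peak_hz = np.argmax(np.abs(np.fft.rfft(lowered))) * fs / len(lowered)
```

The point of the sketch is just that the information from 4 kHz-plus is relocated to a region where there is still usable hearing; it is not a claim about how any product implements it.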

So if Whisper doesn’t have frequency lowering, then OK let’s just admit that it doesn’t. But to say that AI can and will replace frequency lowering is a total misstatement.


Okay. Sorry that I misunderstood.

Here’s something else I don’t understand. Whisper.ai clearly says in the report of their first upgrade (v1.1) that they improved their compression system. If they’re using compression, doesn’t that mean that they’re doing frequency shifting?

No. Compression (as opposed to frequency compression) means that soft sounds get more gain than loud sounds.

Compression and frequency lowering are two entirely different things, Bill, serving two different purposes.

There’s a current thread that touches on compression that you may be interested in reading up on: Self programming? Not sure how much longer I can deal with the "muffleness" of the compression - #15 by codergeek2015. In there, @Um_bongo and @Neville contributed a lot of professional insights into how the various HA mfgs deal with compression, in different ways. I wouldn’t be surprised if Whisper comes up with their own compression scheme which they think is superior to others. But all the HA mfgs probably think that they have a better compression scheme than their competitors.

Frequency shifting is a part of frequency lowering, but frequency shifting can also be used elsewhere to a much smaller effect, as in feedback management, where one strategy is to shift the frequency of the sounds by only 10 Hz to help reduce feedback. That’s why I keep using “frequency lowering” to refer to the question asked (whether Whisper has any), instead of the term frequency shifting. Like compression, different HA mfgs come up with their own frequency-lowering technologies as well, and they probably also think that theirs is superior to the others’.
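For the curious, that small feedback-management shift can be illustrated in a few lines of numpy: a single-sideband (analytic-signal) modulator moves every component up by a fixed amount. This is just my assumption of one common way to implement it, not any particular mfg’s scheme:

```python
import numpy as np

def analytic(x):
    # analytic signal via the FFT (same idea as scipy.signal.hilbert)
    n = len(x)                 # n is assumed even in this sketch
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0          # keep positive frequencies, doubled
    h[n // 2] = 1.0
    return np.fft.ifft(np.fft.fft(x) * h)

def shift_frequency(signal, fs, shift_hz=10.0):
    # single-sideband modulation: every component moves up by shift_hz,
    # breaking the feedback loop while barely changing the sound
    t = np.arange(len(signal)) / fs
    return np.real(analytic(signal) * np.exp(2j * np.pi * shift_hz * t))

fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)      # a 1000 Hz tone
shifted = shift_frequency(tone, fs)      # should come out at 1010 Hz
peak_hz = np.argmax(np.abs(np.fft.rfft(shifted))) * fs / len(shifted)
```

A 10 Hz move is small enough to be essentially inaudible, but it keeps the leaked output from lining up with the input and ringing.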

But to answer your question, WDRC (wide dynamic range compression) is not about frequency shifting. It’s about adjusting the amplification of signals across different frequencies to match the individual’s hearing loss and provide the most comfortable hearing experience without excessive loudness, which can also result in feedback. So Whisper doing compression doesn’t mean that Whisper is doing frequency lowering.
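A toy sketch of the WDRC input/output rule for a single channel may make the distinction clearer. All the numbers here (threshold, ratio, linear gain) are made up for illustration, not any manufacturer’s fitting targets:

```python
# Toy WDRC rule for one channel: soft sounds get full gain; above the
# compression threshold, each extra dB of input adds only 1/ratio dB
# of output, so gain shrinks as the input gets louder.
def wdrc_gain(input_db, threshold_db=50.0, ratio=2.0, linear_gain_db=25.0):
    if input_db <= threshold_db:
        return linear_gain_db  # soft sounds: full (linear) gain
    return linear_gain_db - (input_db - threshold_db) * (1.0 - 1.0 / ratio)

soft = 40.0 + wdrc_gain(40.0)   # 40 dB SPL in -> 65 dB SPL out
loud = 80.0 + wdrc_gain(80.0)   # 80 dB SPL in -> 90 dB SPL out
# a 40 dB input range is squeezed into a 25 dB output range,
# with no frequency moved anywhere
```

Note that nothing in this rule touches frequency at all; it only maps input level to output level, which is exactly why compression and frequency lowering are separate questions.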

One strategy for frequency lowering is frequency compression, as employed by Phonak and some others. But I’m fairly sure that Whisper is talking about WDRC, not frequency lowering, when they mention compression in their paper.
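Frequency compression (as opposed to WDRC) can be sketched as a simple frequency mapping: below a cutoff, nothing changes; above it, the distance from the cutoff is divided by a compression ratio. The cutoff and ratio below are made-up illustrative values, not Phonak’s actual SoundRecover parameters:

```python
# Toy nonlinear frequency compression: frequencies below the cutoff
# pass through unchanged; above it, distances from the cutoff are
# divided by the compression ratio. Cutoff/ratio are made-up values.
def compress_frequency(f_hz, cutoff_hz=2000.0, ratio=2.0):
    if f_hz <= cutoff_hz:
        return f_hz
    return cutoff_hz + (f_hz - cutoff_hz) / ratio

# a 6 kHz component lands at 4 kHz; a 1 kHz component is untouched
high = compress_frequency(6000.0)
low = compress_frequency(1000.0)
```

So where WDRC maps levels to levels, this maps frequencies to frequencies; the two schemes just happen to share the word “compression.”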


Thanks for the explanation.

I called Whisper with this $69/month information in hopes that I might be able to negotiate my current subscription price down. (Long shot, I know.)

I was told that the $69 price only applies in limited circumstances, i.e. where audiology services are being provided via TeleHealth and then only in a limited number of states - about 10 or so.

Does this match with your trial experience, Abarsanti?

@billgem: Bill, isn’t your “Brain” device pretty well a dead ringer for this?

[image]

(Just wondering …)

[Hmmm … looks like they might be Canadian :canada:, too, with a name like “The Ontarian”]

$69/month for 36 months is the price on the whisper.ai site.

Yes. :rofl: :rofl: :rofl:

Yes, I see that. And there is also an option of $2499 flat. I guess I’ll be giving them another call in the morning!


I haven’t yet discussed pricing. I was only going off what was listed on the website.

I believe it was Otarion. But before my time…

https://hearingaidmuseum.com/gallery/Vacuum%20Tube/Otarion/info/otarione4.htm

There are a lot of people who like the tone of tube amps for their music. Where are these folks in the hearing aid realm? :wink:

WH

@WhiteHat: I have always only used Class A or AB tube amps, with the exception of my Ampeg bass head. That said, if my life depended on being able to distinguish between Cat, Bat, Rat, and Gnat, delivered by a guy with a good mic and an amp, I’d make sure it was through a solid state amp.

But for music, ain’t nothing like that “tube sag” (which is just a form of compression)!


It’s a confusing situation. Pricing, I’m told, is through the audiologist. There’s no good explanation for why $69/mo or $2499 is what’s on the website, but apparently the price a customer ultimately pays may be very different from what’s posted. That’s the best I can come up with.

@billgem: For YOU, $297.44/mo!


I went to the Whisper.ai site, but it’s still not clear to me how this system works.

Are the sounds captured by the “brain” and sent from it to the hearing aids? Is that correct?

So if it works as a kind of streaming of the surrounding sounds, isn’t carrying the “brain” in your pocket a problem? Can you identify where the sounds come from?

Since the processing power resides in the “brain”, why is it not suitable even for profound losses?

Thank you


Sounds are captured by the usual microphones on top of the hearing aid body. The hearing aids send some distillation of the sound to the Brain, and get something back from it that’s used to generate output to the receivers. Sound localization is pretty good for me. The hearing aids can also be used without the Brain, with some loss of function.

Whisper recently increased their fitting range to 90 dB. Their website credits an improved feedback cancellation system. At my last visit my HCP also mentioned that they’ve introduced higher-powered receivers. I don’t know what else they would need in order to cover losses like yours.