I have found the More1 aids have really helped me with conversation and speech understanding. But I have also found speech therapy and reading along with audiobooks to be a huge help too.
Interesting. Which hearing aid brands might implement these chips?
Off-topic: (for sure!) Like your new avatar, Chuck. First time I’ve noticed it. Nice outfit, especially the string tie. Now if we can only get Mr.@SpudGunner to show his true self! (maybe you guys can start a big trend?!). If Jim does it, then maybe @MDB, @Volusiano, or who knows who will be next?
My wife took that to show off the fact I have lost enough to be able to wear presentable clothes.
Sorry, Jim, that IS THE SPUDGUNNER.
He took that selfie with his iPhone! Right @SpudGunner ?
@flashb1024: Yup - that’s correct.
hmmmmmm… Whisper AI?
My reputation for gullibility precedes me! Apologies to Abram for derailing a serious and interesting thread.
The authors of the Nature article are associated with the German branch of a company called Audatic. Restoring speech intelligibility for hearing aid users with deep learning | Scientific Reports (link to author information for paper)
I did a Bing/ChatGPT search to see what hearing aid company Audatic might be associated with and got the following answer, which I’ve edited for brevity:
I’m sorry, I could not find any information about a German company called Audatic that is associated with hearing aids. The only Audatic I found is a Swiss company that develops systems to intelligently modify sound using artificial intelligence¹. They claim to provide a solution for millions of hearing aid users, but they do not seem to be a hearing aid company themselves¹.
Source: Conversation with Bing, 2/16/2023. (1) Home - Audatic. https://audatic.ai/. Accessed 2/16/2023.
Interesting. Sonova is a Swiss-based company (the parent company of Phonak). Research has to be world-class to get published in Nature. Interesting that they expect to get stuff that now requires a laptop (not even a smartphone!) to run on HA’s “within a few years.”
If you go to the Audatic site, you can play noisy sounds on the left of the screen and then listen to the AI-processed and cleaned-up speech on the right. It’s pretty amazing. The hearing aid that comes out with this tech, which doesn’t depend on beamforming and works 360 degrees all around, is going to take over the world! The Nature paper intro says the processing only requires a SINGLE microphone to listen to the sound and capture it for processing.
Edit/Update: Hearing aids, smearing aids… The product would sell a ton of earbuds, too, for the speech clarity it brings to listening in noisy places. Maybe there’s a similar web demo page for Whisper AI, too, @d_Wooluf? If such tech really can be miniaturized for earbuds or hearing aids, there’s going to be an arms race to shrink it down! Where is Ant-Man when we need him!
Nice detective work Jim. No, I haven’t seen any such demo for Whisper AI. Note: one system runs on a laptop computer, the other on something that fits in your pocket, so maybe not directly comparable. The obvious comment is that getting it (either of them I guess) to run on a hearing aid might be easier said than done.
Any new player that wants to develop a system that allows a hearing impaired person to hear as well in noise as they do in quiet is very welcome.
Side note: searching for Whisper AI is stupidly difficult now because there’s now something called “Whisper AI” by OpenAI.
Side note 2: Really must try ChatGPT. It scares me a little quite honestly.
Edit: Missed the Sonova reference. Is there a tie-in with Audatic?
LOL You beat me to it! Was also admiring cvkemp’s new avatar! He’s a PERSON!
As for the SPUD? Never! I absolutely ADORE that avatar. I used to do volunteer work for orangutans in Borneo and have a soft spot in my heart for any APE or monkey. So SPUD: don’t you dare change that avatar!
Well sheesh, I thought this was what my Phonak Lumity Life aids were doing more or less already?
<<The network achieves state-of-the-art denoising on a range of human-graded assessments, generalizes across different noise categories and—in contrast to classic beamforming approaches—operates on a single microphone. The system runs in real time on a laptop, suggesting that large-scale deployment on hearing aid chips could be achieved within a few years.>>
So if it takes a LAPTOP to make it work, I seriously wonder if it could be “miniaturized” enough to put it in our aids?
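For anyone curious what "single-microphone noise reduction" means under the hood, here’s a toy sketch of the classic (pre-deep-learning) approach, spectral subtraction: estimate the noise’s average spectrum and subtract it frame by frame. This is NOT the Audatic/paper method (their network learns the mapping instead of assuming stationary noise); it’s just a minimal illustration of why single-mic denoising is hard, with all numbers (16 kHz rate, 512-sample frames) chosen for the example.

```python
import numpy as np

def spectral_subtract(noisy, noise_clip, frame=512, hop=256):
    """Toy single-microphone denoiser (classic spectral subtraction).
    Subtracts the average noise magnitude spectrum from each frame,
    keeping the noisy phase. Deep-learning denoisers learn this
    mapping rather than assuming the noise is stationary."""
    window = np.hanning(frame)
    # Average magnitude spectrum from a noise-only clip
    noise_frames = [noise_clip[i:i + frame] * window
                    for i in range(0, len(noise_clip) - frame, hop)]
    noise_mag = np.mean([np.abs(np.fft.rfft(f)) for f in noise_frames], axis=0)

    out = np.zeros(len(noisy))
    for i in range(0, len(noisy) - frame, hop):
        spec = np.fft.rfft(noisy[i:i + frame] * window)
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)  # subtract, floor at 0
        out[i:i + frame] += np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame)
    return out

# Usage: a 440 Hz tone standing in for "speech", buried in white noise
rng = np.random.default_rng(0)
t = np.arange(16000) / 16000.0
speech = np.sin(2 * np.pi * 440 * t)
noise = rng.normal(0, 0.5, 16000)
denoised = spectral_subtract(speech + noise, noise)
```

Even this simple trick reduces steady noise, but it smears speech and fails on babble (other voices), which is exactly the gap the deep-learning systems aim to close.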
I was in to see my audi on Valentine’s Day, picked up my DEAD, 4-mo old Life aid that’s now been fixed, and while there, had her tweak some settings to improve my own speech comprehension in loud noise. Thanks to so much excellent advice here, I was able to draw her attention to the “speech banana” chart (my own dismal audiogram plotted in the netherworld UNDER that banana), and then delicately suggest we dive in and look at Noise Block and Speech Enhancer.
She made some changes, I detected a nuance of difference, and am now going to test out the new settings for a week or so to see how they perform. I admitted that if I just turn up the volume a couple steps while in “Speech in Loud Noise” I get another sliver of speech comprehension improvement… So I can always go back to that.
For now, I think that boosting the Speech Enhancer makes my aids more prone to FEEDBACK. Likely cuz the high-end frequencies have been dialed up a bit. Not sure I’ll keep that setting.
I’m encouraged to know that research is still ongoing to conquer THE HOLY GRAIL: being able to understand folks in noisy places! Cuz << Non-spatial (single microphone) noise reduction algorithms employed in hearing aids have so far not been able to provide improvements in speech intelligibility.>> I can vouch for that 100%. That is precisely why I was in to see my audi and have her tinker with some settings to get around an annoying tendency of Phonak’s program to turn the volume down on EVERYTHING when it’s in “Speech in Loud Noise”. Yes, even the person I’m facing. I can barely hear them; the volume is still TOO LOW.
You can get AI-based noise removal on a phone today: https://heardthat.ai
Demo here: How It Works | HeardThat™ Assistive Hearing App | HeardThat
Disclosure: I am associated with the company that makes HeardThat.
Interesting, but I think that completely removing the noise, as in those examples, is not exactly optimal.
I’ll give a trivial example: I’m in the car with a person who is talking to me and an ambulance approaches. With that algorithm, I risk not hearing the ambulance.
Or at the cinema: background noises have their relevance, even if they are sometimes genuinely excessive for us deaf people.
$9.99 a month for my limited usage is a bit much. I’d rather have a metered plan and pay by actual usage with perhaps a modest monthly account retainer fee.
Don’t know whether to wish for a particular outcome or not, but I would think one technology, just as for computer operating systems or Internet search, will eventually dominate. If earbud users also sign up in droves for the AI noise-removal feature, hopefully, any per-user subscription fee is going to come down as the number of subscribers increases, and, hopefully, too, ultimately much of the processing can be built into a phone, a battery-powered purse, or a belt device, if not the HA’s themselves. Most new smartphone chipmakers are putting NPU units on their chips, with Apple helping to lead the way in having processing done locally. If Microsoft can offer a user a ton of features in an MS 365 Business Essentials account, including AI-based voice-to-text transcription, 1 Terabyte of cloud space, etc., for $60 PER YEAR, $9.99 a month (2x more costly) for a single AI-noise removal service seems a bit pricey. But it’s great for consumers to have competitors trying to beat each other on best service deals and technology!
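Back-of-the-envelope math on the two price points mentioned above (taking the $9.99/month and $60/year figures at face value):

```python
# Annualize the $9.99/month subscription and compare with the
# $60/year MS 365 figure quoted above.
heardthat_annual = 9.99 * 12   # $119.88 per year
ms365_annual = 60.0            # quoted yearly price
ratio = heardthat_annual / ms365_annual
print(f"${heardthat_annual:.2f}/yr vs ${ms365_annual:.2f}/yr -> {ratio:.1f}x")
# -> roughly 2x the yearly cost
```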
Perhaps that is why Oticon put 2 microphones in each of my More 1 aids.
I could have sworn the hearing aid companies already claimed they solved the speech interpretation problem.
They need to claim that to appear competitive with their rivals who make such claims.
Unfortunately false advertising charges and penalties are non existent and usually profit the lawyers rather than the users.
I tested the app, and I couldn’t handle the latency. In my opinion, for this tech to be successful it would have to be implemented on the DSP of the HAs or headphones themselves.
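A rough latency budget shows why a phone-based app feels laggy. Frame-based models have to buffer audio before they can process it, and routing through a phone adds wireless transport on top. The numbers below are illustrative assumptions (16 kHz sample rate, 512-sample frames, ~100 ms wireless round trip, ~10 ms inference), not measurements of HeardThat or any specific product:

```python
SAMPLE_RATE = 16000   # Hz, typical for speech models (assumed)
FRAME = 512           # samples buffered per analysis frame (assumed)

buffer_ms = FRAME / SAMPLE_RATE * 1000   # 32 ms just to fill one frame
wireless_ms = 100                        # rough phone<->ear round trip (assumed)
compute_ms = 10                          # assumed per-frame inference time
total_ms = buffer_ms + wireless_ms + compute_ms

print(f"buffering: {buffer_ms:.0f} ms, end-to-end: {total_ms:.0f} ms")
```

Hearing-aid wearers generally start to notice echo/comb-filter effects when processed sound lags the direct sound by more than about 10 ms, so an end-to-end figure in the 100+ ms range is very perceptible, which supports the point that the processing ultimately needs to live on the aid’s own DSP.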