The ability to hear speech in background noise has been described as the holy grail of the hearing aid industry. Each company has its own technology to deal with this and its own set of promises about what users/customers can expect.
But here’s the problem. What about when the background noise IS speech?
Believe me, that IS the case. Background noise is going to be a minestrone soup of ambient noises and lots of BLABBERING people - now at higher volumes as everyone struggles to be heard.
My own daydream is for aids to somehow tune out sounds outside the human speech frequencies - or I suppose some AI algorithm could identify speech by its pattern and boost that?
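The band-limiting half of that daydream is simple enough to sketch offline. Here's a minimal Python toy using scipy (my own illustration, not anything a real aid actually runs); the 300-3400 Hz band is just the classic telephone speech band:

```python
# Toy sketch of "tune out everything outside the speech band":
# band-pass a signal to roughly the telephone speech band.
# The band edges are illustrative, not what any real aid uses.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 16000                   # sample rate in Hz
LOW, HIGH = 300.0, 3400.0    # nominal speech band

# 4th-order Butterworth band-pass, as second-order sections for stability
sos = butter(4, [LOW, HIGH], btype="bandpass", fs=FS, output="sos")

def speech_band_only(x):
    """Attenuate energy outside the nominal speech band."""
    return sosfilt(sos, x)

# demo: a 100 Hz hum (outside the band) vs a 1 kHz tone (inside it)
t = np.arange(FS) / FS
hum = np.sin(2 * np.pi * 100 * t)
tone = np.sin(2 * np.pi * 1000 * t)
print("hum power  before/after:", round(np.mean(hum**2), 3),
      round(np.mean(speech_band_only(hum)**2), 3))
print("tone power before/after:", round(np.mean(tone**2), 3),
      round(np.mean(speech_band_only(tone)**2), 3))
```

The catch, and it's the whole point of this thread: competing talkers sit in exactly the same band, so band-limiting alone can't separate the voice you want from the voices you don't.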
Normal aids aren't there yet, but there are accessories like the Phonak Roger V2 that help stream audio to one's aids. It's just that even these accessories often pick up the very background noises we're trying to get rid of!
This is why I dislike Phonak's directional take on speech in noise, as it often tries to mute the voice I'm trying to hear. Not every person in a group conversation is directly in front of you. I understand some deal with it via the premise that background noise (voices or otherwise) is often more distant and lower in frequency.
I think Signia is close: their newest IX platform does just that, and the previous AX platform added control over how much to amplify nearby sounds versus ones farther away (this function helps me a lot).
They detect your own voice too, processing it separately and using it as one more cue for conversation detection.
Hearing speech in background noise is definitely the worst case scenario for all hearing aids. Based on my own personal experience, hearing speech in noise actually falls into two different categories. One is worse than the other.
Category 1: Speech in random noise - Most new hearing aids deal with this situation well. This is where you are trying to hear speech and the loud background noise is machine-made or random: loud fans, airplane cabin noise, crowd noise from a very large gathering, the sound of the wind/ocean at the beach, traffic, etc.
Category 2: Speech in a noisy place with many nearby conversations - This is literally the worst-case scenario for hearing aids and where we all suffer the most. You are in a very crowded and noisy restaurant. You are trying to talk to the person in front of you at the table, and there is a very loud person talking at the table next to you and another at the table behind you. The hearing aids aren't sure which conversation is the dominant one (i.e. the one you want to listen to). Phonak's old approach with StereoZoom was to focus only on the person you were looking at. The newer hearing aids from Phonak and others are 360-degree aware, and they get mixed up if the people at the next table are louder than the person you are talking to. This is the harder situation to deal with, as the toy illustration below shows.
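Here's a toy Python illustration (my own, not any manufacturer's actual algorithm) of why Category 2 is so much harder than Category 1: stationary machine noise is statistically unlike speech, but a competing talker isn't, so if an aid picks the "dominant" conversation by level alone, the loud neighbor wins:

```python
# Toy illustration (not any vendor's real algorithm): picking the
# "dominant" talker purely by loudness selects the loud neighbor,
# not the quiet partner in front of you.
import numpy as np

rng = np.random.default_rng(0)
# rough signal levels at the aid's microphones (arbitrary numbers)
talkers = {
    "partner in front (0 deg)": 0.05 * rng.standard_normal(16000),
    "loud neighbor (90 deg)":   0.20 * rng.standard_normal(16000),
    "table behind (180 deg)":   0.10 * rng.standard_normal(16000),
}

def rms(x):
    return float(np.sqrt(np.mean(x ** 2)))

levels = {name: rms(sig) for name, sig in talkers.items()}
dominant = max(levels, key=levels.get)
print({k: round(v, 3) for k, v in levels.items()})
print("level-based pick:", dominant)  # -> the loud neighbor, not your partner
```

The information about which talker you actually care about exists only in your head; no level or direction cue carries it.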
I blamed my hearing aids for two years of use: Phonak Audeo Paradise P90Rs.
It wasn’t the hearing aids; the setup was at fault. My audiologist didn’t know how to set up those hearing aids.
JordanK, you helped me so much. I chose to go to a hearing aid practitioner with a business named “Hearing Well Matters” in Burlington.
I asked him what was wrong.
He said my old Paradise P90 setup had the wrong domes specified. They were set up for open domes, but I had been using closed domes for at least six months, through several setup visits to the audiologist. My audiologist had run REM three times in that period.
He said that the communication between the left and right hearing aids was not toggled on, so the hearing aid features that depend on it didn't work.
I should add that the same audiogram was used, and it was a new one. The result was a magical improvement in how well I heard - a first in my 20 years of HA use and audiograms.
What I know is that they work far better now. My wife said “difference is night and day…and…What took you so long?”
I apologize. It’s been a horrible experience that has lasted two years.
My take-away is that my hearing aids are good. And getting better.
JordanK thank you so much. I appreciate your help and your posts.
Yes, Phonak is definitely moving this forward. In addition, Starkey is doing some interesting things by combining the use of directional microphones with noise reduction technology and AI to scan the environment for different kinds of noise (speech vs other).
As I understand it, the original question posed is “What if the background noise IS speech itself?” Meaning that if you have multiple sources of speech all around you, how would the hearing aids be able to know which speech you want to hear and which ones you want to filter out?
Phonak's Speech Sensor simply detects where the speech sources are and opens up the beamforming toward those speech sources accordingly. The screenshot below from Phonak shows what Speech Sensor does. The gray area is the beamforming field.
So if you take the middle scenario below with the 4 speech sources, Phonak simply opens up the whole field to let you hear all 4 speech sources, but Phonak doesn't really know which ones are “noise” to you and which ones you WANT to focus on. So the Phonak Speech Sensor is not a solution that filters out the undesired speech sources and keeps the desired one.
In the same vein, Oticon's speech detector does the same thing. It simply opens up the field to let you hear any of the speech sources it detects, and it's up to YOU and your brain hearing to do the filtering. No hearing aids, even with any kind of AI, can read your mind to know which of the speech sources you want to focus on.
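For anyone who likes to see the logic spelled out, here's a rough Python sketch of that beam-opening behavior (my own guess at the logic based on the Phonak graphic, not anyone's actual code):

```python
# Rough sketch of the beam-opening behavior described above (my guess
# at the logic, not Phonak's or Oticon's actual code): every sector
# where a voice detector fires gets full gain, so wanted and unwanted
# talkers pass through alike.
SECTORS = ["front", "right", "back", "left"]

def sector_gains(speech_detected):
    """Open the beam toward any sector where speech was detected."""
    return {s: (1.0 if speech_detected.get(s) else 0.2) for s in SECTORS}

# the middle scenario from the graphic: talkers on all four sides
print(sector_gains({"front": True, "right": True, "back": True, "left": True}))
# -> every sector at full gain: the aid "hears" everyone, and your
#    brain still has to do the filtering

# the forced full-frontal workaround described next
print(sector_gains({"front": True}))
```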
The best you can do in that situation is to simply disable Speech Sensor for Phonak and go to a program that forces simple, full frontal beamforming, and you'd have to turn to face that person. With Oticon, you can also set up a program with Full Directional in its Directionality Settings to do the same thing. Oticon also allows that via the Sound Booster option that you can turn on in the app.
It's really best to learn to develop your brain hearing so you can differentiate between the speech sources, focusing on the one you want to hear and ignoring the others. If it's just general babble (not intelligibly clear speech), then the HAs may recognize it as noise and try to suppress it for you somehow. But my understanding of the OP's original question is about suppressing clear speech sources, not just general unintelligible babble.
Whatever happened to hearing aids that you put in your ears and they worked?
My first HAs were that good because of the audiologist who provided them and set them up.
After every visit I heard better. Got them in Sept 2014. Thanks to Lydia Kreuk!
My third set of Phonaks is pretty darn good, now that they are set up properly. Thanks to “Hearing Well Matters”
edit: The Phonak Audeo Paradise P90Rs were absolutely horrible in a meeting session. Large table, 12 chairs occupied around the rectangular table. I couldn't understand the talk at all.
Now that the HA’s are set up better I’ll be paying attention to meetings like the one I describe.
edit: I believe it should be easy to prevent setup mistakes like the ones I had to endure for 2 years of use.
I believe that it should be extremely easy to move from one hearing aid model to another when it’s time to upgrade. It shouldn’t be stressful! It shouldn’t compromise the hearing we have; it should only improve performance, not degrade it.
Why get new hearing aids if they don’t work well?
Let’s challenge the industry to help us hear speech and music better in background noise.
I notice that there are some Whisper.ai alumni on this thread. I remember reading about or hearing (on YouTube) Andrew Song of Whisper saying that Whisper had developed AI algorithms that would COMPLETELY eliminate background noise. But, IIRC, he said that such an algorithm was too disorienting for a user.
It seems to me that AI algorithms are the direction HA manufacturers are likely to go in order to find the speech-in-speech-noise holy grail. I wonder what has happened to Whisper's technology.
I have Oticon Xceed 1, and when I was on vacation and had lunch at a restaurant, I felt that my hearing aids reduced the range of the microphones and only worked in the area around my table. When I got up from the table for the next dish or drink, I heard everything that was happening in the entire hall.
Yes, Volusiano, you posed the question as I intended, i.e. speech when the background noise is other speech.
With regard to brain hearing, what happens when your brain is old and not doing as good a job as it used to - as is the case with all parts of an aging body? Most users of hearing aids are, in fact, older.
I hear what you're saying, @billgem. For old folks who still have very good hearing, I guess they have the advantage of their brain hearing continually staying sharp and in good shape, so it's less effort for them to use their brain hearing as they get older. But for old folks who become hard of hearing earlier on, their hearing loss has dulled their brain hearing, so even with good hearing aids, they will still have to retrain their brain hearing or else rely on heavy directional beamforming to zoom the focus in toward their front.
I was just thinking, wouldn't it be cool if HA companies were advanced enough to pair HAs with special eyeglasses that could auto-caption what people say in real time, to aid our understanding of what people are saying in front of or even behind us? Just like watching a movie with subtitles. I think there'd be tremendous value in it! It'd be even better if they integrated both the HAs and the eyeglasses into the same wearable.
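The core of that idea is already buildable on a laptop. Here's a bare-bones Python sketch using the open-source openai-whisper and sounddevice packages (my illustration of the concept only; the model and chunk size are arbitrary choices, and real glasses would render the text on a display rather than print it):

```python
# Bare-bones live-captioning sketch: record short chunks from the
# default microphone and transcribe them with an offline model.
# pip install openai-whisper sounddevice
import sounddevice as sd
import whisper

MODEL = whisper.load_model("base.en")  # small offline speech-to-text model
SAMPLE_RATE = 16000                    # whisper expects 16 kHz mono audio
CHUNK_SECONDS = 5                      # one caption line every ~5 seconds

def caption_loop():
    while True:
        # grab a short chunk of audio
        audio = sd.rec(int(CHUNK_SECONDS * SAMPLE_RATE),
                       samplerate=SAMPLE_RATE, channels=1, dtype="float32")
        sd.wait()
        # transcribe; glasses would draw this on a heads-up display
        result = MODEL.transcribe(audio.flatten(), fp16=False)
        print(result["text"].strip())

if __name__ == "__main__":
    caption_loop()
```

The hard parts for a wearable would be latency, battery, and deciding which talker to caption - the same problem the aids themselves face.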
Signia is doing this already: the AX platform added two processors for splitting background and foreground streams, and IX is adding multiple tracking beams for nearby voices.
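For the technically curious, the split-stream idea can be caricatured with two back-to-back cardioid beams from a two-microphone array. This is my own much-simplified Python toy, not Signia's proprietary processing:

```python
# Much-simplified sketch of a foreground/background split: two mics
# spaced ~2 cm apart form back-to-back cardioid beams, each stream
# gets its own gain, then the two are recombined.
import numpy as np

FS = 16000
DELAY = 1  # samples; at 343 m/s this corresponds to ~2.1 cm mic spacing

def delayed(x, n):
    """Shift a signal right by n samples, zero-padding the front."""
    return np.concatenate([np.zeros(n), x[:-n]])

def split_streams(mic_front, mic_rear):
    """Back-to-back cardioids: foreground looks ahead, background behind."""
    foreground = mic_front - delayed(mic_rear, DELAY)
    background = mic_rear - delayed(mic_front, DELAY)
    return foreground, background

def recombine(fg, bg, fg_gain=1.0, bg_gain=0.3):
    """Per-stream gains: a crude 'nearby vs farther away' control."""
    return fg_gain * fg + bg_gain * bg

# demo: a talker straight ahead reaches the front mic first...
s = np.random.default_rng(1).standard_normal(FS)
fg, bg = split_streams(s, delayed(s, DELAY))
print("front talker -> foreground power %.3f, background power %.3f"
      % (np.mean(fg ** 2), np.mean(bg ** 2)))
# ...and is cancelled from the background stream entirely
```

The two gains in recombine play the role of the nearby-versus-farther control mentioned earlier in the thread.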
@Reginald, I’m not sure I understand. In another thread you said,
“The beamforming tech needs to be tested/reviewed first, then we’ll see. LE Audio is an industry standard that they’re adopting, the hardware actually remains unchanged from the Previous AX platform [in the case of RICs]. Since it’s all software and it isn’t all in yet even, I remain unimpressed.
It feels like a very forced marketing move.”
Did you mean that only the hardware that supports LE Audio is unchanged from the previous platform? Or that the entire HA hardware is unchanged from the AX platform and is just being utilized differently via the IX platform? It would seem unusual for a manufacturer to introduce a new platform but not upgrade the hardware. I'm sure I must be missing something obvious. Thanks for responding and clarifying.
I have the Philips 9040, having come from Phonak with the KS10. The first time I was in a restaurant, there was loud speech in the background at a table directly behind us. I was annoyed that I could hear too much of the conversation behind me. I never had that with the KS10 because the KS10 shut down the stuff behind me. I can switch programs to make the Philips more like the KS10, but I prefer to get used to hearing all the sounds, so I'm leaving them in the “General” program and re-learning how to focus on what I want to listen to.