Concerns about the noise problem

But even so, if an HA is only about the equivalent of a 1990s Pentium Pro, and if signal-in-noise processing could be off-loaded to a nearby phone (whose processing power now approaches a low-to-mid-level laptop), one would think that better noise reduction and signal enhancement could be achieved than with a measly old HA processor. A phone battery is also considerably bigger, so it could support the additional computational effort. Even if all that off-loaded processing did was compute the best algorithm to use over the next few milliseconds, perhaps that would improve HA performance, because (ignorance showing here) the HA has to figure out the best on-board program to use, then switch to and run that program, whereas another wearable might be able to figure that out a lot faster: "switch to program #3, dummy!" Similarly, maybe a connected device listening to the same sounds could figure out the best amount of noise reduction, frequency shifting, or whatever, to fine-tune the program it picked and instruct the HAs to tweak their onboard program accordingly for the present environment. Perhaps HAs already try to do this with their limited processing power - and that's where all the "60% improvements…" come from - so they have need of help and could do it even better if more were available?
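The "another wearable picks the program" idea above can be sketched in a few lines. This is only a toy illustration of the architecture: the phone classifies the acoustic scene from coarse features and sends just a program index to the aid, so no latency-critical audio ever crosses the wireless link. The scene classes, feature names, and thresholds here are all made up for the example.

```python
# Sketch of a companion device choosing a hearing-aid program.
# Only a small integer (the program index) needs to reach the aid,
# so this link is latency-tolerant, unlike streaming raw audio.
# All classes and thresholds below are illustrative assumptions.

def classify_scene(level_db, speech_ratio, modulation):
    """Map coarse acoustic features to a hearing-aid program index."""
    if level_db < 45:
        return 1                      # quiet environment
    if speech_ratio > 0.6 and level_db > 65:
        return 3                      # speech in loud noise
    if modulation > 0.5:
        return 4                      # music
    return 2                          # general

# A phone-side loop would periodically measure these features and,
# on a change, tell the aid: "switch to program N".
print(classify_scene(level_db=70, speech_ratio=0.8, modulation=0.2))  # 3
```

The point of the design is that the heavy analysis runs on the phone, while the aid only has to honor a program switch it can already do on its own.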

No. I feel hearing speech in noise has greatly improved. I trialed the Costco KS8 (Signia Nx 312) and while it needed some tweaks to get there, it was very good in noise. I settled on, for now, the Phonak Brio 3 (Phonak Audeo B90) and it is very good right out of the box. I have had some tweaks and now I’m not missing much at all in difficult situations. They are both miles ahead of my backup Resound Verso.

1 Like

I've been wearing hearing aids since I was 6, some 40-odd years ago. Hearing aid advances are speeding up all the time. From 1976 to 2006 I wore analog aids, with no real changes, to be quite honest. The Phonak Savia 311 dSZ was a huge jump. Then the Oticon with the neck-loop Streamer Pro 1.3 was another huge jump. I have spent the past 6 months trying out several brands and models. The OPN and the LiNX 3D are better still. For me, Signia sucked and would be bested by the 11-year-old Phonak. Tomorrow, I go to the Quattro. I anticipate that will hold me for a few years.

By the way…you aren't going to get the top of the line for $50 from Wal-Mart. The over-the-counter stuff that will be available will be several iterations behind the current top end. You aren't owed anything. The big 'cartel' isn't going away…you had better hope it doesn't. Research, miniaturization, efficiency, more bells and whistles…it costs. That R&D must pay. So the bleeding edge is going to cost. That is the way it is. As the iDevice generations blow out their ears in great numbers, prices may drop due to volume.

100 years ago, you had a horn…

1 Like

Yes, glucas, I agree with you. My objectives when I buy new HAs (about every 6 years) are about 90% speech clarity in noise (e.g., the restaurant scenario) and about 10% everything else.

But I understand that others have different objectives. A while back, I was flamed on these boards for jokingly criticizing press releases for new HA models from two manufacturers, which touted lots of new capabilities and features but didn't mention any improvement in speech-in-noise capabilities.

2 Likes

Would need to be some awfully nice bells and whistles to upgrade for those alone.

It all depends on the individual does it not?

Sure, just wanted to clarify. The issue, I think, is not whether or not there have been improvements in the last 20 years or so. Certainly there have been in your lifetime and mine. I have a similar history to yours: I first started wearing analog hearing aids in 1978, and I agree that there have been seismic changes since then, particularly from around 1997 (Sonoforte DAZ Autozoom - introduction of directionality), followed by digital aids, followed by Bluetooth, yadda yadda.

I think the focus of my post is on the last 2 years, in particular on the marketing and product-launch spiel that we see. Most of it is heavily focused on mobile-phone connectivity, rechargeability and, increasingly, apps and at-home audiometry. All well and good - these are very important and have their place - but I would like to know which companies are making the most ground in dealing with the noise problem, and in that respect there has been very little news. That's a shame, as there have been some great reports coming out about aids like the Bernafon Zerena and the ReSound LiNX 3D. But there are no metrics and hardly any literature; the focus is on how many hours can be streamed on a single charge, or the dynamic range of an aid.

I think my beef is more about some of the marketing messages coming out of the HA industry than about what is actually going on in terms of improvement. Phonak spent almost its entire campaign last year banging on about rechargeables because of some marketing survey that said it was one of the features that would attract newbie users. Of course, if Phonak makes money, it gets ploughed back into R&D, so it ends up being a good thing. But I can't help but feel frustrated.

4 Likes

https://www.audiotelligence.com/news/audiotelligence-raises/

I understand the original poster’s complaint that perhaps not enough progress is being made on true improvements in hearing vs. bells/whistles. That said, I’m not sure I agree that the only real improvement in the last several years has been OPN.

I actually feel that Widex has introduced some major improvements with the Evoke platform - both in terms of its own programming/algorithms for sound and in the dynamic ability for me as a user to tailor my own preferred sound in every environment, something I used to have to rely on visits to the audiologist for, while attempting to describe for him what was happening (i.e., "sound too tinny/crunchy in a restaurant"). These visits were obviously well after the fact of when I needed the adjustment, and then relied on my audiologist's best interpretation of what I was saying and how to adjust to my satisfaction. I love being able to tune the sound on the fly, but I recognize I'm a more involved user than many folks who just want the device to work perfectly and automatically in every situation.

Chris

2 Likes

Great news! Thanks for the heads up.

Same here. I self-program. I recently noticed, after 5 years of wearing Phonaks every day, all day, that sibilants were harder to hear. I go in about once every six months and fine-tune the settings. This helps each time. Today I noticed the wind rustling in the trees. Nice.

2 Likes

In-situ audiometry as done through the hearing aid software is just a way to determine whether you are actually hearing what the hearing aid is outputting. It can give you some insight into how far off the hearing aid is, given a particular coupling, from a proper test (although that comparison is confounded when users do their own test, as their own definition of their thresholds tends to diverge from the established definition of a hearing threshold). In-situ audiometry does not speak to whether the hearing aid gain at your eardrum is what it is supposed to be relative to established independent prescriptive targets. We already know that manufacturer first-fit output is generally off target.

2 Likes

Yeah. It’s all crap.

1 Like

The folks who make hearing aids are in the business to make money; helping people hear better is just a side effect. Right now, things like phone connectivity and direct streaming are popular and make a good profit. They are still doing research on improving intelligibility in noise; the results are just not quite here yet. Whoever comes up with the first really effective device will make a killing for a few years.

Having software on one's phone do all the actual processing while the hearing aid is just an input/output device is an interesting concept. In that instance, the real magic would be performed by software, which might be like a genie: once released, it might not be possible to get it back in the bottle. I am sure the hearing aid makers won't want all those R&D dollars flying out the window because of program sharing by users.

As for me, I have yet to find an aid that actually lets me understand human speech, even in quiet settings. I am quite content being hard of hearing in most instances, except when trying to communicate with people I can't understand. There might be aids out there that would work for me, but I don't have the resources to try a bunch of different ones. I currently have Phonak Audeo V90 and B50 aids, neither of which is effective.

John, I am confused. You should be able to hear very clearly in quiet settings in spite of your loss, which is moderate to severe/profound, with well-fitted hearing aids. You don't even need top-of-the-range aids; any decent mid-range aid from any of the 6 major manufacturers would do the job. That said, you have good Phonaks. Are you sure the programming is good? Have you had a REM test? Do you use SoundRecover/frequency lowering?

2 Likes

I don't think it's feasible to off-load the processing to the phone and then upload it back to the hearing aid, because there'd be too much latency in the data transfer back and forth in real time. Furthermore, hearing aids must be stand-alone, and manufacturers can't expect everyone to have a smartphone to handle the processing in the first place.
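The latency concern can be put in rough numbers. This is only a back-of-the-envelope sketch; the per-hop Bluetooth figures, buffering time, and comfort threshold below are illustrative assumptions, not measured values for any specific aid or codec.

```python
# Back-of-the-envelope latency budget for off-loading hearing-aid
# audio processing to a phone. All numbers are illustrative assumptions.

def round_trip_latency_ms(uplink_ms, processing_ms, downlink_ms, buffer_ms):
    """Total delay from microphone to receiver when the phone does the DSP."""
    return buffer_ms + uplink_ms + processing_ms + downlink_ms

# Assumed figures: ~30 ms per Bluetooth hop (aptX Low Latency class),
# ~5 ms of phone-side DSP, ~4 ms of audio frame buffering.
total = round_trip_latency_ms(uplink_ms=30, processing_ms=5,
                              downlink_ms=30, buffer_ms=4)

# Hearing aids typically keep mic-to-ear delay around 10 ms or less so the
# amplified sound doesn't echo against direct sound leaking past the fitting.
DIRECT_SOUND_COMFORT_MS = 10

print(f"round trip: {total} ms")          # round trip: 69 ms
print(total <= DIRECT_SOUND_COMFORT_MS)   # False: too slow for live sound
```

Even with generous assumptions, two wireless hops plus buffering lands well above the delay a wearer can tolerate for live sound mixed with direct sound, which is the crux of the objection above.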

John, I find it strange how you are struggling with your good hearing aids.

I wear top-of-the-range Phonaks and hear very well. My loss is also worse than yours.

Are you sure you are programming them right? I would get a REM test performed.

1 Like

If I remember correctly, he has a lot of problems being able to tolerate the hearing aids. Not finding them much help and finding them intolerable is a vicious circle.

2 Likes

Healthy hearing solves the cocktail-party problem by the brain controlling the outer hair cells to amplify or mute the sounds it does or doesn't want to hear.
In other words, if you want hearing aids to solve the cocktail-party problem, you would need them to connect to the brain, which is not possible with current technology. I also think you would need more sensitive microphones.
There is some hope for the near future in machine learning/AI, which should make it possible to amplify only human voices. Researchers already do that, but for now the processors that fit in hearing aids are not powerful enough.
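The "amplify only human voices" idea boils down to applying frequency-dependent gain. Below is a deliberately crude sketch with NumPy: a fixed spectral mask that boosts the classic telephone voice band and attenuates everything else. Real ML denoisers learn time-varying masks from data rather than using a fixed band, so this is only a minimal illustration of the masking principle, and the band edges and gains are arbitrary choices.

```python
# Toy frequency-domain "speech emphasis": boost the ~300 Hz-3.4 kHz
# voice band and attenuate everything else. Real ML denoisers learn
# time-varying masks; this fixed mask only illustrates the principle.
import numpy as np

def emphasize_voice_band(signal, fs, lo=300.0, hi=3400.0,
                         boost=2.0, cut=0.25):
    """Apply a fixed spectral gain mask: boost in [lo, hi], cut elsewhere."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    gain = np.where((freqs >= lo) & (freqs <= hi), boost, cut)
    return np.fft.irfft(spectrum * gain, n=len(signal))

# Demo: a 1 kHz "voice" tone buried together with a 6 kHz "noise" tone.
fs = 16000
t = np.arange(fs) / fs
voice = np.sin(2 * np.pi * 1000 * t)
noise = np.sin(2 * np.pi * 6000 * t)
out = emphasize_voice_band(voice + noise, fs)

def band_power(x, fs, f):
    """Magnitude of the FFT bin at frequency f."""
    spec = np.abs(np.fft.rfft(x))
    return spec[int(f * len(x) / fs)]

# After masking, the in-band 1 kHz component dominates the 6 kHz one.
print(band_power(out, fs, 1000) > band_power(out, fs, 6000))  # True
```

A learned system replaces the fixed `gain` array with a mask predicted frame by frame, which is exactly the part that currently demands more compute than a hearing-aid processor offers.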

1 Like

Don't know if the sound quality of aptX Low Latency is good enough for high-quality HA sound, but it's a BT compression standard owned or originated by Qualcomm that has only a 32 ms latency. aptX - Wikipedia

An additional thought is that possibly, in a re-envisioned world, the HA's need only be high-quality speakers in your ears. And maybe they could even be (lightly) wired, like earbuds, if that conquered the latency (and also battery-life) problems. Something like Bose Hearphones. Maybe for most except the very active, the HA battery, processor, and microphones could actually be in a necklace that you wore around your neck. You could have a much bigger battery and a much faster processor, and the necklace could contain an array of microphones all around - front, sides, and back.

Also (back to the phone as a processor): if the phone could both listen and process speech, and the communication with the HA's were simply one-way - playing the phone sound to your HA's only - that would cut the latency issues in half (only going phone to ears, not back and forth), and maybe aptX Low Latency would be good enough. But you'd lose the advantage in microphone placement that evolution has given your ears. According to previous Wikipedia "research" that I did, humans can actually make do with an audio latency of up to 125 ms before things like lip-sync issues start to kick in. Qualcomm unveils aptX Adaptive to provide high audio quality with low latency - and see Abram Bailey's original post at the top of the linked thread. The Wikipedia article on aptX does not have any info on aptX Adaptive, Qualcomm's latest Hi-Fi codec version, which is the point of Bailey's post.

According to this Engadget article, aptX Adaptive is simply the fusion of Qualcomm’s aptX HD and aptX Low Latency standards: aptX Adaptive Bluetooth audio delivers low latency and high quality (and I think aptX Adaptive requires BT 5.0)

Glucas, I agree. I should be able to hear clearly with the aids I have. I read here frequently where people have gotten aids and are thrilled with the performance. I envy them their success. I think perhaps part of my problem is that I have lived with my condition for so long that it might not be possible for anything to be done. The audiologist who performed my hearing test, the results of which are posted, told me she didn't think I could really be helped. She said to forget about anything above about 3 kHz and try to get at least some hearing back in the 2 to 3 kHz range. The old standard for telephony for understanding voices was 300 Hz to 3.3 kHz. I have lowered the gain of my aids above 3 kHz but couldn't tell any difference. And, yes, I have had a professionally fitted set of aids. I returned them because they didn't help.