Dynamic Range Compression and Noise: a video explaining the speech-in-noise challenges of HAs and an alternative approach for better performance in noise

One of the lead researchers, who himself wears HAs, presents audio examples of the speech-in-noise challenges of HAs; the mathematical modeling that shows why and when current HA processing will degrade the output; and a possible solution using multiple microphones and applying compression selectively to different parts of the input. Note that in one image toward the end of the video, one of their lab test conditions uses 16 mics mounted on eyeglasses. A key aspect of their approach is to treat sound processing like studio post-production, where each instrument or voice is processed separately and then mixed; in contrast, much HA processing (e.g., compression) occurs after all of the input has been mixed, which makes it mathematically impossible for HAs to improve speech in noise when three fairly common real-world situations occur.
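
To make the ordering point concrete, here is a minimal numerical sketch of the two signal chains (not the lab's actual algorithm: the toy signals, the `wdrc_gains` helper, and every parameter value are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000                                   # sample rate, Hz (toy value)
t = np.arange(2 * fs) / fs

# Toy stand-ins for real recordings: "speech" is a tone that switches on and
# off (word-like bursts with pauses); "noise" is a steady background.
speech_on = np.sin(2 * np.pi * 1.0 * t) > 0
speech = 0.5 * speech_on * np.sin(2 * np.pi * 440 * t)
noise = 0.05 * rng.standard_normal(t.size)


def wdrc_gains(x, knee_db=-30.0, knee_gain_db=20.0, ratio=3.0, block=160):
    """Block-wise static compression curve: below the knee the gain is a fixed
    knee_gain_db; above it, the gain falls by (1 - 1/ratio) dB per input dB.
    Returns the linear gain applied to each sample.  (No attack/release
    smoothing, frequency channels, or prescription targets -- a sketch only.)"""
    gains = np.empty(x.size)
    for i in range(0, x.size, block):
        seg = x[i:i + block]
        level_db = 20 * np.log10(np.sqrt(np.mean(seg ** 2)) + 1e-12)
        gain_db = knee_gain_db + max(level_db - knee_db, 0.0) * (1.0 / ratio - 1.0)
        gains[i:i + block] = 10 ** (gain_db / 20.0)
    return gains


def snr_db(s, n):
    """Long-term speech-to-noise ratio in dB."""
    return 10 * np.log10(np.sum(s ** 2) / np.sum(n ** 2))


print(f"input SNR                      : {snr_db(speech, noise):5.1f} dB")

# (a) Compress AFTER mixing (the usual single-channel HA chain).  One gain
# track, driven by the mixture, hits speech and noise alike: the gain is
# pulled down while speech is present and jumps up in the pauses, so the
# noise "pumps" and the long-term output SNR falls below the input SNR.
g = wdrc_gains(speech + noise)
print(f"compress-after-mix output SNR  : {snr_db(g * speech, g * noise):5.1f} dB")
print(f"  gain while speech is present : {np.median(g[speech_on]):4.1f}x")
print(f"  gain during speech pauses    : {np.median(g[~speech_on]):4.1f}x")

# (b) Compress each source separately, THEN mix (the studio ordering the
# video describes).  The speech channel gets its compression; the noise
# channel is controlled independently (left untouched here, but it could be
# attenuated outright), so speech-driven gain swings never reach the noise.
# A real device has no access to the clean sources -- approximating this
# separation is what the multi-microphone work is aiming at.
speech_out = wdrc_gains(speech) * speech
print(f"compress-then-mix output SNR   : {snr_db(speech_out, noise):5.1f} dB")
```

With these made-up numbers, compressing the mixture boosts the speech pauses several times more than the speech itself and loses several dB of long-term SNR relative to the input, while the compress-then-mix chain does not suffer that loss. The exact figures mean nothing; the direction of the effect is what the video is describing.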

While I expect it might be a long while before such research has a positive practical outcome for me/us as current HA users, it is nice to see the level of analysis that some labs (this one from the U. of Illinois) are applying to the challenges. Another is Purdue's research (Joshua Alexander et al.) on frequency lowering, which already offers practical guidance for applying settings.

See https://publish.illinois.edu/augmentedlistening/dynamic-range-compression-and-noise/

{Personally, as I am about to embark on some self-programming, the above video presentation made it clear why, with many things about audio and HAs, there is no free lunch. Tweaking settings might improve certain aspects of sound quality while simultaneously degrading others, which is one of the reasons we have multiple programs with different settings; the bad news is that within a given program, perhaps especially for speech in noise and in loud noise, the downsides of tweaks can often outweigh the upsides.}


With the caveat that I haven't yet read the material you posted . . . independent post-processing and remixing is what Signia is claiming with their new AX platform.

@mingus Thanks for the input. The article and video suggest that the optimum would be independent PRE-processing; my guess is that it would be applied to the analysis of sound information from multiple microphones, followed by mixing. But almost any approach that is more like a scalpel than a chef's knife holds promise.
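
For a sense of what a multi-microphone "pre-mix" stage can buy before any compression is applied, here is a minimal delay-and-sum sketch (a textbook technique used only as an illustration, not the Illinois lab's method; the array geometry, sample delays, and signal levels are all invented):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 16000                          # sample rate, Hz
n = 2 * fs                          # two seconds of signal
t = np.arange(n) / fs
num_mics = 4
lag = 2                             # interferer arrival lag between adjacent
                                    # mics, in samples (invented geometry,
                                    # roughly 4.3 cm spacing at this rate)

# Target talker at broadside: reaches every microphone at the same instant.
target = 0.3 * (1 + 0.5 * np.sin(2 * np.pi * 3 * t)) * np.sin(2 * np.pi * 300 * t)
# Interfering noise arriving from the side: each mic hears it slightly later.
interf_src = 0.3 * rng.standard_normal(n + num_mics * lag)

# Per-microphone "tracks", kept separate instead of being summed immediately.
mic_target = [target.copy() for _ in range(num_mics)]
mic_interf = [interf_src[m * lag : m * lag + n] for m in range(num_mics)]


def snr_db(s, x):
    return 10 * np.log10(np.sum(s ** 2) / np.sum(x ** 2))


print(f"SNR at a single microphone : {snr_db(mic_target[0], mic_interf[0]):5.1f} dB")

# Delay-and-sum toward the target (no steering delay needed at broadside):
# the target copies add coherently while the off-axis noise copies add
# incoherently, so this pre-mix combination alone buys roughly
# 10*log10(num_mics) dB before any compression or other processing runs.
beam_target = np.mean(mic_target, axis=0)
beam_interf = np.mean(mic_interf, axis=0)
print(f"SNR after delay-and-sum    : {snr_db(beam_target, beam_interf):5.1f} dB")
```

With four microphones and spatially spread noise, the coherent sum of the per-mic tracks gains roughly 6 dB of SNR in this toy before any single-channel processing ever touches the mixture.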

I’ve just skimmed, but I tend to agree that pre-processing is a good approach. I would include in this the use of auxiliary microphones that are closer to the speaker.


Actually, I think I may have misspoken. From Signia’s description, it processes the incoming signals independently and in parallel, and then remixes them on the back end. It seems that would mean “pre” processing, not “post”.


Agreed; I expect the entire Phonak Roger approach can be quite helpful for improving SNR.