Explainer: Analog vs. Digital vs. Widex PureSound vs. Marketing Fog

Widex recently released their new ‘Allure’ model, which comes with an enhanced ‘PureSound’ mode. At the same time, there is renewed discussion about analog vs. digital.

These things are related. Let me try to explain, and also cut through some of Widex’s marketing fog. Quick summary:

  • Widex’s PureSound mode operates like an analog hearing aid under digital supervision. That’s unique in the industry!
  • The big advantage is its super low signal delay, which is particularly beneficial with open fittings
  • PureSound also inherits the disadvantages of analog aids: no noise suppression and no feedback suppression
  • PureSound on Allure HAs seems to support front focus, vs. the omnidirectional-only mode of the previous generation HAs. I wonder whether that can be turned on and off.
  • Every other Widex mode (even the Music mode) operates like any other digital hearing aid. They might have tweaked things to sound more natural, but there is no magic sauce.

So what’s happening behind the scenes?

Let’s start by looking at sound waves - pretty much everybody has probably seen a representation like this:

This is what sound looks like in the ‘time domain’: it shows the amplitude as time passes.

Now there is a fundamental difference in how analog hearing aids process sound vs. how digital aids do. In technical terms:

  • Analog hearing aids keep sound in the ‘time domain’. They use analog filters (resistors and capacitors!) to break the signal up into several frequency ranges, and then have a separate analog amplifier for each frequency range.
  • Digital aids convert sound to the ‘frequency domain’, do amplification and all kinds of processing there, and then convert back to the ‘time domain’. More details below.

First, here is an overview of which features can be implemented in the ‘time domain’ (analog HAs) and which features require ‘frequency domain’ processing (digital HAs):

| Feature | Analog HAs | Widex PureSound | Digital HAs |
|---|---|---|---|
| Adjust Gain per Channel | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Linear Amplification | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Compression | :grey_question: | :white_check_mark: | :white_check_mark: |
| Beamforming / Directionality | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Feedback Suppression | :negative_squared_cross_mark: | :negative_squared_cross_mark: | :white_check_mark: |
| Background Noise Suppression | :negative_squared_cross_mark: | :negative_squared_cross_mark: | :white_check_mark: |
| Wind Noise Suppression | :negative_squared_cross_mark: | :negative_squared_cross_mark: | :white_check_mark: |
| Environment Classifier | :negative_squared_cross_mark: | :negative_squared_cross_mark: | :white_check_mark: |
| Frequency Shifting | :negative_squared_cross_mark: | :negative_squared_cross_mark: | :white_check_mark: |
| AI Processing | :negative_squared_cross_mark: | :negative_squared_cross_mark: | :white_check_mark: |
| Zero Delay Processing | :white_check_mark: | :white_check_mark: | :negative_squared_cross_mark: |

Now what’s behind converting the signal to the ‘frequency domain’, which is key to all advanced processing?

It’s done via some amazing math (called ‘Fourier Transform’) and the result is that the original sound wave gets broken up into its individual frequency components. The image below illustrates this:

  • On the very left are three individual sine waves with distinct frequencies - think of three individual keys pressed on a piano
  • The middle shows the combined signal that a listener would hear
  • The right shows what happens after conversion to the frequency domain!

You can see that in the frequency domain, all three individual sine waves are represented by a bar. Not only that, but the louder the original sine wave was, the taller the corresponding bar!

A digital HA can now play with these bars: make them bigger or shrink them according to prescribed gain settings. It can find characteristic patterns that typically represent noise and subtract those. And much more.

When done, the last piece of the puzzle falls into place. There is more math (called ‘Inverse Fourier Transform’) which allows the HAs to convert the signal back from those bars in the ‘frequency domain’ to the time domain.
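For the technically curious, here is a minimal sketch of that whole round trip in Python/NumPy (all numbers invented for illustration): three sine waves go in, the FFT turns them into ‘bars’, we scale the bars the way a prescribed gain curve would, and the inverse FFT turns them back into a sound wave.

```python
import numpy as np

fs = 16000                    # sample rate in Hz (illustrative)
t = np.arange(fs) / fs        # one second of samples

# "Three piano keys" at different loudness levels
x = (1.0  * np.sin(2 * np.pi * 200 * t)
   + 0.5  * np.sin(2 * np.pi * 500 * t)
   + 0.25 * np.sin(2 * np.pi * 1200 * t))

X = np.fft.rfft(x)                       # to the frequency domain
freqs = np.fft.rfftfreq(len(x), 1 / fs)  # frequency of each "bar"

# Play with the bars: boost everything above 1 kHz by 12 dB,
# the way a prescribed gain curve would be applied.
gain = np.where(freqs > 1000, 10 ** (12 / 20), 1.0)
y = np.fft.irfft(X * gain)               # back to the time domain
```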

So what’s the downside?

Delay! Analog processing has essentially zero delay, whereas all that Fourier machinery leads to at least ~5 ms of processing delay. Not because today’s hearing aid CPUs are slow, but because the Fourier math is not instantaneous: it needs to look at a certain time window of the incoming sound signal before it yields any conversion results.
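To make the buffering part concrete, here is the back-of-the-envelope arithmetic (window length and sample rate are my assumptions; real HAs use various window/overlap schemes):

```python
fs = 24000       # assumed sample rate in Hz
n_fft = 128      # assumed FFT analysis window in samples

# The transform can't run until a full window of samples has arrived:
buffering_delay_ms = n_fft / fs * 1000
print(f"{buffering_delay_ms:.1f} ms")   # ~5.3 ms, before any other delays
```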


Still, it’s immediately clear how this frequency domain stuff is beneficial for hearing aids: It’s super easy to apply the desired amplification to each frequency component when your signal is already in the frequency domain.

Let’s take a closer look at feedback suppression, which apparently Widex PureSound is not very good at.

That’s no surprise, because feedback suppression works by changing the output frequency slightly. If your incoming signal is at 2000 Hz and is at risk of causing feedback, then the HA will move the frequency bar (in the frequency domain) slightly to the right so that the signal is now at 2020 Hz. Now the output signal no longer has a 2000 Hz component, and the feedback loop is disrupted.
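Here is a toy sketch of that bin-shifting idea in NumPy. This is not how a real feedback canceller is implemented (real systems typically shift only a narrow band); the 20 Hz bin spacing is chosen just to make the numbers work out:

```python
import numpy as np

fs = 32000
n = fs // 20                 # 1600 samples -> 20 Hz per frequency bin
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 2000 * t)    # tone at risk of feeding back

X = np.fft.rfft(x)
X_shifted = np.zeros_like(X)
X_shifted[1:] = X[:-1]              # move every bar one bin (20 Hz) up

y = np.fft.irfft(X_shifted, n=n)    # output contains 2020 Hz, not 2000 Hz
```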

Of course changing the frequency is bad for music, but that’s the price we have to pay for no feedback.

Since you can only shift the frequency when operating in the frequency domain, analog hearing aids (and Widex’s PureSound mode) can’t do that.

So what does Widex do? In PureSound mode they simply lower the gain so that it’s below the feedback threshold. But that means you are likely not getting the amount of amplification you should be getting. Here is what this looks like for my audiogram:

For higher frequencies my target gain (dotted lines) is above the feedback threshold. PureSound simply reduces the gain to below that threshold.
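In pseudo-code terms, the PureSound-style trade-off boils down to a per-channel cap (all numbers below are made up for illustration):

```python
import numpy as np

target_gain_db = np.array([10, 15, 25, 35, 40])         # prescribed gain
feedback_threshold_db = np.array([40, 40, 30, 28, 25])  # per channel
margin_db = 3                                           # safety margin

applied_gain_db = np.minimum(target_gain_db,
                             feedback_threshold_db - margin_db)
print(applied_gain_db)  # [10 15 25 25 22] -> high channels under-amplified
```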

In fairness, that’s why Widex says PureSound is only for mild to moderate hearing loss.

Why did I mention ‘Marketing Fog’ in the title of this thread? Because I get the feeling that Widex is intentionally blurring the lines a bit.
Nothing they say is wrong. But when they talk about how great PureSound is (it is unique and they deserve credit!) and then tout improved feedback suppression without mentioning that it applies to every mode except PureSound, then that’s, well… that’s marketing fog to me…

… and I’ve seen that in other Widex threads where people lump PureSound and Widex’s Music program together. They are fundamentally different. Only PureSound has no delay.

One last comment:

It’s actually quite amazing that there is a mathematical method to take any signal and break it up into its original sine-wave frequency components. Even more amazing that Mr. Fourier laid the groundwork for this math as early as 1822!


I’d like to add some notes here.

  1. Analog hearing aids do have compression. Technologies such as Input Compression, Output Compression, and WDRC (Wide Dynamic Range Compression) have been in use since the 1990s.

  2. Some hearing aid brands in the William Demant Group, such as Bernafon and Sonic, do use time-domain signal processing. Bernafon called it “ChannelFree” technology back in the 2010s, and Sonic calls it “Speech Variable Processing” (SVP). Theodore H. Venema explains this in his book, Compression for Clinicians: A Compass for Hearing Aid Fittings (attached as ChannelFree.pdf, 266.1 KB). Let me quote here:

In contrast, the “channel-free” technology is said to operate in the “time domain” and does not use an FFT to separate incoming sound into separate frequency channels (Schaub, 2008). Instead, the wideband input is taken as it is, and adjustments in gain are made extremely rapidly over time (Plyler, Reber, Kovach, Galloway, & Humphrey, 2013). The desired output frequency response (based on the hearing loss, the fitting method, and any other selected option provided by the fitting software) is programmed into the channel-free hearing aid. A quantized value is assigned to each new input sample over time, in accordance with specific output demands that are placed upon it, so as to achieve the desired output frequency response. Each new sample thus quantized is added to all the other samples that have been previously quantized, in order to constantly update the entire output frequency response over time.

One can think of the channel-free hearing aid as an equalizer operating over time (like that shown in Figure 8-3), where the buttons adjust sound over three dimensions: amplitude, frequency, and sharpness. It enables frequency response shaping by updating the frequency response very rapidly over tiny, serial units of time. This technology would thus enable a “holistic” sculpting of the output frequency response without an apparent channel division.

An advantage for digital hearing aids operating in the time domain is that they present with comparatively very little processing time delay; thus, the gain added to the input produces a minimum of distortion to the output temporal waveform envelope. Furthermore, with channel-free processing, there is less spectral distortion that can occur between adjacent channels (Plyler et al., 2013).

Both channel-free and the more typical multichannel fast-acting WDRC (see Chapter 7) digital hearing aids have been compared for subjective preference and also for objective speech recognition. For subjects with no previous hearing aid experience, Plyler et al. (2013) found no significant differences in subject performance with channel-free versus a seven-channel WDRC hearing aid, on the Hearing in Noise Test (HINT) and the Abbreviated Profile of Hearing Aid Benefit (APHAB). Interestingly, individual subjects did have definite preferences for either the channel-free or for the seven-channel WDRC hearing aid. Plyler, Hedrick, Rinehart, and Tripp (2015) compared performance among experienced hearing aid wearers with channel-free versus the same seven-channel WDRC hearing aid. No statistical difference in consonant recognition in quiet and in noisy listening conditions was found between the two methods of DSP. The investigators also found no significant difference in subjective sound quality preference between the two DSP schemes. A third finding of their study was that previous experience wearing hearing aids did not seem to play a part in the objective performance or subjective preference findings.

In summary, there are always trade-offs to be considered when engineering new digital hearing aids. Again, it behooves clinicians to listen to digital products with their own ears and compare sound quality among digital hearing aids before automatically adhering to the claims of the manufacturers who build them. The high-end digital hearing aid from any one specific manufacturer may not necessarily offer the best sound quality. Sometimes, the simpler products, with fewer frequency bands and fewer bells and whistles, in fact can sound quite good! Usually, good old straight linear gain (when not distorted by peak clipping) can also sound quite clean and clear.


Thanks - that’s interesting technology from before I started paying attention to HAs.

I did put a question mark in my overview table regarding compression with analog HAs. I suspected that it would be possible to do, but probably not with the same accuracy that today’s HAs can achieve.

Analog beamforming is another feature I’m not 100% sure of. It’s clearly possible in the time domain, but can it be done with purely analog technology? It might require at least a CPU for A/D conversion and for figuring out the signal delay between the two HA microphones.


Based on my experience with DSP as applied to control and communications engineering, our preferred approach is normally time-domain processing, which will almost always have lower latency than FFT-iFFT processing. In fact, I had assumed most HAs were doing just that, until I saw data on the amount of delay they incur. PureSound must be doing time-domain processing.

Also please note that ALL filters induce some delay. Analog, digital, audio, RF, optical… doesn’t matter, this is basic physics. But different implementations may add excess delay, for instance the whole A/D, FFT, iFFT, D/A pipeline.

Fully arbitrary FIR filters can implement any imaginable amplitude and phase response, at the cost of some added delay that grows with filter complexity. So that’s an obvious way to create a “channel-free” equalization. There are others…
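For instance, SciPy’s firwin2 will happily build a linear-phase FIR from an arbitrary desired magnitude response, with a fixed delay of (numtaps - 1)/2 samples. A quick sketch (all values illustrative, not HA-realistic):

```python
import numpy as np
from scipy.signal import firwin2

fs = 16000
numtaps = 101   # fixed delay = (numtaps - 1) / 2 = 50 samples ~ 3.1 ms

freq = [0, 500, 1000, 2000, 4000, 8000]   # Hz; must span 0 .. fs/2
gain = [1.0, 1.0, 1.5, 2.0, 3.0, 3.0]     # desired linear gain per point

taps = firwin2(numtaps, freq, gain, fs=fs)
# Apply with scipy.signal.lfilter(taps, 1.0, x) or np.convolve(x, taps).
```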

Compression (with frequency dependence) has been implemented in audio companders at least since the 1960s, e.g. dbx or Dolby. It’s not hard to write a compressor that works in the time domain. Again, PureSound is likely doing that.
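Here’s a bare-bones time-domain compressor to show what I mean: an envelope follower driving a sample-by-sample gain, no FFT anywhere (constants are illustrative, not tuned for hearing aids):

```python
import numpy as np

def compress(x, fs, threshold_db=-30.0, ratio=3.0,
             attack_ms=5.0, release_ms=50.0):
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000))
    env = 0.0
    y = np.empty_like(x)
    for i, s in enumerate(x):
        a = a_att if abs(s) > env else a_rel
        env = a * env + (1 - a) * abs(s)          # envelope follower
        level_db = 20 * np.log10(max(env, 1e-9))
        over_db = max(0.0, level_db - threshold_db)
        y[i] = s * 10 ** (-over_db * (1 - 1 / ratio) / 20)  # reduce gain
    return y
```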

As for beamforming, it relies on the fixed spatial relationship between the mics, which translates into a fixed time delay. A delay (allpass) filter can be implemented to allow summation of two or more signals with fixed delays, resulting in a wide variety of tunable patterns such as cardioid. Once again, that can be done in either the time or frequency domain.
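A sketch with two mics and a crude integer-sample delay (real devices use fractional-delay allpass filters; the mic spacing and speed of sound here are assumptions):

```python
import numpy as np

def cardioid(front, back, fs, spacing_m=0.012, c=343.0):
    """Delay the rear mic by the acoustic travel time, then subtract."""
    n = int(round(spacing_m / c * fs))   # delay in whole samples
    delayed_back = np.concatenate([np.zeros(n), back[:len(back) - n]])
    return front - delayed_back          # null toward the rear -> cardioid
```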

I continue to be impressed and amazed with the advances in DSP in our HAs. Lots of PhDs are hard at work worldwide on this. Lucky for us!


Thanks @Gary_NA6O for the additional information!

When it comes to noise reduction in HAs, I read a paper saying there was no satisfactory method to do it in the time domain. And once you are forced to go to the frequency domain (and incur the associated delay), you might as well do everything there. It’s much easier.
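For reference, the textbook frequency-domain approach is ‘spectral subtraction’. A minimal per-frame sketch (framing and overlap-add omitted; noise_mag would come from averaging the spectra of frames known to contain only noise):

```python
import numpy as np

def denoise_frame(frame, noise_mag, floor=0.05):
    X = np.fft.rfft(frame)
    mag, phase = np.abs(X), np.angle(X)
    cleaned = np.maximum(mag - noise_mag, floor * mag)  # spectral floor
    return np.fft.irfft(cleaned * np.exp(1j * phase), n=len(frame))
```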

I agree that engineers have been hard at work and that’s great.

However, what I initially didn’t realize is that hearing aid chips are far less powerful than those in state-of-the-art consumer earbuds.

The reason is that we expect HAs to last a full day vs. 4-6 hours for earbuds. HAs are also much smaller, meaning they have batteries with less capacity. The net result is that earbuds are allowed to be roughly 10x as power-hungry as HAs. To achieve their low power consumption, HAs can only use less powerful chips.
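Here is the rough arithmetic behind that ~10x figure, with made-up but plausible battery sizes:

```python
ha_battery_mwh = 25 * 3.7       # assumed ~25 mAh rechargeable HA cell
bud_battery_mwh = 50 * 3.7      # assumed ~50 mAh earbud cell

ha_avg_mw = ha_battery_mwh / 18     # stretched over an 18-hour day
bud_avg_mw = bud_battery_mwh / 5    # burned in a ~5-hour session

print(ha_avg_mw, bud_avg_mw)        # ~5 mW vs. ~37 mW: roughly 7-10x
```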