Bringing back analog hearing aids

I am the original poster on this, and looking over the replies I’m noticing two consistent issues here. One is that I’m seeing pushback on the very idea of reintroducing analog hearing aids, as though these were to be forced upon every user. How would that even happen? The only real question at issue is whether the industry will allow room for anything other than digital hegemony. At the moment, the answer is no, and I’ve argued elsewhere that this was not the result of market choices.

The second issue is what Scott Alexander has called isolated demands for rigor – i.e., any claims for analog are being subjected to a degree of scrutiny that claims for digital avoid, evidently because digital already owns the market. Again, there is no reason why the industry as a whole has to choose one way or the other. I would like it if patients had the chance to choose again (myself included).

Agree that if you want to give up digital benefits for a perception of better sound, you should be able to do that. But, misleading statements and exaggerated claims should be challenged.

1 Like

I am not pushing back on the idea. I am, however, trying to point out that digital hearing aids for the most part are not inferior. They are better. If I were an engineer in the hearing aid development lab, and someone came to me and wanted to force me back into analog, I would just go to another company to work. No time for going backwards in technology. However, if someone came to me and could actually show me that analog hearing aids have some measurable benefits, and asked me to improve the digital to match or beat the analogs, that would be a great challenge and I would welcome it.

Saying that digital hearing aids lose sound, or that there are holes in the sound produced by modern state-of-the-art digital hearing aids, just does not make any technical sense.

Not really. Digital features were researched for benefit as they came out. If you dig into pubmed, you’ll find lots of stuff on WDRC, prescriptive targets, directional microphones, etc. The move from analogue to digital was primarily driven by increased benefit, which is the main goal of both audiologists and researchers in the area.

As for

Unfortunately, as hearing loss progresses it cannot. It is certainly the case that the normal-hearing brain detects sub-millisecond latencies. It is also the case that hearing loss goes hand in hand with a loss of both temporal and spectral acuity. If you’re just engaged in engineering without any consideration for neuroscience, you’re missing a big piece.

4 Likes

Sierra - the argument is not that digital is necessarily inferior, but that it is inferior for certain individuals with certain loss profiles. The measurable benefit for someone like me would be this: I have profound, congenital, and very linear hearing loss, with no dead zones. Even the most linearly programmed digital aids feature too much compression for me. Almost no digital aids have enough headroom, which results in clipping, particularly when I listen to music. Moreover, I can hear the difference in sound quality in terms of tone, timbre, resonance, decay, etc. There is an emotional connection to the world around me that is severed with digital. And rest assured this is not the result of any prior bias regarding digital aids.

The best analog aids, of the kind that were readily available in the 90s and first decade of the 21st century, didn’t have this problem for me and for many other users. I’m not sure how I can respond if you say the science claims there’s no detectable loss in digital sampling when my phenomenal experience says otherwise. And given the relatively low bit depths and sampling rates (chosen around the levels of human speech, rather than the much wider dynamic range of the larger world around us), why should my experience be surprising?

I am not at all insisting that no digital aid could ever prove satisfying to folks like me – I know that I would welcome it, so feel free to take that challenge.

Neville - I’ve dug into the publications, and while I remain impressed at the ingenuity of the designers, I’m less impressed by the results of the trials, which strike me as too ambiguous to have justified the total elimination of analog devices. This move (as opposed to just introducing digital alongside analogs) may have been a matter of increased benefit, but the benefits did not necessarily accrue to the patients.

And while your claim about latency detection may be true for progressive loss, what about static loss? My loss has not changed since I was very young, and I frequently notice the latency problem with digital. I agree about the importance of neuroscience here, but everything I’ve read points to the fact that the brain can consistently respond to distinctions in timing and frequency even when the ears cannot.

Here is an example of a digital hearing aid NAL-NL2 prescription curve for a fairly uniform hearing loss across the frequency spectrum. Note that there are three curves for the gain to be applied. The top curve (lighter red or blue) is the gain for soft sounds, the middle, bolder curve is for normal-level sounds, and the bottom curve is for loud sounds. This prescription has a fair amount of compression, which is measured by the difference between the top curve and the bottom curve. The reason for this is that, for most losses, soft sounds are lost more than loud sounds, and the compression is an attempt to correct that.

Now look at this prescription curve for the same loss and same digital hearing aid but using the 1/3 gain formula. Note that there is no upper curve or lower curve, just one curve. You can’t get more linear than that. There is ZERO COMPRESSION! Most digital hearing aids have this option. You just have to ask for it. Most don’t like it, however, as loud sounds can be too loud and soft sounds can be lost. But it is there and can be used if that is what you want.
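The difference between the two fittings can be put in numbers. This is only a sketch: the knee point, compression ratio, and gain values below are invented for illustration, and only the shape – more gain for soft sounds and less for loud under WDRC, versus one flat gain under the 1/3 rule – reflects the curves described above.

```python
# Hypothetical comparison of a compressive (WDRC) and a linear (1/3-gain)
# fitting at one frequency. All numbers are made up for illustration.

def wdrc_gain(input_db, gain_at_65=30.0, knee_db=45.0, ratio=2.0):
    """Gain that shrinks as input level rises (2:1 compression above the knee)."""
    if input_db <= knee_db:
        # Linear below the compression knee: gain is held at its knee-point value
        return gain_at_65 + (65.0 - knee_db) * (1.0 - 1.0 / ratio)
    # Above the knee, each extra dB of input adds only 1/ratio dB of output
    return gain_at_65 + (65.0 - input_db) * (1.0 - 1.0 / ratio)

def third_gain(hearing_loss_db=60.0):
    """The 1/3-gain rule: gain is one third of the loss, regardless of input level."""
    return hearing_loss_db / 3.0

for level, label in [(50, "soft"), (65, "medium"), (80, "loud")]:
    print(f"{label:6s} {level} dB in -> WDRC gain {wdrc_gain(level):5.1f} dB, "
          f"linear gain {third_gain():5.1f} dB")
```

Running this shows the WDRC gain falling from soft to loud inputs while the 1/3-gain figure stays fixed – the single-curve prescription in the plot.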

I think we’ve thrashed the headroom issue to death, but if you want 116 dB of headroom, have a look at the ReSound Quattro; I believe that is how much they claim. Another trick is simply to put tape over the hearing aid mics to bring the live dB level within the range of the hearing aid. But keep in mind there is no point in giving the analog microphones more dB than they can handle, which is probably no more than 120 dB, only marginally above 116 dB.

1 Like

I have only been loosely following the discussion of analog vs. digital, but it seems to me that looking forward to enjoying 120 dB sounds is aspiring to further hearing loss – like we all loved super-loud rock concerts when we were young 'uns and now we’re paying for it (or have already paid for it!).

It seems like every few months another thread pops up bemoaning the loss of analog hearing aids. I think the likelihood of anybody changing their minds is nil. To flip it around: digital fans, what would it take for you to think trying an analog hearing aid was worthwhile? And analog fans, what would it take for you to consider trying digital again?

1 Like

In defense of the analog folk, I suspect normal sounds in life are louder than we think. I see from my Smart Direct app that the peak was 104 dB today. Yesterday it was 105. I did nothing out of the ordinary. Listening to live acoustic music like a piano or guitar is probably higher than that.

Frequency response in older analog was pretty narrow…

I think 3.7 kHz or so before a dramatic roll-off. What is it these days?

All it would take for me to try an analog “hearing aid” would be if it could:

  1. Compress a full range of sound into the dynamic range I have left, and make it sound normal,
  2. Have separate gain settings for soft sounds, medium sounds, and loud sounds,
  3. Move sounds from higher frequencies where I have profound loss and dead spots down into lower frequencies where I can hear better, making it sound normal, and making music sound good,
  4. Have noise reduction that almost shuts off machines and HVAC equipment, allowing me to hear speech in noise,
  5. Have directional settings that allow me to focus in on the nearby voices I want to hear, reducing the voices I don’t want to hear,
  6. Have accessory devices that let me take phone calls from multiple phones and stream music and TV directly to my hearing aids, increasing my comprehension.

That’s all it would take.

4 Likes

Right. For example, it might just be the snare drum that’s over the line, but then you’re still experiencing clipping every fourth bar on a normal radio song.

Good question! For what it’s worth, I have no desire to push analog devices on anyone who is happy with their digitals, nor do I mind the existence of digitals at all. But those of us who prefer analog, we’ve been out of luck for some time now. Speaking personally, if I had a pair of digital aids that sounded like the best pair of analogs that were readily available in the 90s and first decade of this century (in my case, those were Starkey ITEs), that didn’t produce clipping or sound overly compressed and could produce the fullness of musical sound, I would be satisfied.

An important caveat is that the “rightness” of digital aids would have to be sustained over a long period of time, without resulting in fatigue (at best) or headaches (at worst). I don’t, for what it’s worth, think this is an impossible feat; I just think we are not close to that point given where the current tech is for hearing aids. Hence my interest in just going back to the tech that was available even 10 years ago. But again, I would just like to see it made available for those of us who want it. I’m not trying to start a revolution here.

Actually they do,

at least from what I understand.

It is received as analog, converted to digital, enhanced, compressed, or attenuated in specific bands or channels, reassembled, then converted back to analog.
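A minimal sketch of that chain, under invented assumptions: the sampling rate, bit depth, and single flat gain below are hypothetical stand-ins, and a real aid applies different gains per band through a filter bank rather than one gain to everything.

```python
import math

# Toy sketch of the digital hearing aid chain described above:
# sample the analog signal (ADC), process in the digital domain, convert back (DAC).

FS = 16000          # assumed sampling rate in Hz
BITS = 16           # assumed quantization depth
FULL_SCALE = 2 ** (BITS - 1) - 1

def adc(analog_samples):
    """Quantize each normalized sample to a 16-bit integer code."""
    return [round(x * FULL_SCALE) for x in analog_samples]

def process(codes, gain_db=20.0):
    """Apply a flat gain; a real DSP would do this per band or channel."""
    g = 10 ** (gain_db / 20)
    return [min(FULL_SCALE, max(-FULL_SCALE, round(c * g))) for c in codes]

def dac(codes):
    """Back to the analog domain (normalized floats standing in for voltage)."""
    return [c / FULL_SCALE for c in codes]

# A quiet 500 Hz tone in, an amplified tone out
analog_in = [0.05 * math.sin(2 * math.pi * 500 * n / FS) for n in range(160)]
analog_out = dac(process(adc(analog_in)))
print(max(analog_out))  # ≈ 0.5: the 0.05 peak after 20 dB (×10) of gain
```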

Maybe you could check out Smart_Analog and see what he has.

Simple question. How is your hearing today, compared to what it was in the 90’s?

Fair question. My hearing has remained constant since the mid-90s or so. I have very severe congenital loss, but it’s quite linear – e.g., with sufficient gain I can hear and distinguish every note of a piano.

Also, I have used analogs within the past decade, so I’m not comparing to way back here. It’s just that it’s now become very difficult to get fitted with analogs. Because of the severity of my loss, I prefer to work with an audiologist when possible.

I guess if you’re disparaging the ADC/DAC process itself by saying “chopping things up”, that would be accurate. I thought you were referring to splitting up the signal into bands, which both analog and digital HA’s do.

But I haven’t seen any evidence that people sense a difference from the digitization itself. I think it has more to do with the kind of filtering done in the DSP than with the digitization. My understanding is that analog filtering on the input and the output can remove all the artifacts of digitization, and the result won’t be any different from the analog filtering on its own. (Assuming, for the sake of argument, that you’re just piping the ADC straight into the DAC.)

Sorry, let me clarify my initial statement. In general, the worse your (sensorineural) hearing loss is, the worse your spectral and temporal acuity is (with some sort of statistical curve of normal variation). I didn’t mean “progressive loss” as you might see in an individual, I just meant the worse the hearing loss is, the greater effects you can expect. The brain cannot respond to timing or frequency information that isn’t there, isn’t coded in the first place, and when the ear becomes damaged you lose some of that at the level of the ear, and then you lose some as the upstream neurons that were previously innervated by the now damaged cells either atrophy or become subsumed by other functions.

Now, that being said, early-aided congenital loss may wire the brain up differently. From a mechanistic perspective, I wouldn’t expect this to perfectly overcome the basic functional limitations, but you might expect something further up to work differently than in someone with the same loss who developed through their critical period with normal hearing. There’s some good digging-in to be done there, but it’s way too late for me to be up and I’d need to come back to it (I’m sleepy enough that maybe none of this will mean anything when I look at it again). Congenital hearing loss patients certainly seem to perform better than my adult-onset loss patients with similar degrees of loss, but maybe that’s actually because, on average, people who’ve been in hearing aids since childhood wear them significantly louder.

No disparaging of anything…

but the processing delay is evident, the patient just doesn’t understand it.

People most often describe it as an “echo” in their own voice, which is attributed to lower frequencies vibrating through the bones of the skull and stimulating the ear immediately, then, 8 to 15 ms later, the signal coming through the aid.

The other place where the processing delay is problematic, especially with open-fit RICs, is in highly reverberant environments like churches, or in highly complex noisy places like restaurants.

A healthy brain can sort this out with time and practice, but those who have needed hearing aids for 8 to 15 years before taking action have a much harder time attaining the hoped for result because of the loss of acuity in the hearing system.
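The own-voice “echo” can be put in rough numbers. When the immediate bone-conducted sound mixes with a copy delayed by the aid’s processing, the two first cancel at f = 1/(2·delay) – standard comb-filter behavior for two equal-level copies of a signal. The function name is mine; the 8-15 ms range is from the post above.

```python
# Where two equal copies of a sound, one delayed, first cancel each other.
# This is the textbook comb-filter notch frequency; delays are in milliseconds.

def first_notch_hz(delay_ms):
    """First cancellation frequency for a signal mixed with a delayed copy."""
    return 1000.0 / (2.0 * delay_ms)

for d in (1, 5, 8, 15):
    print(f"{d:2d} ms delay -> first cancellation near {first_notch_hz(d):6.1f} Hz")
```

At 8-15 ms the first notch lands in the 30-60 Hz region, with further notches up through the low frequencies that dominate one’s own voice through the skull – consistent with the boomy, echoey own-voice complaint.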

1 Like

OK, well, the delay then. To be pedantic, it’s not about being chopped up as much as delayed.

Own voice is an issue because you’re generating the sound. Singing, playing an instrument, sometimes speaking get dicey when the latency goes beyond 5-7ms, which it does with digital HA’s.

With closed domes, external sound latency wouldn’t be an issue because there’s no intimate referent. Lip sync latency doesn’t become a problem until about 45ms according to the ATSC, but that could be exceeded by certain kinds of signal processing.

So yes, there may be latency issues. Most people adapt to them.

As a testimonial, using my Marvel M70-R’s with moderate processing on top of the AirStream protocol from the TV Connector, I can’t detect any lip sync issue with my TV viewing.