16 vs 24 bit converters and HA audio quality

I like this third paper/link (written by Widex people) that you shared a lot, because it addresses the dynamic range issue specific to HAs. But it’s interesting to note that they talk about shifting the operating range of their 16-bit A/D converter rather than using the obvious solution of just going to a 24-bit A/D converter. That’s probably due to their platform/design limitations: they’re stuck finding workarounds for the 16-bit design. Other HAs like the Oticon OPN have 24-bit processing and can handle 114 dB SPL on the input side without any workaround like operating-range shifting.
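The range-shifting idea can be illustrated with a toy sketch (my own simplification, not Widex’s actual algorithm): a fixed ~96 dB window, which is roughly what 16 bits can span, slides upward when the input gets loud so that peaks still fit.

```python
def shifted_window(input_peak_db, window_db=96.0, max_spl_db=113.0):
    """Slide a fixed ~96 dB (16-bit) capture window upward so that
    loud peaks still fit; the window top tracks the input peak.
    All parameter values are illustrative, not Widex's real numbers."""
    top = min(max(float(input_peak_db), window_db), max_spl_db)
    return (top - window_db, top)  # (floor, ceiling) in dB SPL

print(shifted_window(70))   # quiet scene: window covers roughly 0-96 dB SPL
print(shifted_window(110))  # live music: window shifts up to cover the peak
```

The trade-off is visible in the return value: when the window shifts up to catch a 110 dB peak, the floor rises too, so very soft sounds fall below it.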

But again, I still don’t see any conflict between all 3 papers. This Widex paper focuses on the dynamic range advantage of avoiding distortion at the INPUT stage of the HAs due to very loud sounds, like in a live music environment. That’s just another angle and another advantage of having a 24-bit A/D converter and processing. But that’s an entirely different issue from saying that 16-bit processing is not good enough for music compared to 24-bit and that humans can tell the difference.

What I’m trying to say is that the headroom issues a 24-bit system solves (higher noise floor, distortion at high input levels, degraded speech-in-noise due to distortion, etc.), while real, do not give 16-bit processed sound a discernibly lower quality than 24-bit processed sound to a normal listener. For most normal hearing situations, they’ll sound the same, quality-wise. If you play music from your sound system or watch TV at a reasonable listening level, I don’t think you can tell the difference between 16-bit and 24-bit HAs.

Now if you go to a concert or listen to live music, then 24-bit HAs will sound better for sure, because they can take in the higher dynamic range at the input to handle the louder volumes and attacks of live instruments without distortion or input dynamic compression. But even the Widex 16-bit system discussed in the 3rd paper can manage loud inputs after it shifts its operating range. So if you wear that 16-bit Widex HA at a live music concert, you can probably enjoy the live music pretty well, too, even though it’s only 16 bits. It just goes to show that headroom/dynamic range and sound quality, while interrelated, are different attributes and shouldn’t be combined and generalized together in a definitive way.

Actually, I pretty much agree. 16 bit should be adequate, and I also have my suspicions that 24 bit may be as much of a marketing point as an engineering essential. There are likely other areas that are more important to good quality sound out of HA’s.

I am puzzled though by the number of posters who are convinced that their digital HA’s aren’t very good at music. Is there some specific weak link, or have they just not spent enough time on this? From the technology, it seems like that should be easy with digital.

My own experience so far has been great for speech, less than satisfactory for music, but as I said below, I’m still hopeful! (They’re 16-bit devices, but I doubt that’s significant!)

I don’t have a good enough understanding to explain it adequately, but I think it has to do with compression. Soft sounds are amplified much more than loud sounds, and the amplification varies depending on the frequency. That’s beneficial for understanding speech, but not for enjoying music. A good set of headphones is likely better. There are others who can do a much better job of explaining.

I think you hit the nail on the head there, MDB. As mentioned in the Widex paper (the 3rd link that Bob shared), dynamic compression at the input of a 16-bit system is an easy/cheap way to prevent clipping/distortion when subjected to loud sound. It’s kind of like somebody holding the input volume knob: if the sound gets too loud, he turns the knob down to avoid distortion, and when the sound gets softer, he turns it back up. If you listen to very expressive music that changes volume frequently and quickly, the effect of cranking the volume knob down and up and down and up starts making the music sound funny and unnatural. That’s what they call the pumping effect. And dynamic compression may very well be set differently at different frequencies as well, making it sound even weirder.

For speech it’s not as bad, because speech tends to be more monotonous and doesn’t fluctuate as much in volume. But if you add loud noise to speech, then it starts to get worse, because the dynamic compression meant to cut down the noise level suppresses the volume of the speech as well.
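For anyone curious what “turning the volume knob down and up” looks like numerically, here’s a minimal sketch of a downward compressor with attack/release smoothing. The threshold, ratio, and smoothing coefficients are made-up illustrative values, not any real HA’s settings.

```python
def compressor_gain_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static gain (in dB) of a simple downward compressor:
    above threshold, output rises only 1 dB per `ratio` dB of input."""
    if level_db <= threshold_db:
        return 0.0
    compressed = threshold_db + (level_db - threshold_db) / ratio
    return compressed - level_db  # negative value = gain reduction

def smoothed(gain_targets, attack=0.8, release=0.2):
    """One-pole smoothing: gain drops fast on loud sound (attack) and
    creeps back up slowly (release) -- the audible 'pumping'."""
    g, out = 0.0, []
    for target in gain_targets:
        coeff = attack if target < g else release
        g += coeff * (target - g)
        out.append(round(g, 2))
    return out

# Bursty music: quiet / loud / quiet / loud (levels in dB, arbitrary ref)
levels = [-30, -5, -30, -5, -30]
print(smoothed([compressor_gain_db(lv) for lv in levels]))
# the gain bounces down on every loud hit and back up on every quiet gap
```

With steady speech the level sits near one value, so the gain barely moves; with dynamic music the gain never settles, which is exactly the pumping described above.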

Absolutely. Analog aids merely amplified, thus, no weird artifacts from compression and fancy software algorithm manipulations. Of course, analog aids just plain clipped and distorted when you hit their limits, but until that point, they did music better than digital.


Is there some technical reason for the poor reputation digitals seem to have for music? It seems like it should be simple to make them at least as good. Or is it just lack of acceptance? Some people swear by their old analog stereo, even though double-blind tests don’t seem to support their claims!

Not necessarily technical, but HAs are geared toward speech and not music, as that is what the developers think we need.

My new Widex Unique music mode is pretty good. It shuts off most (not all) processing and turns off compression filters. It can be done; it just hasn’t been done very well to date.

I think good music reproduction is usually synonymous with being able to reproduce good bass, which is a critical (but not the only) attribute for a lot of contemporary music genres. The HA receiver just doesn’t have the physical construction to enable this. There’s only one tiny receiver that has to cover the whole frequency range, which is no match for a speaker system with separate woofers, midrange drivers, and tweeters.

The other aspect is what we’ve discussed earlier in the thread, the lack of sufficient dynamic range on the input mic, as well as the A/D converter, to handle the loudness of musical instruments live. Digital tricks used to avoid distortions like dynamic compression also cause pumping which makes music more unnatural at times.

Then there’s also the limitation of the HA mics themselves. Due to their tiny construction, I’m sure they’re no match for professional mics that are 100 times bigger and built with much better pickup elements.

One thing that is essential in HAs is feedback control. One way to minimize feedback is to run the feedback test across all the frequency channels and reduce the gain at the resonant frequencies that cause feedback. First, the obvious trade-off is the reduction in headroom in the frequency bands where gain reduction is necessary for feedback control. Second, the effect is worse for HAs with fewer frequency bands (like 8 or 16), because the gain reduction may be more widespread than it needs to be. HAs with more frequency bands, like 24 or 48 channels, suffer less headroom reduction because the gain cut can be fine-tuned to a much narrower bandwidth.
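A toy calculation shows why more bands confine the damage. Assuming uniformly spaced bands over an 8 kHz bandwidth (a simplification; real aids often use warped/log spacing) and a hypothetical feedback whistle at 3.1 kHz:

```python
def cut_span_hz(feedback_hz, total_bw_hz=8000, n_bands=8):
    """Return the (low, high) edges of the band a feedback frequency
    falls in, assuming uniform band spacing -- a simplification."""
    width = total_bw_hz / n_bands
    lo = (feedback_hz // width) * width
    return lo, lo + width

# Hypothetical feedback whistle at 3.1 kHz:
for n in (8, 16, 48):
    lo, hi = cut_span_hz(3100, n_bands=n)
    print(f"{n:>2} bands: gain cut spans {lo:.0f}-{hi:.0f} Hz")
```

With 8 bands the gain cut sacrifices a full 1000 Hz of headroom; with 48 bands the same feedback can be tamed by cutting a band under 200 Hz wide.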

Third, like JustEd was saying, if the HA is running a program geared toward speech, then features meant to promote speech (noise reduction, directional beamforming, boosting the gain of the frequency bands where speech resides, or reducing reverb and echoes in an open room) can ruin the quality of the music. That’s why most HAs have a music program, which essentially removes all of the speech-oriented processing so that you can hear the music unadulterated. But some modern HAs now have automatic listening-environment detection and automatic program transition, which may not be smart enough to tell what is music and what is just a lot of noise, so there’s a good chance the wrong (non-music) program gets selected while music is playing.

Listening to music in a noisy environment (like while driving a car) is another challenge. How would the HA be smart enough to know what is music and what is noise? If you switch to a music program, you’ll have to deal with the noise, because music programs most likely don’t have noise reduction. If you switch to a car-noise program, its noise reduction will stifle the music.

But direct streaming is one area where you can eliminate a lot of the limiting variables: mic construction/sensitivity/dynamic range, digital processing geared toward speech, management of noise, etc. That’s why direct streaming should sound pretty darn good if you have good source content and quality, and the right expectations about the limitations of the tiny little receiver that’s tasked with reproducing the sound in your ear.

Somewhere on the forum somebody gave an excellent explanation. The problem is dynamic range. With normal hearing, one has the equivalent of a blimp’s worth of dynamic range, from very quiet sounds to very loud (perhaps 100-120 dB of range). For the hearing impaired, it’s crammed into a bicycle inner tube: the dynamic range might only be 40 dB or less (80-120 dB).
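That squeeze can be sketched as a simple linear map (a deliberate oversimplification; real fitting formulas are level- and frequency-dependent):

```python
def fit_to_residual_db(level_db, in_lo=0.0, in_hi=120.0,
                       out_lo=80.0, out_hi=120.0):
    """Linearly squeeze a ~120 dB normal hearing range into a ~40 dB
    residual range (the blimp into the bicycle inner tube).
    The range endpoints are illustrative figures from the post above."""
    frac = (level_db - in_lo) / (in_hi - in_lo)
    return out_lo + frac * (out_hi - out_lo)

print(fit_to_residual_db(30))    # a quiet 30 dB sound lands at 90 dB
print(fit_to_residual_db(120))   # the loudest sound stays at 120 dB
```

The 120-to-40 dB squeeze amounts to a 3:1 compression ratio across the board, which is why soft and loud passages of music end up much closer together than the performer intended.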


Isn’t the sample rate, like 44.1, more important than 24 bits? What is the sample rate of HAs? Are they CD quality?

Sample rate determines the frequency response. The theoretical maximum frequency a system can handle is half the sample rate. So a sample rate of 44.1 kHz (or more correctly, kilosamples/sec) can give you a frequency response up to 22.05 kHz.

Bit depth – the number of bits in each single sample – determines the dynamic range of the system, i.e. the range of levels it can handle.
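Putting those two rules together in a few lines (the 6.02 × N + 1.76 dB figure is the standard ideal-converter rule of thumb, ignoring real-world converter noise):

```python
def nyquist_hz(sample_rate_hz):
    """Highest frequency a sampled system can represent: half the rate."""
    return sample_rate_hz / 2

def ideal_dynamic_range_db(bits):
    """Theoretical dynamic range of an ideal N-bit converter,
    using the common 6.02*N + 1.76 dB rule of thumb."""
    return 6.02 * bits + 1.76

print(nyquist_hz(44_100))          # 22050.0 Hz -- the CD frequency ceiling
print(ideal_dynamic_range_db(16))  # ~98 dB for 16-bit (CD)
print(ideal_dynamic_range_db(24))  # ~146 dB for 24-bit
```

So each extra bit buys about 6 dB of dynamic range, while a higher sample rate buys bandwidth; the two specs solve different problems.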

But for hearing aids, frequency response isn’t an issue, since the audio bandwidth is so restricted. I think the highest frequency any of the usual aids can handle is 10 kHz; maybe that’s the Oticon OPN?

So the answer to your question is no, no hearing aid system has “CD quality” frequency response (and it’s not necessary to use power and processing resources to try to get there).

But I’m kind of with the gone-but-not-forgotten Doc Jake on these issues. We are dealing with technically very challenging systems; someone else has mentioned the teensy-tiny transducers here, to say nothing of the awkward placement of the microphones, at least.

As well, the last link in the chain – your hearing – is usually dramatically compromised by sensorineural damage, in terms of both frequency response and signal-to-noise ratio.

Congratulations to all of you for these threads. As an audio aficionado, now a little deaf, I find them very, very interesting.

jpeinado

Would the BOSE 700 Headsets (with Noise canceling) work better at concerts than any hearing aid on the market?

Headphones aren’t hearing aids.

I have been wondering about HA direct streaming and audio bitrate/compression. I’m guessing it’s low bitrate compared to Bluetooth headphones? Is direct streaming to Android better than to iPhone? I have a cool idea for Bluetooth 5.2: streaming to HAs and headphones at the same time so you get bass, though perhaps still compressed audio.

HAs say they have over 100 dB headroom?

I use Oticon OPN S 1 aids. The Oticon OPN are MFi aids, so they use the Apple-defined protocols for communicating with the aids. I haven’t found a good description of the MFi hearing aid protocols, bit rates, etc., so I have no idea how they compare to Classic or LE Bluetooth, or to Android. However, all Bluetooth audio uses some codec, because Bluetooth doesn’t have the bandwidth to transmit raw CD-quality stereo audio.

One way of discerning some streaming performance is to look at the specs for a TV adapter. The technical data sheet for the Oticon TV Adapter 3 for my aids specifies an audio bandwidth of 10 kHz stereo from input to hearing aids. It also specifies a latency (TV adapter input to hearing aid speaker) of 25 ms for analog input, 28 ms for digital (TOSLINK), and 45 ms for Dolby Digital (TOSLINK).

For the hearing aids themselves, the input section can handle up to 113 dB SPL without distortion and artifacts, according to the OPN S product guide. There is also a statement that the processing works down to 5 dB SPL, implying a range of over 100 dB. The same document states that the A/D uses 24-bit sampling for each microphone and the auxiliary input. It also uses a 24-bit DSP. The fitting bandwidth is specified as 10 kHz for the S 1 and 8 kHz for the S 2 and S 3 versions.
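A quick back-of-the-envelope check on those numbers, using the ~6 dB-per-bit rule of thumb, shows why a 16-bit front end would fall short of that quoted input range:

```python
import math

# Figures from the OPN S product guide, as quoted above:
top_spl_db = 113   # maximum undistorted input level
floor_spl_db = 5   # processing works down to this level

usable_range_db = top_spl_db - floor_spl_db      # 108 dB of range
bits_needed = math.ceil(usable_range_db / 6.02)  # ~6.02 dB per bit
print(usable_range_db, bits_needed)  # 108 dB needs 18 bits, i.e. more than 16
```

Since 16 bits only span roughly 96-98 dB, covering the full 108 dB in a single fixed window requires more bits, which is exactly what the 24-bit A/D provides with headroom to spare.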

Many of us hearing aid users have large losses at high frequencies, and voice isn’t considered to have much content above 8 kHz, so cutting off the response at 8-10 kHz is probably common, though I haven’t tried to find the high cutoff frequency of other hearing aid brands/models. Your idea of using both hearing aids and headphones may or may not work if the latencies of the two devices aren’t similar and if the hearing aid system blocks higher frequencies. The headphones would probably have (much) better bass response below 100 Hz or so.

See the Wirecutter review link in the following post. The following quote is from the very end of Butterworth’s article (LE Audio and the Future of Hearing):

Once again, the differences in sound quality among these codecs are subtle at best. This is why we don’t make the inclusion of certain codecs a major factor in evaluating Bluetooth headphones and speakers. The acoustical tuning of the speaker or headphone drivers and enclosure, and the tuning of the device’s digital signal processor, have an exponentially greater effect on sound quality—and your day-to-day enjoyment of your audio gear—than the Bluetooth codec does.

I would imagine the same considerations apply to HAs; just further add the fit and gain adjustments in your HAs to the list of things to consider first. He does mention in the article that the best situation is when the sound source, the BT broadcasting device, and the receiving device (e.g., headphones) all employ the same audio codec: then the sound reproduced in your ears is ungarbled by any cross-translation between codecs. He also says that the Sony audio codec is appreciably better than the others. I am mixing apples and oranges here a bit between audio compression and BT transmission codecs, but I think the general idea is that you can’t make a silk purse out of a sow’s ear.

Wow. And here I am putting in my aids just hoping to hear anything.

I thought Bluetooth 5.2 and the ability to stream to multiple devices in sync would be a cool way to use HAs and headphones at the same time. I also read in the spec that 5.2 can either deliver the same audio quality using less power or better audio quality at the same power consumption as current Bluetooth. Interestingly, I streamed test tones to my Widex Moment aids and seem to get frequencies from around 130 Hz to 7 kHz. I can only hear to around 9-something kHz, but I think hearing the extra frequency range would be helpful. I’m using an iPhone 7 with Bluetooth 4.2 and had some audio glitches while doing this.

I agree with another post that HA manufacturers don’t list what protocols they use. I would guess that the Moments use Bluetooth 5, judging by the Widex TV Play, which specifies 100 Hz - 7 kHz with 24 ms latency. So I wonder if Widex uses the AAC codec to stream to iPhone. I wonder if Phonak uses proprietary protocols. I wonder if HAs could have a wider frequency range, or would that require too much power/processing and shorten battery life.