16 vs 24 bit converters and HA Audio quality,

I think the subject of 16 vs 24 bit HA sound, and music sound quality in general deserves a separate thread, so here goes:

This subject will be of interest mostly to those whose hearing is good enough that it can be corrected to “reasonably” normal with respect to listening to music. At present, I fortunately fall into that category. With good headphones and a wide-range equalizer, recorded music sounds quite good.

I’m fairly new to HAs, and music through my HAs doesn’t sound very good compared to the equalized headphones. The HAs work well for voice, but I’m still in the initial stages of getting them set up properly to work well with music. Not there yet, but I’m still hopeful!

A link was recently posted claiming no advantage in changing from 16 bit conversion to higher, typically 24 bit.

See: http://www.head-fi.org/t/415361/24bit-vs-16bit-the-myth-exploded

The author passionately defends 16 bit sound, perhaps too much so in my opinion. Although knowledgeable, he may be missing a significant point concerning the dynamic range of music. I have seen “experts” defend both viewpoints, 16 bit and 24 bit.

A good example in favor of 24 bit conversion is found here:

and here:

http://www.hearingreview.com/2013/03/designing-hearing-aid-technology-to-support-benefits-in-demanding-situations-part-1/

The differences of opinion seem to center on the maximum intensity of recorded music. In each case, the above authors state a value for the dynamic range but give no reference for where they got it. Without the source and details, it is impossible to determine who is correct. There are also very likely additional factors involved once the basic principles under discussion are actually put into practice.

A second problem is that none of these individuals seems to be directly involved with the actual detailed design of the conversion circuitry. I personally would tend to doubt anyone who does not have such experience. Working “in the field” doesn’t count in my book. Such subjects are much too complex to be debated with “true conviction” by people who are not actually doing the work!

It will be interesting to see how this plays out!

You seem to have posted the same link twice, maybe by error?

I think the 144 dB dynamic range for 24-bit and 96 dB for 16-bit is well established. You just have to google more if you want to understand the technical details. Both of the sources you cited above used the same values, right? So there’s no conflicting information there.
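
Those 96 dB / 144 dB figures come from the roughly 6 dB-per-bit rule of thumb (each bit contributes 20·log10(2) ≈ 6.02 dB). A quick Python sketch, just to show where the numbers come from (the function name is mine, for illustration only):

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an ideal N-bit converter."""
    # Each extra bit doubles the number of quantization levels,
    # adding 20*log10(2) ~ 6.02 dB of dynamic range.
    return 20 * math.log10(2 ** bits)

print(f"16-bit: {dynamic_range_db(16):.1f} dB")  # ~96.3 dB
print(f"24-bit: {dynamic_range_db(24):.1f} dB")  # ~144.5 dB
```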

And I also think both sources are in agreement that 24-bit gives more headroom (against the noise floor), so it’s better to record in 24-bit vs 16-bit because you don’t need to record super hot to stay above the noise floor. They’re both in agreement there, too.

The head-fi.org paper simply states that if you play back something recorded in 16-bit and the same thing recorded in 24-bit, you most likely won’t be able to tell the quality difference between them. In an exceptionally quiet listening environment, you may hear a little more noise on playback of the 16-bit recording vs the 24-bit recording, provided the material has enough very quiet passages to let you detect this. But the overall quality of the two recordings (aside from the noise floor) is not really discernible by the human ear.

The homestudiocorner paper doesn’t disagree with the head-fi.org statement either. It simply doesn’t address whether one can tell the difference between a 24-bit and a 16-bit recording on the playback end. It simply says that it’s better to record in 24-bit, but that you can make your final master (the playback) 16-bit if you want. By this, it reaffirms what the head-fi.org paper said: 24-bit has the advantage for the recording/mixing process, but on playback most people wouldn’t be able to tell the difference between the two bit depths.

So I don’t see any inconsistency or disagreement between the 2 papers.

Doc Jake, Sorry to wake you, go back to sleep!

Sorry, I copied the same link twice. Correct second one is:

http://www.hearingreview.com/2013/03/designing-hearing-aid-technology-to-support-benefits-in-demanding-situations-part-1/

Actually, I think they do disagree, most obviously on some of the basic assumptions. They agree on the scientific principles; however, they don’t agree on the assumptions that go with those principles when answering the real-world question of how to get the best sound.

The 16-bit guy claims 60 dB is enough. For one of the authors I could not spot the actual claimed level, and the other listed it as “peak values have been reported as high as 117 and 120 dB SPL” (refs. 12, 13). There may also be an apples-to-oranges difference here, clouding the issue further.

A third reference ( http://www.audiologyonline.com/articles/programming-hearing-aids-for-listening-12915)
claims music requires higher levels: “even quiet music can reach levels of 105 dBA, with peaks in excess of 120 dBA”. Although references are cited, they are not tied to these specific numbers, so I have no idea where they came from. (I’m not ambitious enough to search!)

Overall, my point is that “experts” disagree on whether higher bit depths than 16 are necessary and/or useful, and we shall see how it plays out!

Isn’t all that a bit moot when your single-driver receiver can only reproduce an effective bandwidth of about 10 kHz, with an effective dynamic range of about 90 dB in the middle and around 60 dB at the ends? The whole signal is going to be limited to the receiver’s physical performance.
The same applies at the input. Your realistic noise floor is around 20 dB due to Brownian motion. Sounds greater than 110-115 dB don’t require sampling, as you hear them directly whether they saturate the hearing aid response or not. 96 dB ought to be adequate.
That’s before you consider the personal needs of the wearer and the compression the aid uses to deal with their reduced dynamic range.

“Overall, my point is that “experts” disagree on whether higher bit depths than 16 are necessary and/or useful, and we shall see how it plays out!”

When the output goes to a 1/8" diameter speaker, that all seems academic (if you expect hi fidelity music). I think I agree with the doc. :slight_smile:

Yeah, been a slow day!

I am actually surprised they can do as well as they do with small speakers; still, there’s no physical reason I know of that prevents good sound from small ones.

I like this third paper/link (written by Widex people) that you shared a lot. That’s because it addresses the dynamic range issue specific to HAs. But it’s interesting to note that they talk about shifting the operating range of their 16-bit A/D converter and not the obvious solution of just going to a 24-bit A/D converter. That’s probably due to their platform/design limitations; they’re stuck with finding workarounds for the 16-bit design. Other HAs, like the Oticon OPN, have 24-bit processing and are able to get 114 dB SPL on the input side without any kind of workaround like operating-range shifting.

But again, I still don’t see any conflict between all 3 papers. This Widex paper focuses on the dynamic range needed to avoid distortion at the INPUT stage of the HAs from very loud sounds, like in a live music environment. That’s just another angle and another advantage of having a 24-bit A/D converter and processing. But that’s an entirely different issue from saying that 16-bit processing is not good enough for music compared to 24-bit and that humans can tell the difference.

What I’m trying to say is that the headroom issues a 24-bit system solves (more noise, distortion at high input levels, degraded speech-in-noise due to distortion, etc.), while real, do not give 16-bit processed sound a discernibly lower quality than 24-bit processed sound to a normal listener. For most normal hearing situations, they’ll sound the same, quality-wise. If you play music from your sound system or watch TV at a reasonable listening level, I don’t think you can tell the difference between 16-bit and 24-bit HAs.

Now if you go to a concert or listen to live music, then 24-bit HAs will sound better for sure because they can take in the higher dynamic range at the input to handle louder volumes and attacks of the live instruments and don’t get distorted or suffer from input dynamic compression. But even the Widex 16-bit system they talk about in the 3rd paper can manage loud inputs after they shift its operating range. So if you wear that 16-bit Widex HA at a live music concert, you can probably enjoy the live music pretty well, too, even though it’s only 16 bits. It just goes to show that headroom/dynamic range and sound quality, while inter-related, are different attributes and shouldn’t be combined and generalized together in a definitive way.

Actually, I pretty much agree. 16 bit should be adequate, and I also have my suspicions that 24 bit may be as much of a marketing point as an engineering essential. There are likely other areas that are more important to good quality sound out of HA’s.

I am puzzled, though, by the number of posters who are convinced that their digital HAs aren’t very good at music. Is there some specific weak link, or have they just not spent enough time on this? From the technology, it seems like it should be easy with digital.

My own experience so far has been great for speech, less than satisfactory for music, but as I said above, I’m still hopeful! (They’re 16-bit devices, but I doubt that’s significant!)

I don’t have a good enough understanding to explain it adequately, but I think it has to do with compression. Soft sounds are amplified much more than loud sounds and the amplification varies depending on the frequency. It’s beneficial for understanding speech, but not for enjoying music. Better is likely a good set of headphones. There are others who can do a much better job of explaining.

I think you hit the nail on the head there, MDB. As mentioned in the Widex paper (the 3rd link that Bob shared), dynamic compression of the input of a 16-bit system is an easy/cheap way to prevent clipping/distortion when subjected to loud sound. It’s kinda like somebody holding the input volume knob: if the sound gets too loud, he turns it down to avoid distortion, and when the sound gets softer, he turns it back up. With very expressive music that changes volume frequently and quickly, the effect of cranking the knob down and up, over and over, starts making the music sound funny and unnatural. That’s what they call the pumping effect. And dynamic compression may very well be set differently at different frequencies as well, making it sound even weirder.
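
To illustrate the “volume knob” analogy, here’s a minimal sketch of a feed-forward compressor’s gain computation in Python. The threshold and ratio values are made up for illustration and aren’t taken from any actual hearing aid:

```python
def compressor_gain_db(level_db: float, threshold_db: float = 90.0,
                       ratio: float = 3.0) -> float:
    """Gain (in dB, zero or negative) applied to tame loud input."""
    if level_db <= threshold_db:
        return 0.0  # below threshold: leave the signal alone
    # Above threshold, only 1/ratio of each extra dB gets through,
    # so the "knob" is turned down by the remainder.
    return (threshold_db + (level_db - threshold_db) / ratio) - level_db

for level in (80, 90, 100, 110):
    print(f"{level} dB in -> {level + compressor_gain_db(level):.1f} dB out")
```

A real compressor also has attack and release times controlling how fast that gain changes; it’s the audible up-and-down movement of this gain on fast-changing music that produces the pumping effect described above.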

For speech it’s not as bad, because speech tends to be more monotonous and doesn’t fluctuate as much in volume. But add speech to loud noise and it starts to get worse, because the dynamic compression meant to cut down the noise level suppresses the volume of the speech as well.

Absolutely. Analog aids merely amplified, thus, no weird artifacts from compression and fancy software algorithm manipulations. Of course, analog aids just plain clipped and distorted when you hit their limits, but until that point, they did music better than digital.


Is there some technical reason for the poor reputation digitals seem to have for music? It seems like it should be simple to make them at least as good. Or is it just a lack of acceptance? Some people swear by their old analog stereos, even though double-blind tests don’t seem to support their claims!

Not necessarily technical, but HAs are geared toward speech and not music, as that is what the developers think we need.

My new Widex Unique’s music mode is pretty good. It shuts off most (not all) processing and turns off compression filters. It can be done; it just hasn’t been done very well to date.

I think good music reproduction is usually synonymous with being able to reproduce good bass, which is a critical (but not the only) attribute for a lot of contemporary music genres. The HA receiver just doesn’t have the physical construction to enable this. There’s only one tiny receiver that has to cover the whole frequency range, which is no match for a speaker system with separate woofers, mid-range drivers, and tweeters.

The other aspect is what we’ve discussed earlier in the thread, the lack of sufficient dynamic range on the input mic, as well as the A/D converter, to handle the loudness of musical instruments live. Digital tricks used to avoid distortions like dynamic compression also cause pumping which makes music more unnatural at times.

Then there’s also the limitation of the HA mics themselves. Due to their tiny construction, I’m sure they’re no match for professional mics that are 100 times bigger and use much better pickup materials.

One of the things that is essential in HAs is feedback control. One way to minimize feedback is to run the feedback test across all the frequency channels and reduce the gain at the resonant frequencies that cause feedback. First, the obvious trade-off is the reduction in headroom of the frequency bands where gain reduction is necessary for feedback control. Second, the effect is worse for HAs with fewer frequency bands (like 8 or 16) because the gain reduction may be more widespread than it needs to be. HAs with more bands, like 24 or 48 channels, suffer less of the headroom reduction because the gain cut can be confined to a much narrower frequency range.
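
To put rough numbers on that point, here’s a toy illustration. The 10 kHz overall bandwidth and the evenly spaced channels are simplifying assumptions, not how any particular HA actually divides its bands:

```python
BANDWIDTH_HZ = 10_000  # assumed usable HA bandwidth

def notch_width_hz(num_channels: int) -> float:
    """Width of one channel: the minimum span a feedback gain cut must cover."""
    return BANDWIDTH_HZ / num_channels

for channels in (8, 16, 24, 48):
    print(f"{channels:2d} channels -> a gain cut spans at least "
          f"{notch_width_hz(channels):.0f} Hz")
```

With 8 channels a single feedback notch sacrifices headroom across roughly 1250 Hz of spectrum; with 48 channels the same notch can be confined to about 208 Hz.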

Thirdly, like JustEd was saying, if the HA is running a program geared toward speech, then the things done to promote speech, like noise reduction and directional beamforming, sometimes boosting the gain in the frequency bands where speech resides, or reducing reverb and echoes in an open room, can ruin the quality of the music. That’s why most HAs have a music program that essentially removes all of the speech-oriented processing so that you can hear the music unadulterated. But some modern HAs have automatic listening-environment detection and automatic program switching, which may not be smart enough to know what is music and what is just a lot of noise, so there’s a good chance the wrong (non-music) program gets selected while music is playing.

Listening to music in a noisy environment (like while driving a car) is another challenge. How would the HA be smart enough to know what is music and what is noise? If you switch to a music program, you’ll have to deal with the noise, because music programs most likely don’t have noise reduction. If you switch to a car-noise program, its noise reduction will stifle the music.

But direct streaming is one area where you can eliminate many of the limiting variables: mic construction/sensitivity/dynamic range, digital processing geared toward speech, management of noise, etc. That’s why direct streaming should sound pretty darn good if you have good source content and quality, and the right expectations about the limitations of the tiny receiver tasked with reproducing the sound in your ear.

Somewhere on the forum somebody gave an excellent explanation. The problem is dynamic range. With normal hearing, one has the equivalent of a blimp’s worth of dynamic range, from very quiet sounds to very loud (perhaps 100-120 dB of range). For the hearing impaired, it’s crammed into a bicycle inner tube: the dynamic range might only be 40 dB or less (80-120 dB).
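
That squeeze can be sketched as a simple linear level mapping. The endpoint values below are just the figures from the post; real hearing aid fitting formulas (and the per-band compression they prescribe) are far more sophisticated:

```python
def map_to_residual(level_db: float,
                    in_low: float = 20.0, in_high: float = 120.0,
                    out_low: float = 80.0, out_high: float = 120.0) -> float:
    """Linearly map a ~100 dB input range onto a ~40 dB residual range."""
    frac = (level_db - in_low) / (in_high - in_low)
    return out_low + frac * (out_high - out_low)

for level in (20, 70, 120):
    print(f"{level} dB SPL in -> {map_to_residual(level):.0f} dB SPL out")
```

With these numbers, every 1 dB of change at the input becomes only 0.4 dB at the output, which is exactly the loss of loudness contrast the blimp/inner-tube picture describes.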


Isn’t the sample rate, like 44.1 kHz, more important than 24-bit? What is the sample rate of HAs? Are they CD quality?

Sample rate determines the frequency response. The theoretical maximum frequency that a system can handle is half the sample rate. So a sample rate of 44.1 kHz (or more correctly, kilo-samples/sec) can give you a frequency response up to 22.05 kHz.

Bit depth – the number of bits in each single sample – determines the dynamic range of the system, i.e. the range of levels it can handle.
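
The half-the-sample-rate (Nyquist) relationship is a one-liner to check (the function name is mine, for illustration):

```python
def max_frequency_hz(sample_rate_hz: float) -> float:
    """Nyquist limit: the highest frequency an ideal system can represent."""
    return sample_rate_hz / 2

print(max_frequency_hz(44_100))  # CD sample rate -> 22050.0 Hz
```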

But for hearing aids, frequency response isn’t an issue, since the audio bandwidth is so restricted. I think the highest frequency any of the usual aids can handle is 10 kHz, and maybe that’s the Oticon OPN?

So the answer to your question is no, no hearing aid system has “CD quality” frequency response (and it’s not necessary to use power and processing resources to try to get there).

But I’m kind of with the gone-but-not-forgotten Doc Jake on these issues. We are dealing with technically very challenging systems; someone else has mentioned the teensy-tiny transducers here, to say nothing of the awkward placement of the microphones, at least.

As well, the last link in the chain – your hearing – is usually dramatically compromised by sensorineural damage, in terms of both frequency response and signal-to-noise ratio.

Congratulations to all of you for these threads. As an audio aficionado, now a little deaf, I find them very, very interesting.

jpeinado