I believe you’d have to have a hearing aid with a feature that needs, or can take advantage of, the higher converter resolution. There is a parameter in amplification referred to as “instantaneous dynamic range” (IDR), which is the ability to distinguish between simultaneous soft and loud sounds. It applies both to your HA and to you. I believe the Alta2 Pro has improved instantaneous dynamic range.
I’m a retired electronics engineer so I tend to use engineering terms. HA manufacturers create a lot of goofy terms that can be hard even (or maybe especially) for an EE to “decode” and understand.
All that said, I really like the Alta2 Pro, and one of the reasons I went with it over the Alta Pro was the improved IDR.
I have the most natural 360-degree perception of sound with this model.
If 16-bit can give a 96 dB input range, and the newest mics have a noise floor of around 20 dB SPL, the theoretical dynamic range is 20 to 116 dB SPL, from very quiet up to around or above your UCL.
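For anyone who wants to check that arithmetic, here’s a minimal back-of-the-envelope sketch (my own illustration using the usual ~6 dB-per-bit rule, not any manufacturer’s spec):

```python
import math

# Ratio of full scale to one quantization step for an N-bit converter
# is 2**N, i.e. roughly 6 dB per bit.
def adc_range_db(bits: int) -> float:
    return 20 * math.log10(2 ** bits)

for bits in (16, 24):
    print(f"{bits}-bit: ~{adc_range_db(bits):.0f} dB")  # ~96 dB and ~144 dB

# Anchoring the 16-bit window at a ~20 dB SPL mic noise floor:
print(f"16-bit window: ~20 to ~{20 + adc_range_db(16):.0f} dB SPL")  # ~116 dB SPL
```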
24-bit just means longer/slower coding at the A/D stage and more processor cycles chewed, so the equivalent chip gets less processing done than it could with 16-bit, and the same goes for D/A. There must be a better engineering reason for implementing it than dynamic-range resolution alone, or they’re doing it just because they could.
In a completely different context, I often see this type of question asked. The bottom line is that the number of bits from the ADC above about 10 or 12 is totally irrelevant, as the (electrical) ‘noise’ that is present everywhere means that any additional bits will end up being random.
Consider that 12-bit ADCs divide their range into 4096 ‘steps’. That means the least significant bit will change the value by 0.025%. With a 16-bit ADC, that is a 0.0015% difference.
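Those step sizes are easy to verify; here is a quick sketch of the arithmetic (my own illustration, with 24-bit added for comparison):

```python
# One LSB is 1/2**N of full scale for an N-bit ADC.
for bits in (12, 16, 24):
    steps = 2 ** bits
    lsb_percent = 100 / steps
    print(f"{bits}-bit: {steps} steps, LSB = {lsb_percent:.6f}% of full scale")
# 12-bit: 4096 steps, LSB = 0.024414%
# 16-bit: 65536 steps, LSB = 0.001526%
# 24-bit: 16777216 steps, LSB = 0.000006%
```

Whether those extra steps land above or below the analog noise floor is exactly the point about the additional bits ending up random.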
If people can hear a difference, then it is almost certainly something else.
Susan
A 24-bit ADC is going to make a BIG difference in terms of compression, as it gives considerably more headroom to work with. To dumb it down a bit, it captures higher peaks and lower lows, which gives the compression algorithms much more signal and better slopes to work with. It will likely make the biggest difference in how the HAs are able to both choose and deal with ranges for multi-band compression, and ultimately give better resolution for how they try to deal with loud environments and other types of intelligent cancellation. Yes, it technically takes more processing power, but processing power is cheap these days, specifically in terms of power requirements (which I’m sure is a huge factor in HAs).
In engineering terms, the short answer is that it will give the software much more signal to work with, which will give better results for the more complex requirements. The actual “simple” amplification end of things won’t really be affected very much, if at all, but I think it’s going to be a good thing and is not just hype.
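To make the “more signal at the soft end” point concrete, here is a toy sketch (my own illustration with made-up levels, not any manufacturer’s algorithm): quantize a mix of a loud tone and a very soft one at 16 and 24 bits, and compare how far the soft detail sits above the quantization noise floor.

```python
import numpy as np

fs = 16000
t = np.arange(fs) / fs
loud = 0.5 * np.sin(2 * np.pi * 500 * t)     # near full scale
soft = 1e-4 * np.sin(2 * np.pi * 3000 * t)   # fine detail around -83 dBFS
x = loud + soft

def quantize(signal, bits):
    # Uniform quantizer over a +/-1.0 full-scale range.
    step = 2.0 / (2 ** bits)
    return np.round(signal / step) * step

soft_rms_db = 20 * np.log10(np.sqrt(np.mean(soft ** 2)))
for bits in (16, 24):
    err = quantize(x, bits) - x
    noise_db = 20 * np.log10(np.sqrt(np.mean(err ** 2)))
    print(f"{bits}-bit: quantization noise ~{noise_db:.0f} dBFS, "
          f"soft detail ~{soft_rms_db - noise_db:.0f} dB above it")
```

In this toy case the margin grows from roughly 18 dB at 16-bit to roughly 66 dB at 24-bit, which is the extra room a compressor gets to lift the quiet parts cleanly.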
Also worth noting … the article referenced in the first post was written by my former audiologist! While he does have a wonderful grasp on the technology and mechanics behind it all, he doesn’t do his own HA fitting; he has another guy do it for him because he’s self-admittedly “not much of an electronics guy” – and let’s just say that it didn’t work out well for me.
Just because it’s 24-bit doesn’t make it instantly better; it just gives the programmers a better opportunity to make it better if they do the right things with it.
Again, it depends on what the OS programmers do with the hearing aid itself. If it gives them a way to make an audible difference, then it’s a success. If they just take it and run with the marketing department … who cares.
I’d sure like to try. I have a long and winding road behind me concerning artifacts of hearing aids. In the end I had some real experts confirm that the things I hear as artifacts are actually there, but usually even people without hearing loss don’t hear them, and practically no one with hearing loss hears them. But the ears are only one part of the system: I may have bad ears, but for all my life my brain was trained to hear every detail. So as long as the hearing aid presents the detail loud enough that I can hear it, I will also hear the artifact if it is loud enough. During my trial I was very pleased with a Siemens binax aid, but I discarded it due to distortion problems. It is much easier to avoid distortion with a 24-bit aid, so maybe the first 24-bit binax aid will be better for me than my 16-bit Bernafon aid - who knows. Of course, 94 dB dynamic range is more than enough for the hearing impaired, because our dynamic range is less than that. But it takes some tricks to handle the input, which has more than 94 dB dynamic range. Fewer tricks may mean simpler algorithms, which might result in fewer artifacts, who knows. I will definitely try 24-bit aids when I need my next aids.
Here’s the thing: I don’t think you will ever know for sure if the change in chip is what allowed the person to hear differently, simply because they will never change the chip in isolation. They will release that change with a new generation of device which will have several changes. So which change made the difference?
All I know is that back when I started fitting, around the office, everyone knew who the “difficult” patients were. I made everyone stop calling them “crazies” because I believe they all have legitimate complaints - and just need legitimate explanations for those complaints. I also know that a lot of people say that one generation newer product does not make that big of a difference. But I KNOW that each time a new product comes out, that group of “difficult” patients gets smaller. There are always a few of them who say something like, “This is what I was hoping for when I bought my last ones. Why couldn’t I have got these back then?”
So even though they may not be able to describe exactly what it is that is better, they ARE able to detect a difference and some detect enough difference that it is worth it to them to upgrade. So will someone ever say, “Man, that 24-bit sound is so much better than the crappy 16-bit sound!”? Probably not. But if they like the newer device’s sound better, who is to say that the reason is not because of the greater bit depth?
Good answer…the techie term is “instantaneous dynamic range”, meaning the digitized signal can handle teenier and bigger amplitudes at the same time…as you said, better compression opportunities.
This is a 4-year-old thread talking about a move to 24-bit processing. I am curious. Have any of the hearing aid manufacturers actually done this? I am aware of an article promoting Widex True Input Technology, from 4 years ago. It seems to be a Dolby-like technique to increase dynamic range while still using 16-bit processing. Their claim is that it increases the peak input dynamic range of the A/D processor to take advantage of the peak input that analog microphones can handle (113 dB or so). It is claimed to have benefits while listening to live music. This technology is said to be in the Dream model they sold back then. I believe the current model is the Evoke, but I don’t see them making much of a deal about high dynamic input range capability, or anything about using more than 16-bit technology.
I’m wondering if this whole idea of higher input range has faded from interest?