As I understand it, digital hearing aids perform a Fourier transform (typically an FFT) on the incoming sound signal from the microphone. The mic sampling frequency and the number of FFT points determine the frequency resolution of the hearing aid. For example, a hearing aid with a 16,000 Hz mic sampling frequency that performs a 64-point FFT would have 32 channels, with a frequency resolution of 250 Hz. The number of FFT points is usually a power of two, so I can understand a hearing aid having 32 channels.
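A quick sketch in Python, just to make that arithmetic explicit (the numbers are the example values above, not any particular device's spec):

```python
fs = 16000     # mic sampling rate in Hz (example value from above)
nfft = 64      # FFT length, a power of two

bin_width = fs / nfft     # frequency resolution of each bin: 250.0 Hz
n_channels = nfft // 2    # one-sided bins covering 0..Nyquist: 32

print(f"resolution: {bin_width:.0f} Hz, channels: {n_channels}")
```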
But what happens in a hearing aid whose number of channels is not a power of 2? I have seen the websites of some manufacturers that say they have hearing aids with anywhere from 4 to 24 channels, with the recommended value being 12 channels, etc. How do they achieve these non-power-of-2 channel counts? Is a 12-channel hearing aid internally actually a 16-channel hearing aid that just ignores the remaining 4 channels? Any ideas for understanding this would be highly appreciated.
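One possibility I can imagine, purely as a guess on my part: the FFT length stays a power of two, but the uniform bins are then grouped into an arbitrary number of log-spaced bands, so any channel count works. A hypothetical sketch, with all values made up:

```python
import math

fs, nfft, n_channels = 16000, 64, 12   # made-up example: 12 channels from a 64-point FFT
bin_hz = fs / nfft                      # 250 Hz per uniform FFT bin
f_lo, f_hi = 250.0, fs / 2              # assumed analysis range, 250 Hz to Nyquist

# 13 log-spaced edges -> 12 bands; nothing forces this count to a power of 2
edges = [f_lo * (f_hi / f_lo) ** (k / n_channels) for k in range(n_channels + 1)]

# Assign each uniform FFT bin to the log-spaced band containing its centre
band_of_bin = []
for b in range(1, nfft // 2 + 1):
    frac = math.log(b * bin_hz / f_lo) / math.log(f_hi / f_lo)
    band_of_bin.append(min(int(frac * n_channels), n_channels - 1))

print([round(e) for e in edges])  # 12 band edges running from 250 Hz up to 8 kHz
```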
My conclusion is that marketing drives this. Many platforms appear to be developed to support the most capable model (most profit) and then they are "dumbed" down to lower price points to increase sales. From an engineering perspective this is quite common. IMHO of course.
Oh goodness, I have no idea. It's not something I've ever put much thought into. But even at the premium levels you run into places where channels aren't a power of two. I'd expect that there isn't a simple answer. The hearing aids are trying to do all sorts of things and are also limited by all sorts of things. Different features may also have different numbers of channels. But Um Bongo may have a better answer; he's better at hardware than I am.
Redundancy/practicality: take your nominal 16 kHz sample rate above; depending on how your ADC is configured, you'll have twice oversampling built in to keep the resolution of the sound there. That gives you 8 kHz of bandwidth which, like you say, should fall out at 250 Hz per channel. But that's going to give you a huge number of channels at 8 kHz and hardly any resolution at 250 Hz. So you average per octave, not per fixed frequency step. Then you take a look at the actual bandwidth you need… sorry, I actually fell asleep in my own sentence there… Also lots of stuff about the practicality of overlapping sample bands etc.
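For what it's worth, a toy sketch of that per-octave averaging; the bin magnitudes are placeholders, not real data:

```python
bin_hz = 250.0               # uniform FFT bin width from the example above
mags = [1.0] * 32            # placeholder magnitudes for bins 1..32 (250 Hz..8 kHz)

octave_edges = [250, 500, 1000, 2000, 4000, 8000]
for i, (lo, hi) in enumerate(zip(octave_edges, octave_edges[1:])):
    last = i == len(octave_edges) - 2
    band = [m for b, m in enumerate(mags, start=1)
            if lo <= b * bin_hz < hi or (last and b * bin_hz == hi)]
    print(f"{lo}-{hi} Hz: {len(band)} bins averaged into one channel")
# -> the bottom octave gets 1 bin, the top octave gets 17: exactly the
#    resolution imbalance described above
```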
Caveat: unless the AI interpretation of speech requires more resolution to work effectively.
Output resolution: bear in mind that the entirety of this signal is going to be fed down a couple or more strands of Litz wire and demodulated by a "receiver" with one coil and a beam/reed. So for all the fancy splitting and separating you do at the front, you can't really send one signal down one channel and then something different down the adjacent one; 16 dB per adjacent octave is about the best. So even if you can split the sound into 10,000 channels (I remember the SwissEar), your output device just mushes it back together.
I said a few years ago on here that more than 7-8 channels is probably wasted and 12 was plenty. (Unlike Bill Gates) that's probably still true, if those channels align with the peak resonances of the output.
It's like asking how many cylinders your car really needs…
This is what I've understood the channel specification to mean: the number of bands the full range is broken down into for setting up the gain (at each input level) per section of the full range.
But with a cookie-bite loss, gain from an overlapping channel spilling into the normally heard frequencies is not going to feel good. Are 8 channels or blocks really sufficient in that case?
Current Signia 7AX [Rextons at Costco] have 48 channels for environmental AI processing (I've read that it's effectively 96 because they double it using both mics, front/back).
They have 20 "handles" for gain manipulation (over a 10 kHz range), and as a DIYer I feel like that's about sufficient for getting specific frequencies/sounds right.
Recently I had an issue where only the "ch" sound of speech was annoyingly loud, and coincidentally my hearing loss slope is steepest in that range.
Without enough division I wouldn't be able to get it right.
Channels are often drawn as blocks, but they're actually rounded, roughly sinusoidal curves with flattened tops. The peak of the channel sits where the peak of the curve does, and the "sides" of the curve slope away at x dB per octave. Technically, the more channels you have, the steeper each one needs to roll off. But there are limits.
I'm not 100% sure what amplifiers are capable of these days, but 32 dB per octave used to be a realistic limit. You also need some overlap between channels to reproduce a smooth signal without introducing distortion.
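A rough sketch of one channel's shape under those constraints; only the 32 dB/octave figure comes from the discussion above, the flat-top width and frequencies are arbitrary:

```python
import math

def channel_gain_db(f, fc, flat_width_oct=0.5, slope_db_per_oct=32.0):
    """Gain of one channel at f Hz: flat top around fc, then sloping skirts."""
    octaves_out = abs(math.log2(f / fc))                 # distance from centre in octaves
    excess = max(0.0, octaves_out - flat_width_oct / 2)  # how far past the flat top
    return -slope_db_per_oct * excess

# Two channels an octave apart leak into each other's band -- the overlap
# needed for a smooth combined output
for f in (500, 707, 1000, 1414, 2000):
    print(f"{f} Hz: {channel_gain_db(f, 1000):6.1f} dB re channel centred at 1 kHz")
```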
Add to this the physical performance of the receiver, which effectively decodes and smooths the signal further, coupled with the actual function of your basilar membrane, which can't really tell adjacent sounds of huge intensity apart beyond a certain number of dB per octave due to upward and downward spread of masking.
So, although we like to digitise everything and put it in nice square boxes, the output doesn't quite work like that, and even then it's decoded in your head by hair cells sitting in jello/jelly.
My uneducated guess is that, yes, they do the processing over a larger number of channels/bins, then group them down into a smaller number of bands with common controls for marketing purposes.
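In code, that guess might look something like this sketch; every count and gain below is invented for illustration, not taken from any datasheet:

```python
internal_channels = 48     # invented: DSP-side analysis channels
handles = 12               # invented: user-facing gain controls
group = internal_channels // handles   # 4 internal channels per handle

handle_gains_db = [10.0] * handles     # placeholder per-handle gains
channel_gains_db = [handle_gains_db[ch // group] for ch in range(internal_channels)]

print(f"{handles} handles drive {len(channel_gains_db)} internal channel gains")
```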