Why do some hearing aids have a number of channels that is not a power of two?

As I understand it, digital hearing aids perform a Fourier transform (FFT) on the incoming sound signal from the microphone. The mic sampling frequency and the number of FFT points determine the frequency resolution of the hearing aid. For example, a hearing aid with a 16000 Hz mic sampling frequency that performs a 64-point FFT would have 32 channels, with a frequency resolution of 250 Hz. The number of FFT points is usually a power of two, so I can understand a hearing aid having 32 channels.
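A quick sketch of that arithmetic (assuming the 16 kHz sample rate and 64-point FFT from my example):

```python
fs = 16000     # mic sampling frequency (Hz)
n_fft = 64     # FFT length

# Frequency resolution: spacing between adjacent FFT bins
resolution = fs / n_fft    # 250.0 Hz

# A real-valued input gives n_fft/2 usable positive-frequency bins
n_channels = n_fft // 2    # 32

# Centre frequency of each bin
bin_freqs = [i * resolution for i in range(n_channels)]
print(resolution, n_channels)   # 250.0 32
```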

But what happens in a hearing aid with a number of channels that is not a power of 2? I have seen the websites of some manufacturers that say they have hearing aids with 4 to 24 channels, with the recommended value being 12 channels, etc. How do they achieve these non-power-of-2 channel counts? Is a 12-channel hearing aid internally actually a 16-channel hearing aid that just ignores the remaining 4 channels? Any ideas for understanding this would be highly appreciated.

I think @Um_bongo and @Neville could provide better information

My conclusion is that marketing drives this. Many platforms appear to be developed to support the most capable model (most profit) and then they are ‘dumbed’ down to lower price points to increase sales. From an engineering perspective this is quite common. IMHO of course.

Oh goodness, I have no idea. It’s not something I’ve ever put much thought into. But even at the premium levels you run into places where channels aren’t a power of two. I’d expect that there isn’t a simple answer. The hearing aids are trying to do all sorts of things and are also limited by all sorts of things. Different features may also have different numbers of channels. But Um Bongo may have a better answer; he’s better at hardware than I am.

Thanks. I am not sure I can even understand the subject here, let alone answer it.


This might be relevant. It’s from Fitting and Dispensing Hearing Aids, Third Edition by Brian Taylor and H. Gustav Mueller.


Well, of course. They aren’t going to make 5 different chips when you just need one, loaded for each level of tech.


Two or three reasons:

  1. Marketing guff/performance.

  2. Redundancy/practicality: take your nominal 16 kHz sample rate above; depending on how your DAC is configured, you’ll have 2× oversampling built in to keep the resolution of the sound there. That gives you 8 kHz which, like you say, should fall out at 250 Hz per channel. But that’s going to give you a huge number of channels up at 8 kHz and hardly any resolution down at 250 Hz. So you average per octave, not per frequency. Then you take a look at the actual bandwidth you need… sorry, I actually fell asleep in my own sentence there… There’s also lots of stuff about the practicality of overlapping sample bands etc.
    Caveat: unless the AI interpretation of speech requires more resolution to work effectively.

  3. Output resolution: bear in mind that the entirety of this signal is going to be fed down a couple or more strands of Litz wire and demodulated by a ‘receiver’ with one coil and a beam/reed. So for all the fancy splitting and separating you do at the front, you can’t really send one signal down one channel and then something different down the adjacent one; 16 dB per adjacent octave is about the best. So even if you can split the sound into 10,000 channels (I remember the SwissEar), your output device just mushes it back together.
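The per-octave averaging in point 2 can be sketched roughly like this. The band edges below are purely illustrative, not any manufacturer’s actual scheme; the point is just that linearly spaced FFT bins pile up in the top octaves and leave the bottom octave with almost nothing:

```python
fs, n_fft = 16000, 64
resolution = fs / n_fft                                       # 250 Hz per bin
bin_freqs = [i * resolution for i in range(1, n_fft // 2)]    # skip the DC bin

# Illustrative octave band edges (Hz)
edges = [250, 500, 1000, 2000, 4000, 8000]

channels = []
for lo, hi in zip(edges[:-1], edges[1:]):
    channels.append([f for f in bin_freqs if lo <= f < hi])

for lo, members in zip(edges[:-1], channels):
    print(f"octave starting {lo} Hz: {len(members)} FFT bins")
# one bin covers the whole 250-500 Hz octave, sixteen bins cover 4-8 kHz
```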

I said a few years ago on here that more than 7-8 channels is probably wasted and 12 was plenty. (Unlike Bill Gates) that’s probably still true if those channels align with the peak resonances of the output.

It’s like asking how many cylinders your car really needs…


This is what I’ve understood the specification for channels to mean: the number of bands the full range is broken down into for setting up the gain (at different levels) for each section of the full range.

Edit: I’m referring to @x475aws reply above.

But in a cookie-bite loss, overlapping gain (increased volume) spilling onto the normally heard frequencies is not going to feel good. Are 8 channels or blocks really sufficient in that case?

Current Signia 7 AX [Rextons at Costco] have 48 channels for environmental AI processing (I’ve read that it’s actually 96, because they double it using both mics, front/back).
They have 20 “handles” for gain manipulation (over a 10 kHz range), and as a DIYer I feel that’s about sufficient for getting specific frequencies/sounds right.
Recently I had an issue where only the “ch” sound of speech was annoyingly loud, and coincidentally my slope is steepest in that range.
Without enough division I wouldn’t be able to get it right.


Channels are often drawn as blocks, but they’re actually rounded sinusoidal curves with the tops flattened. The peak of the channel sits where the peak of the curve does, and the ‘sides’ of the curve slope away at x dB per octave. Technically, the more channels, the steeper each one needs to roll off. But there are limits.

I’m not 100% sure what amplifiers are capable of these days, but 32 dB per octave used to be a realistic limit. You need some overlap between channels to reproduce a smooth signal without introducing distortion, too.
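A rough illustration of why that slope limit caps the useful channel count. At a fixed dB-per-octave roll-off, the closer two channel centres sit, the less a channel has attenuated by the time it reaches its neighbour’s centre, so channels inevitably overlap. The centre frequencies here are made up for the example:

```python
import math

slope_db_per_octave = 32      # the rough practical limit mentioned above
f_center = 1000.0             # this channel's centre frequency (Hz, illustrative)
f_neighbor = 1500.0           # adjacent channel's centre (Hz, illustrative)

# Distance between the centres, measured in octaves
octaves_apart = math.log2(f_neighbor / f_center)   # ~0.58 octaves

# How far down this channel's skirt is at the neighbour's centre
attenuation_db = slope_db_per_octave * octaves_apart
print(round(attenuation_db, 1))   # ~18.7 dB: substantial leakage remains
```

Pack the centres twice as densely and that figure halves, so past a point extra channels stop being independent.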

Add to this the physical performance of the receiver, which effectively decodes and smooths the signal further, coupled with the actual function of your basilar membrane, which can’t really tell adjacent sounds of huge intensity apart by more than a certain number of dB per octave due to upward and downward spread of masking.

So, although we like to digitise everything and put it in nice square boxes, it doesn’t output quite like that and even then the output is decoded in your head by hair cells sitting in jello/y.


My uneducated guess is that, yes, they do the processing over a larger number of channels/bins, then group them into a smaller number of bands with common controls for marketing purposes.
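That guess could look something like this: many internal bins, mapped onto fewer user-facing gain controls. The bin and control counts below are illustrative, not any real product’s internals:

```python
n_bins = 32       # internal processing bins (illustrative)
n_controls = 12   # marketed "channels" with adjustable gain (illustrative)

# Assign each bin to a control, spreading bins as evenly as possible
groups = [[] for _ in range(n_controls)]
for b in range(n_bins):
    groups[b * n_controls // n_bins].append(b)

sizes = [len(g) for g in groups]
print(sizes)   # [3, 3, 2, 3, 3, 2, 3, 3, 2, 3, 3, 2]
```

Adjusting one control would then move the gain of its 2-3 underlying bins together, which is consistent with a 12-channel model simply being a coarser grouping of the same hardware.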