I think I’m backtracking on my tentative agreement re dynamic range. Someone – @volusiano maybe? – remarked here on the 96 dB limitation of 16-bit sampling – and I actually think the limitation we’re talking about is maximum SPL. In other words, the maximum absolute “loudness” that the microphone will handle without distortion.
Partly because of that 96 dB limitation, but mainly because of the expansion that gets applied in the processing in the hearing aids. Everyone talks about frequency-dependent level compression, but I think we miss the fact that there’s a lot of expansion applied to low-level sounds.
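For what it’s worth, that 96 dB figure is just the theoretical dynamic range of linear PCM – roughly 6.02 dB per bit. A quick back-of-the-envelope check:

```python
import math

# Theoretical dynamic range of linear PCM: 20*log10(2^bits),
# i.e. about 6.02 dB per bit (ignoring dither, noise shaping, etc.).
def pcm_dynamic_range_db(bits: int) -> float:
    return 20 * math.log10(2 ** bits)

print(round(pcm_dynamic_range_db(16), 1))  # ~96.3 dB
print(round(pcm_dynamic_range_db(24), 1))  # ~144.5 dB
```

So 16 bits buys you about 96 dB between full scale and the quantization floor, before any analog limitations even enter the picture.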
The compression that sensorineural hearing loss seems to engender – not least because of recruitment – means, I think, that the dynamic range of our hearing is greatly compromised. So we’re not trying to get 110 dB (or whatever) through a linear chain. We’re trying to get 110 dB into, what, 60 or 70 dB by the time it gets to your brain?
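Put numbers on that and it’s roughly a 1.7:1 overall compression ratio. A toy static mapping – made-up round figures, nothing like a real multi-band hearing-aid compressor – just to illustrate the squeeze:

```python
# Toy static compression: map a 110 dB input range linearly (in dB)
# onto a 65 dB "usable" output range. The 110 and 65 dB figures are
# illustrative, not taken from any real fitting prescription.
def compressed_level_db(input_db: float,
                        input_range_db: float = 110.0,
                        output_range_db: float = 65.0) -> float:
    ratio = input_range_db / output_range_db  # about 1.7:1 overall
    return input_db / ratio

print(round(compressed_level_db(110.0), 1))  # 65.0
print(round(compressed_level_db(60.0), 1))   # 35.5
```

Real aids do this per frequency band, with level-dependent ratios and time constants, but the overall effect is the same: a big input range has to fit into a much smaller residual one.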
“Dynamic range” is, essentially, signal-to-noise ratio. And given how bad the final processing is – between our eardrums and our brains – noise floor isn’t really an issue. So the aids can expand the s#!t out of lower input levels (and they do).
Maybe dynamic range matters because the microphones also have to handle low-level, low-crest-factor signals like speech? Maybe this is the point that I’m missing. If the microphone needs to be able to handle, say, 100 dB SPL for music peaks (or more?), then for low-level signals like speech, the effective SNR is going to be low (because of all the headroom that needs to be there for music peaks).
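The headroom arithmetic there is simple enough to sketch. Assuming (hypothetically) that full scale is pinned to the loudest peak the mic must pass, and the chain has ~96 dB of range below that, the noise floor follows the headroom up:

```python
# Hypothetical headroom arithmetic: the noise floor sits
# converter_range_db below whatever SPL is mapped to full scale.
# All SPL values here are illustrative assumptions.
def effective_snr_db(signal_spl_db: float,
                     full_scale_spl_db: float,
                     converter_range_db: float = 96.0) -> float:
    noise_floor_spl = full_scale_spl_db - converter_range_db
    return signal_spl_db - noise_floor_spl

# Speech at 65 dB SPL with full scale set at 100 dB SPL:
print(effective_snr_db(65.0, 100.0))  # 61.0
# Raise full scale to 110 dB SPL for louder music peaks:
print(effective_snr_db(65.0, 110.0))  # 51.0
```

Every dB of headroom reserved for music peaks comes straight out of the effective SNR of quiet speech – which may be exactly the point I was missing.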
I really apologize for thinking out loud. I’m enjoying this conversation. I spent many years in the pursuit of linearity – haha, except for the “loudness wars” – so dealing with these wildly non-linear systems is really a puzzle.