I post these because I read medical news, but I don’t always understand it. Will this eventually translate into improvements for in-the-ear hearing aids?
@allenmoretsky Thank you for the article.
Does this mean that current recipients of CIs could need software adjustments, without resorting to surgery?
To try to summarize and simplify: for logistical reasons, previous research focused on how the cochlea functions at higher frequencies; we couldn’t physically measure at lower frequencies because of limits in our technology relative to anatomical access. As a result, we had to say, “this is how things appear to work in the cochlea at high frequencies, and so probably at low frequencies, too.” (Not that there isn’t still ongoing discussion about the precise mechanisms at those higher frequencies.) But there have been some other reasons to think that low frequencies might not work the same way, which is why these researchers are trying to investigate it.

What this paper introduces is a new methodology for directly observing the function of the cochlea in response to lower-frequency sounds (~650 Hz and below) in experimental animals (guinea pigs), and the results seem to support one existing theory and reject another. At the end of the day, the researchers are not suggesting that previous research is incorrect (which the media article seems to imply); they are looking at an area where there is a gap, where we have only been able to infer function indirectly and then questioned ourselves because some other converging evidence was inconsistent with that inference, and they are presenting new methods to collect data directly. It’s cool stuff, and it contributes to the ongoing discussion and refinement of our understanding. As with all new work, it needs to be scrutinized and replicated, and scientific consensus will move along with it, or not.
It will not lead to a change for hearing aids that I can see offhand. Perhaps a change for cochlear implants, though human CI research has already shown that low-frequency perception in cochlear implant users is more highly dependent on timing codes. Indeed, the authors reference that work in their discussion. So rather than a big change, this just further supports something we were already seeing.
My understanding is that the current thinking is that the cochlea is split into different zones, with each zone taking care of certain frequencies. (Not from the article, but from YouTube.)
But this article is saying that the whole cochlea gets stimulated at all frequencies, meaning there is no zoning.
If one part of the cochlea is unusable, then the rest is capable of doing the job, hence the term “robust” in the article.
Which is a big deal.
Not sure if that’s correct.
This research was only looking at low frequencies; it does not dispute the place code at higher frequencies. Although, at least when I was more involved with this stuff years and years ago, there was still ongoing discussion about the importance of the characteristic frequency that a particular nerve fibre responds to, versus the slope of change of its response across frequencies and the population codes involving such. I particularly remember this debate regarding higher regions, but it could reasonably apply to the cochlea as well. That said, I’ve been out of this for a long time.