Will AI Hearing Aids be Boom or Bust?

Battery technology is advancing along with processor technology and AI. Newer lithium-ion cells can provide roughly 20% greater capacity in a package that is about 20% smaller. So the batteries of the future will not necessarily have to be bigger to power a DNN.

It will be interesting to see if Bluetooth LE is a game changer. If there is no audible lag, a cellphone could become a processor for the hearing aid, offering a huge jump in power. Where I think AI is most useful is in distinguishing voices from background noise in real time. This has been achieved in the lab but is beyond the power of the CPUs currently used in hearing aids. Correct me if I’m mistaken.

We already have BLE; it’s LE Audio that is the “game changer,” and I think you’re right: the combination of HAs, a smartphone, and AI over LE Audio will be the game changer.


Thanks for the update. I understand why I was confused earlier. The real question is whether the latency is low enough. If phones were used for processing, the audio would have to travel in both directions and be processed without any perceptible lag. Perhaps it can all be done in the HA. For the non-hearing-challenged - seamless translation in real time.
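
To put rough numbers on it (every figure below is an illustrative assumption, not a measurement of any real product or radio link), a back-of-the-envelope sketch of the round-trip budget might look like this:

```python
# Back-of-the-envelope budget for sending audio to a phone and back.
# All figures are illustrative assumptions, not measurements.
radio_hop_ms = 5   # one wireless hop (Bluetooth LE Audio or UWB), each direction
dsp_ms       = 5   # phone-side processing, e.g. a noise-reduction model
buffering_ms = 4   # codec framing and jitter buffers, in total

round_trip_ms = 2 * radio_hop_ms + dsp_ms + buffering_ms
print(f"estimated round trip: {round_trip_ms} ms")   # -> 19 ms

# Delay tolerance for hearing your own voice is often cited in the
# low tens of milliseconds; 20 ms is used here purely as a placeholder.
tolerance_ms = 20
print("feasible" if round_trip_ms <= tolerance_ms else "too slow")
```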


There are significant limitations to what AI in the hearing aid itself can achieve, as you are limited to on-chip neural processors. And given the power constraints (chip power draw and battery size), the neural processing units in those chips are likely fairly small. I would love to see the Oticon Polaris chip layout compared to, say, Apple’s M2 CPU, which has a large neural processor, to see what they are using. The power constraints, as well as the huge loss of economy of scale (less volume), mean they are likely several generations behind state-of-the-art silicon process technology as well.
As stated by ‘user246’, there is a lot of opportunity for AI-driven improvement in the fitting software; there you are not limited to on-chip processing and can use ‘traditional’ server-based machine learning.


Much as I’m an advocate for LE Audio, it may be Ultra-Wideband that brings latency down to where off-device processing becomes feasible.


The Holy Grail is speech understanding in complex environments; this is the #1 most sought-after benefit.


Interesting discussion, information, and speculation. For me, the area in which I’d love to see improvement is in audio listening, decoding, and training programs for the user. While what gets processed and fed into the ear is vital and important, it is what my brain does with that input that ultimately determines whether my speech comprehension is good or terrible. The only online audio training I have ever done is LACE, with some tested improvement in speech in noise.

I imagine that if as much time and energy went into improving brain audio processing as goes into HA improvements, the synergistic impact over time of addressing the entire audio processing stream would bear fruit. However, I also expect that it is in the HAs themselves that companies see they can make a profit; also, many users are not motivated to undertake training and put the time in. I too would love some magic bullet of HAs to make things effortless, but for many this is just a pipe dream for the next chunk of years.

FYI, I have no connection with LACE. A few people on the forum have done LACE with mixed results; it is another tool, unlikely to be a miracle. FYI: _____auditorytraining.com/ - replace the start of the URL with “lace” if you wish the link to work, or do a search on LACE audio training.

For many, not for all. I want speech understanding, but not at the cost of hearing natural sounds.


I think that there are too many people moving forward with this for there not to be real benefits in the medium to long term. For example:

https://audioxpress.com/news/audiosourcere-sets-new-standard-for-real-time-ai-powered-audio-separation

“Very soon, this technology will be used to power everyday features in personal audio devices running in real time to allow us to better understand conversations in a loud restaurant, remove annoying commentary from a live broadcast, or protect our hearing from loud noises. Those will be demonstrated at CES 2024, in Las Vegas.”


AI is not just a fad confined to the hearing aid industry. AI is used in many other facets of life, increasingly everywhere. Hearing aid companies that don’t use AI will eventually be left behind eating dust. The difference now is the extent to which a hearing aid company uses AI: just barely enough to be able to claim that they have AI in their aids like everybody else, or extensively enough to make a real difference to their products.

How is AI different than, you know, contemporary computer chip technology?

I can see AI being used in ads for the next generation of HAs. For all I know, I already have AI in my current aids. It’s just not called that.

Is noise reduction in a restaurant a program, or is it AI? Etc.

Sorry, but the technology will NOT interface with my brain neurons in any meaningful way. It cannot learn in any meaningful way, although it may help to map each person’s particular loss in each ear and give better results - sort of like setting optimum settings for live situations. Anyway, this is my fantasy of what AI might do… but I’m predicting a lot of disappointment.

Contemporary chip technology relies on programming algorithms in software to process data. Algorithms have their limits, as the logic and rules are explicit and bounded, which prevents them from being comprehensive and exhaustive enough to effectively cover the multitude of (millions of) conditions and situations.

AI is data driven and is trained by data. The more data used to train the AI, the smarter and better it is at handling its tasks; with enough data to train it, it can become very, very good at doing its job.

Take a simple example and analogy: handwriting recognition for letter sorting at the post office. A software algorithm can probably be written fairly easily to recognize addresses in “printed” letters based on a number of fonts. In this case, its limited data set for recognition is the set of fonts used for printed text so far.

But if you have handwritten addresses on a letter, simple computer algorithms would not work well on them, because the data they use is limited to recognizing the fonts in their database. Handwritten addresses, on the other hand, can take virtually limitless forms due to the free-hand nature of writing, while still adhering to certain principles so that a human can make out what a squiggly letter means. Even if a handwritten letter is not legible, looking at the composition of the whole word can lead a human to deduce what the illegible letter is supposed to be.

Anyway, the post office can use all of the handwritten addresses it has as data (and a massive amount of data it is) to train an AI system to recognize free-hand writing and convert it into the correct text.

I know this is a very simplified analogy and example, and in fact, before the age of AI, the post office had probably already devised software algorithms complex enough to recognize handwritten addresses. But if AI technology had been around back in those days, it probably would have been easier, simpler, and quicker to devise and implement an AI-based system to recognize free-form handwriting than to come up with very complicated computer-based software algorithms to sort the mail.
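
To make the analogy a bit more concrete, here is a minimal training sketch in Python/PyTorch, using the public MNIST digit set as a stand-in for handwritten address characters (purely an illustration of the idea, not how any postal system actually does it):

```python
# Minimal sketch: train a tiny classifier on handwritten digits (MNIST),
# standing in for the "handwritten address" analogy.
# Assumes PyTorch and torchvision are installed.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# A small fully connected network: 28x28 pixels in, 10 digit classes out.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

train_data = datasets.MNIST("data", train=True, download=True,
                            transform=transforms.ToTensor())
loader = DataLoader(train_data, batch_size=64, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One pass over 60,000 labeled examples: the "massive data" is consumed here,
# up front. What survives afterwards is only the trained weights.
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# The entire "condensed experience" is ~100k parameters - a few hundred KB.
print(sum(p.numel() for p in model.parameters()), "trained parameters")
```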

The bottom line is that algorithm-based software has its limits because algorithms are primarily logic- and rule-based, applied against a set of data. But if the data set becomes enormous, algorithm-based software runs into the limits of computing power and of the storage needed for big data.

AI (specifically a deep neural network), on the other hand, removes these two limitations at run time, because the massive amount of training on the massive amount of data has already been done up front (or behind the scenes, so to speak); the “smart” results have already been condensed out of all that training and data and are now available for you to use.
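
In code terms, what is left after training is just a set of frozen weights and a few matrix multiplies. A hypothetical sketch of that runtime side (random numbers stand in here for trained parameters, which in a real device would be loaded from firmware):

```python
# Sketch of what "inference" looks like once training is done: load frozen
# weights and run a couple of matrix multiplies. No training data, no big database.
import numpy as np

# Hypothetical pre-trained parameters (random here purely for illustration).
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((784, 128)), np.zeros(128)
W2, b2 = rng.standard_normal((128, 10)), np.zeros(10)

def classify(pixels: np.ndarray) -> int:
    """Forward pass: two matrix multiplies and a ReLU - the whole runtime cost."""
    hidden = np.maximum(pixels @ W1 + b1, 0.0)   # hidden layer with ReLU
    return int(np.argmax(hidden @ W2 + b2))      # most likely of the 10 classes

print(classify(rng.random(784)))  # runs near-instantly, even on a modest CPU
```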

Another analogy: if you could copy the tennis portion of Roger Federer’s brain and install it into your own brain, you would be able to play tennis as well as he does. All those years of intensive training and the massive amount of experience he gained playing on tour have been condensed into his tennis deep neural network, available at your fingertips to use.


Thanks, Volusiano, for your illuminating response.

How does this tech translate into helping hearing aids function better?

To answer my own question, based on Volusiano’s analogy above, I suppose that thousands of people with aids listening in restaurants would provide a larger database for programmers to work with and incorporate into an AI program, so it could handle whatever particular restaurant environment a wearer might find herself in.

So it comes down to still more capacity available on chips. Or is it that AI aids would “learn” how the listener/patient likes to hear in any particular environment? This last seems to be the promise of AI, which I think won’t actually come to pass. But maybe the “noisy environment/restaurant” settings will become better due to so-called AI, i.e. a larger database.

I edited my post a little more to add a remark at the end on how AI can help remove the limitation of the capacity available on chips. So if you reread my post, with the Federer analogy, hopefully you will see that the training on a massive amount of data that is “condensed” into the AI system gives it the “smarts” that eliminate the on-chip capacity issue of conventional algorithmic software, which can just grow and grow into more code and more data.

To make it more plain: a newly designed Deep Neural Network initially has very “dumb” coefficients, biases, and weights assigned to its neurons and interconnections. But after millions of training cycles using millions of data sets, the DNN “refines” those initially dumb and ineffective values until it has much smarter, more experienced coefficients, weights, and biases that allow it to perform at its best. The computing capacity required to store these smart “parameters” (the coefficients, biases, and weights inside a DNN) is very small - dwarfed by the “capacity required to be available on chips” for a conventional software system based on algorithmic logic and rules.
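
As a rough illustration of how little storage such trained parameters actually need (the layer sizes here are invented for the example, not taken from any real hearing aid chip):

```python
# Rough arithmetic for how little memory a small DNN's "smarts" occupy.
# Layer sizes are assumptions for illustration only.
layers = [(64, 128), (128, 128), (128, 32)]        # (inputs, outputs) per layer
params = sum(i * o + o for i, o in layers)         # weights plus biases
print(params, "parameters")                        # -> 28960
print(round(params * 2 / 1024, 1), "KB at 16-bit precision")   # -> 56.6 KB
```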

OK, another analogy: back in the day, IBM put its supercomputer, with tons of “capacity available on chips” as you put it, up against a human chess champion. Naturally, the brain of that human chess player is tiny, yet it could still hold its own against IBM’s huge supercomputer arrays. What AI technology can do now is model the deep neural network of the human brain and train that DNN with millions of data sets, so that the end result is the “smarts” built into the AI. So now, instead of using a real human chess player to play against that IBM supercomputer, a DNN with sufficient chess training could probably replace the human player. Again, it’s a simplified analogy, but hopefully it carries the point across.

Ummm… no. The brain of a human, chess player or otherwise, is wildly more complex and capable than any computer. Proof? Ask a computer if it’s alive or dead, or if it loves the sunset. Sure, it will give a human-programmed response. Do you really think that a computer sees, or loves, THIS sunset tonight? Of course not. That’s beyond its capacity.
Do you really think any AI knows if it’s alive or dead? No.

All of this and much more is basic to human intelligence. Computers have no clue.

Ummm… I think you totally missed the point I was trying to convey here in terms of size. I never said that the brain of the human chess player is “less complex” than the IBM supercomputers. I only said that the human chess player is “TINY” compared to the supercomputers. You incorrectly read my “tiny” as your “less complex”.

The point I was trying to convey with the “size” comparison is to contrast your “capacity on chips” argument - which implies that implementing AI requires massive computing power and memory (hence the huge IBM supercomputer arrays in the analogy) - with the DNN AI inside something like the Oticon hearing aid, which doesn’t require massive computing power and memory at all in its final implementation form.

Yes, it does require massive computing power and memory, and time as well, for data collection and training - but only behind the scenes in the lab, where computing power, memory, and time consumption are not an issue at all. In its final form, it’s just a DNN AI with a bunch of smartly trained parameters stored inside the tiny chip that goes into the hearing aid. The “magic” is in the properly and painstakingly derived parameters that make the whole network perform effectively for its assigned job.

Just like the human brain (albeit much, much, much simpler than the human brain), size and capacity are not the issue or the limitation on a DNN AI becoming smarter. It’s the training and experience that make the DNN AI smart, just like the real-life training and experience that make the real human brain smart. That’s why a fully developed adult brain is smarter than an undeveloped baby’s brain. It’s all in the training and learning experience.

And of course we’re only talking about copying and modelling the data acquisition and training part here; we haven’t even touched the abstract things that can’t be copied yet, like intuition, emotion, feeling, etc.

Excellent!

I would comment on the concept of “algorithm” versus “AI”.

As a person who spent his entire working career in computer design, I would caution that the only thing a digital computer can do is process algorithms - nothing more, nothing less. So, in this sense, I think of AI as algorithmic. But rather than processing fixed algorithms, AI can modify its algorithms recursively. So AI begins to appear as though it is “thinking” along the lines of what our minds do. I would argue that AI is algorithmic because, at the bottom, computers do algorithms. But I get the notion that somehow AI seems a bit different, a bit “more” - ain’t it the truth!

(This raises the notion - and the fear, which is presently misplaced in my view - that AI can somehow take upon itself the agency of the human mind. A machine algorithm does not possess its own meaning and does not act for its own purposes, whereas all living organisms have their own purposes - at a minimum, whether it be a bacterium or a human, the purpose to survive and reproduce. In the end, so far as we know, an algorithm processor is machinery and nothing more. Science has no answer to how human agency differs from algorithmic machine thinking, or even whether, at the bottom, there is a difference - hang the claims of neuroscience, integrated information theory, et al.; the mind-body explanatory gap remains as unfathomable as ever, and the so-called “hard problem of consciousness” remains unsolved. So it remains to be seen whether a present-day digital computer, which is nothing more than an algorithm processor, can somehow cause human-like agency to emerge from ever more complex AI processing. In other words, at the bottom, are our minds machinery or something else? Philosophers of mind continue to have a field day with questions like this. And with the onward rush of AI advancements, the questions become a lot more relevant and a lot less abstract than they have historically been.)


If we accept that AI ‘is’ going to be present, it will be not as a benevolent dictator but as a tool to tune the response in more complex situations.

In my current car there’s some AI functionality which controls the gearbox changes by ‘guessing’ when I’m approaching a corner from the speed, throttle position, g-forces, and steering wheel position.

What I reckon most of the manufacturers need is a few more personally directed inputs into their ‘sound scene’ tuning to allow the AI to optimise itself on the fly. It’s not easy to know whether you need to focus on a certain directional pattern or noise field or not.

We’ve accepted personal assistants on other devices, so why not ‘Hey Starkey: Front Focus!’ or ‘Hey Phonak: Restaurant Table!’? They’ve already written the speech output into the programs; why not use all that processing power and pattern matching to identify the incoming commands and tweak the AI on that basis?
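
As a sketch of how simple the dispatch side of such a feature could be (the command phrases, program names, and settings below are invented for illustration; the hard part - recognizing the speech - is assumed to happen upstream):

```python
# Hypothetical mapping from recognized voice commands to hearing aid settings.
# Phrases and settings are invented for illustration; real products differ.
PROGRAMS = {
    "front focus":      {"beam": "narrow_front", "noise_reduction": "max"},
    "restaurant table": {"beam": "adaptive", "noise_reduction": "high"},
    "music":            {"beam": "omni", "noise_reduction": "off"},
}

def handle_command(transcript: str) -> dict | None:
    """Return the settings matching a recognized phrase, if any."""
    phrase = transcript.lower().strip()
    for name, settings in PROGRAMS.items():
        if name in phrase:
            return settings
    return None

print(handle_command("Hey Phonak: Restaurant Table!"))
# -> {'beam': 'adaptive', 'noise_reduction': 'high'}
```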


I don’t think there’s any argument that algorithms are the implementation format/tool in processor-based computing. And sure, recursive algorithms add more cleverness and power to the process. I’m sure there are other tricks as well.

But in the end, an algorithm is still just the tool used to implement ideas. An algorithm can be implemented in a brute-force way (computing power can accommodate this to a certain degree), and an algorithm can be clever by using tricks, but the power is not in the algorithm; the power is in the idea the algorithm is used to implement.

For example, without the advances of mathematics, you can use a computer algorithm in a brute-force way to calculate the area of a surface. But I bet you can write an algorithm that implements the concepts of calculus to calculate that same area in far, far fewer lines of code. And fewer lines of code also mean faster computing time.
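
A toy illustration of that point: the area under y = x² between 0 and 1, computed by brute-force summation versus the exact result from calculus (exactly 1/3):

```python
# Toy illustration: brute-force numerical area vs. the calculus result.

def area_brute_force(steps: int = 1_000_000) -> float:
    """Sum a million thin rectangles - lots of work for an approximation."""
    width = 1.0 / steps
    return sum((i * width) ** 2 * width for i in range(steps))

def area_calculus() -> float:
    """The integral of x^2 from 0 to 1 is 1/3: one line, exact, essentially free."""
    return 1.0 / 3.0

print(area_brute_force())  # ~0.3333328 after a million operations
print(area_calculus())     # 0.3333333... immediately
```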

So in that sense, yeah, AI is something different all right. It’s the idea - the science and theory of how to model intelligence, and how to boil it down to something that can be implemented on computers efficiently and effectively. The DNN, as a subset of AI, is a new science of modelling the human neural network mathematically. In this way AI is a whole new thing (the “science”) in itself. The computer-based algorithm is just a tool used to implement the science behind AI; it is not THE SCIENCE of AI.

Therefore, “computing capacity available on chips” is not necessarily a requirement for AI, unless the AI approach is a brute-force one rather than a mathematically grounded one.