Ah yes, we should probably distinguish between “indirect” and “direct” AI…
The former would mean using AI to improve things like when and how AutoSense switches programs, and to program smarter, more flexible filtering of audio inputs.
“Direct” (real-time) AI would be much fancier (and computing-heavy): think of sorting out multiple conversations at a cocktail party and, say, projecting separate, colored subtitles onto the smart glasses you wear. The latter requires, imho, a real-world understanding of what is being said (this, I still suspect, is a brain activity, not a proximate sensory one). With those applications we are also in the realm of Musk’s brain-implanted chips…
You can make an analogy with how AI improved chess- and Go-playing software. In the early days this was written, step by step, by human programmers. These days the playing strength comes from neural networks trained through trial-and-error self-play; the result is (far) superior, but nobody can fully explain how it makes its decisions. When such software runs, you can argue whether AI is (still) at work or not.