I recently came across a TED Talk by Jason Rugolo, founder of the startup iyO, and I wanted to share some thoughts and see what this community makes of it.
In the talk, Rugolo explains how his team (with roots in Google X) developed the iyO ONE — a screenless, AI-powered audio computer worn in the ear. What struck me was their focus on making the device comfortable enough to wear all day, and ensuring that the sound reproduction was as natural as possible, rather than artificial or compressed, as many users experience with conventional hearing aids.
One demo showed how the device can lock onto a single voice in a noisy environment (e.g. a crowded restaurant) and translate that person's speech in real time, directly into your ears. This kind of speaker-targeted hearing and live translation feels like something straight out of science fiction, but iyO is now taking preorders for the first version of their product.
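(For the curious, here's how I imagine such a pipeline hanging together. To be clear: this is purely my own sketch, not anything iyO has published, and every function below is a hypothetical placeholder standing in for a real model.)

```python
# Hypothetical sketch of a speaker-targeted live-translation pipeline.
# None of this reflects iyO's actual implementation; the stage functions
# are placeholders for real models.
import numpy as np

def isolate_target_speaker(mixture: np.ndarray, enrollment: np.ndarray) -> np.ndarray:
    """Placeholder: a target-speaker extraction model would use a short
    'enrollment' clip of the chosen voice to mask everything else
    out of the noisy mixture."""
    raise NotImplementedError

def transcribe(speech: np.ndarray) -> str:
    """Placeholder for streaming speech-to-text."""
    raise NotImplementedError

def translate(text: str, target_lang: str) -> str:
    """Placeholder for machine translation."""
    raise NotImplementedError

def synthesize(text: str) -> np.ndarray:
    """Placeholder for text-to-speech, played back into the ear."""
    raise NotImplementedError

def translate_locked_voice(mixture, enrollment, target_lang="en"):
    """Chain the stages: isolate -> transcribe -> translate -> resynthesize."""
    clean = isolate_target_speaker(mixture, enrollment)
    text = transcribe(clean)
    return synthesize(translate(text, target_lang))
```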
They don’t seem to market it explicitly as a medical hearing aid, but it certainly seems to offer enhanced hearing as a core feature, alongside language tools and ambient computing. It’s a fascinating hybrid between assistive tech and consumer electronics.
What do audiologists or long-time HA users here think about this direction?
Do you see this as a possible competitor or complement to traditional hearing aids?
Any thoughts on potential risks, limitations, or hype vs reality?
I think I first posted a link to Project Wolverine, and that spun out of Google to iyO. It kind of looks like a really common scenario these days: develop a product to the point where it looks interesting to a bigger fish and hope for that big payday. You'd think we would have seen some product by now, though? It's been a while. As for something with AI that sits in your ear? Sure, why not. I'd want to see something better than a slick TED talk before I got too excited, though.
Fair point on the TED Talk being from last year. But the legal fight isn’t just about the trademark — it’s also tied to OpenAI’s apparent plans to build a very similar product. Rumors are that OpenAI was trying to enter the same “AI-in-the-ear” space after iyO pitched their device to OpenAI, which adds another layer to this story.
And with shipments of the iyO ONE set for September this year, I wouldn’t say there’s nothing tangible happening.
What really caught my attention in Rugolo’s talk was how, in trying to create a great hearing experience, they had to deeply study hearing science — and ended up building exactly the kind of tech I wish traditional hearing aid companies were working on. Natural sound, voice isolation, real-time translation — all things many of us have been hoping for.
What do you think? Are these features likely to show up in hearing aids anytime soon?
Yeah, it does follow the typical startup-to-big-fish path — and I get the skepticism. It’s fair to expect more than a TED Talk by now, though they are still aiming for September shipping, so we’ll see.
What interests me most is how their approach came from solving for natural sound and comfort first, not just amplifying hearing. Even if iyO doesn’t deliver, I hope it pushes the hearing aid industry toward features like voice isolation, real-time translation, and better all-day usability.
Would be great to hear if any of that is actually in the pipeline from the major HA brands.
If you're not a bot, or are just trying to create traffic/interest in a third-party idea/launch: a cursory look at the pages on this forum would tell you there are several conventional hearing aid firms and many other people training AI models to do just this.
Just curious, and I haven't found other related topics yet, but I'll have a look then. It would have helped to link one of those posts instead of accusing me of being a bot, mate.
I don’t think you are a bot, it’s just that I no longer get excited when a Silicon Valley startup announces a groundbreaking new product.
This company will probably come out with something, but it won’t be nearly as good as the TED talk.
It will also flop because a) nobody wants to walk around with heavy disks sticking out from their ears and b) nobody wants to exclusively use speech as the human-machine interface. Even if it works, there is no privacy.
I’m skeptical and interested at the same time. Yes, I’d wear one for short periods of time and in certain situations if it worked well. ‘Hey, ear thing, mute anything that sounds like Donald Trump’. Wouldn’t that be nice. Sorry. I’ll stop now.
Sorry for the skepticism, but people pop up on here fairly often claiming that they’ve reinvented the wheel without apparently having undertaken even a cursory search of the current market. The ‘holy grail’ of the industry (for the last 30 years plus) is basically to put AI into a small package to enable the lifting of speech from background noise, mate.
If @rfv wants to talk about iyO ONE, I say let 'em talk about iyO ONE. Why not? We spend endless hours poring over press releases from Demant, Sonova and the rest of them, with their marketing fluff and unsubstantiated reports from their own testing claiming some weirdly specific percentage improvement in some obscure metric.
If it's the video I saw, they're saying that you can (for example) tell the device to mute the loud table to your right. That goes way beyond the DNN denoising we've seen. It may turn out to be wild exaggeration, like so many claims from the hearing industry. That remains to be seen, but in the meantime it's germane to our common interest.
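For context on what "the DNN denoising we've seen" improved on: the classical baseline is plain spectral gating, which can suppress a steady noise floor but has no concept of a voice, let alone a table to your right. A toy numpy/scipy version of that baseline (my own illustration, assuming the first half-second of the recording is noise only):

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_gate(audio, sr=16000, noise_secs=0.5):
    """Toy spectral-subtraction denoiser: estimate a noise spectrum from
    the first `noise_secs` of audio (assumed speech-free), then attenuate
    time-frequency bins that don't rise above it. Illustrative only."""
    f, t, spec = stft(audio, fs=sr, nperseg=512)
    mag, phase = np.abs(spec), np.angle(spec)

    # Average noise magnitude per frequency bin over the leading frames.
    noise_frames = int(noise_secs * sr / (512 // 2))
    noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)

    # Soft mask: keep energy above the noise floor, floor the rest.
    mask = np.clip((mag - 1.5 * noise_mag) / np.maximum(mag, 1e-8), 0.1, 1.0)
    _, clean = istft(mask * mag * np.exp(1j * phase), fs=sr, nperseg=512)
    return clean
```

Picking out one talker by identity or direction needs learned separation models (and for "the loud table to your right", multiple microphones plus beamforming) on top of this, which is exactly where the iyO claims go beyond anything shipping today.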
If they’re claiming to ship in September, you’d think there would be some units in people’s ears now?
From what the OP wrote, it sounds like this company is trying to do something different than traditional hearing aid manufacturers. Instead of amplifying someone’s voice, the AI will attempt to understand it and then generate its own (clear) version of what was said for you. I think it’s an interesting idea, although I’m sure that even if this company succeeds it will take several generations of products before it works well and is affordable.
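For anyone wanting to feel how different that is from amplification, here's a crude desktop proof-of-concept of the "understand, then re-speak" loop using two off-the-shelf Python libraries (SpeechRecognition and pyttsx3). It's batch rather than real-time, the Google recognizer needs an internet connection, and the filename is made up, but it shows the shape of the pipeline:

```python
# Crude proof-of-concept of "understand, then re-speak":
# pip install SpeechRecognition pyttsx3
# A real device would do this streaming, on-device, with far better models.
import speech_recognition as sr
import pyttsx3

def regenerate_speech(wav_path: str) -> None:
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)      # load the recorded speech

    text = recognizer.recognize_google(audio)  # speech -> text (cloud ASR)

    engine = pyttsx3.init()                    # text -> clean synthetic voice
    engine.setProperty("rate", 160)            # slow it down a touch
    engine.say(text)
    engine.runAndWait()

regenerate_speech("recorded_conversation.wav")  # hypothetical input file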
It’s the Babelfish idea in the long version; especially the simultaneous translation. I get it.
Not sure having an AI voice delivering speech at you for normal conversations is actually beneficial over extracting real voices from the background though.
My ability to understand someone's speech varies widely depending on factors such as pitch, loudness, clarity, speed and accent. I can see how a generated voice that has been customized for my hearing loss could improve my speech understanding. The downside will be that everybody will sound the same.
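Tailoring sound to an audiogram is roughly what fitting formulas already do with gain; a generated voice could bake it in at synthesis time. A toy illustration of the idea (the numbers and the crude "half-gain" shaping are mine, not any real prescription like NAL-NL2):

```python
import numpy as np

def shape_for_audiogram(audio, sr, audiogram):
    """Toy illustration: boost frequency bands in proportion to hearing
    loss from an audiogram {freq_hz: loss_dB}. Real fitting formulas are
    far more sophisticated; this just shows the idea of tailoring a
    (possibly synthetic) voice to one listener."""
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), 1 / sr)

    # Interpolate the audiogram across the spectrum and apply half of
    # the measured loss as gain (the classic half-gain rule of thumb).
    points = sorted(audiogram.items())
    loss_db = np.interp(freqs, [p[0] for p in points], [p[1] for p in points])
    gain = 10 ** ((0.5 * loss_db) / 20)
    return np.fft.irfft(spectrum * gain, n=len(audio))

# Example: a typical high-frequency ("ski-slope") loss.
audiogram = {250: 10, 500: 15, 1000: 25, 2000: 45, 4000: 60, 8000: 70}
```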
Thanks. That was the part I was intrigued by: how this AI company "solved" the issues we have with hearing aids as just a byproduct of their process of developing "the next big thing". I don't care about iyO and their device in particular; it was rather the developments they showcased in their presentation, which to me looked very promising, suggesting that in the future we might be able to have close-to-perfect hearing and understanding again.
Thanks @d_Wooluf. "That goes way beyond the DNN denoising we've seen." ← this is what struck me the most… it painted a vision of a (hearing) future I would be excited about. To be able to hear again, and maybe even better than ever.
Thanks for sharing @Bimodal_user! Let's see how fast we get those features. In my case, I don't need directions in my hearing aids etc. What I want is speech clarity. And what I saw in the iyO presentation and in this Phonak video seems promising: we may reach new levels of understanding voices very soon.