Artificial Intelligence AMA with Senior Phonak Experts

This week’s virtual Ask Me Anything (AMA) with Phonak and HearAdvisor will take place on Tuesday the 24th of June at 10AM CDT. Please mark your calendars and be ready with any questions you have. You will have REAL-TIME access to the following individuals:

Christine Jones - VP Audiology Global Audiological Care (Sonova) & Sr Dir Marketing US (Phonak)

Stefan Launer - VP Audiology & Health Innovation (Sonova)

Steve Taddei - Co-Founder and Lab Director (HearAdvisor) & Senior Contributor (HearingTracker)


Here’s how the AMA will work:

  • Participation: The AMA will be hosted right here on the forum. Simply log in at the scheduled time to join the conversation. The thread will be unlocked tomorrow at 10AM CDT.
  • Asking Questions: When the AMA goes live, you’ll be able to submit your questions directly in the discussion thread. Feel free to ask about hearing technology, audiology insights, product innovations, or anything else you’re curious about.
  • Real-Time Responses: Christine, Stefan, and Steve will be answering questions in real-time during the AMA window. Keep refreshing the page to see new responses as they come in.
  • Engagement: You can also like posts, reply to others’ questions, and engage in side discussions. Your interaction helps create a dynamic and informative session for everyone involved.

Let me know if you have any questions in the meantime!

P.S. This thread has been cleared out to make way for the AMA. If you’re looking for your previous contribution to the planning thread, see this history document (841.3 KB)

5 Likes

Hi everyone!

We’re excited to welcome you to a special Ask Me Anything (AMA) focused on artificial intelligence in hearing aids.

Joining us today are two senior leaders from Phonak and Sonova:

Christine Jones, AuD – Vice President of Audiology at Phonak US
Stefan Launer, PhD – Vice President of Audiology and Health Innovation at Sonova AG

Both bring decades of experience in hearing science and innovation, and they’ve played pivotal roles in developing Sphere, Phonak’s latest hearing aid platform, which uses artificial intelligence.

Whether you’re curious about how AI is being used in hearing aids, how neural networks work, or what makes the dual-chip design of Sphere unique, this is your chance to ask the experts directly.

We’ll be kicking things off with some starter questions, but we encourage you to jump in and ask anything—from the science behind the tech to how it might impact your everyday hearing experience.

  • What is a deep neural network (DNN), and how does it apply to hearing aids?
  • How is AI currently being implemented in hearing aids?
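To make the first starter question concrete: at its core, a deep neural network for hearing aids maps a frame of noisy audio features to per-band gains that suppress noise. The sketch below is purely illustrative (random, untrained weights and made-up layer sizes; it is not Phonak's or any vendor's actual model), but it shows the shape of the computation a DNN denoiser performs on every audio frame.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Toy layer sizes: 64 spectral bands in, one hidden layer, 64 gains out.
# A trained model would have learned weights; these are random stand-ins.
W1 = rng.standard_normal((32, 64)) * 0.1
b1 = np.zeros(32)
W2 = rng.standard_normal((64, 32)) * 0.1
b2 = np.zeros(64)

def denoise_frame(noisy_spectrum):
    """Map one frame of a noisy spectrum to per-band gains in (0, 1)."""
    h = relu(W1 @ noisy_spectrum + b1)   # hidden features
    gain = sigmoid(W2 @ h + b2)          # suppression mask, one gain per band
    return gain * noisy_spectrum         # attenuate noise-dominated bands

frame = np.abs(rng.standard_normal(64))  # stand-in for one audio frame's spectrum
cleaned = denoise_frame(frame)
print(cleaned.shape)  # (64,)
```

In a real hearing aid this forward pass has to run on every frame, many hundreds of times per second, within a tight power budget — which is why chip performance matters as much as the model itself.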

Drop your questions below and we’ll begin responding shortly! Thanks for joining us—we’re looking forward to a great conversation!

– Steve Taddei, AuD (Host & Moderator)

2 Likes

Thanks for doing this - it’s great to see a manufacturer directly engaging with end customers!

1 Like

Why don’t you have any NAIDA UP CROS Units in the BTE configuration available? You discontinued the Paradise right before I was ready to purchase, and there are no replacements. My Phonak Naida UP B90 plus UP CROS BTE (not RIC) just died, and there isn’t an equivalent buyable model yet. When is this coming out? You are leaving me in limbo here. Is anything like this coming out for the Infinio Sphere?

1 Like

First question from me:

According to my cursory understanding of AI, there are three elements that have an effect on the quality of the resulting AI:

  • The quality and amount of the input training data
  • The quality of the model derived from the training data
  • The performance of the AI chip on the hearing aid. Better chips can run more complex models.

Looking forward, where do you see the most potential for future improvements? Where does Phonak invest the most resources internally?

2 Likes

Welcome, thanks for joining us.

I second @wtolkien
Thank you all for doing this on this forum and I hope we will see you all again.
:blush::smiling_face_with_three_hearts::heart_eyes::star_struck::kissing_heart:

Can you discuss the process size currently used in the Sphere chips, and whether you see shrinking process size as having much potential to improve performance?

4 Likes

I have a few questions:

  • We hear a lot about AI from all of the hearing aid companies, and it is hard to know what is real and what is marketing spin. Are competitors doing the same thing as Phonak?
  • Why do some people seem to get no benefit from the speech-in-loud-noise function while others do (even when using closed domes)? Is the shortcoming that audiologists don’t know how to use it? (Maybe this is a question for Christine: training of audiologists?)
  • The size and battery life of the Sphere seem to be an issue for some people. When can that be improved?

1 Like

I agree with @hearingaidobserver1’s comments.

Phonak is known for many good features, but battery life is not one of them. The Spheres perform better thanks to a larger battery, which makes them much bigger than competing HAs. For me, as a glasses wearer, the large size is a dealbreaker. Are you going to address this?

For example, SWORD in the Marvels was built on 40 nm technology.

1 Like

Thank you for your interest in CROS. We do have a new Audeo Infinio CROS. This product is everything you loved about Phonak CROS, now powered by the Infinio platform, including universal connectivity, increased battery life, and AutoSense OS 3.0.

  • Dr. Christine
1 Like

We are having some technical issues, so I will be following-up for our guests. Thanks for your patience.

1 Like

Can the DEEPSONIC chip’s processing power be expressed in TOPS?
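For context on the question above: TOPS (tera-operations per second) is typically a back-of-envelope figure derived from the number of parallel multiply-accumulate (MAC) units and the clock rate. The numbers below are purely illustrative assumptions, not DEEPSONIC specifications.

```python
# Back-of-envelope TOPS estimate for a hypothetical NPU.
# TOPS = (ops per cycle) * (clock in Hz) / 1e12.
mac_units = 256          # parallel multiply-accumulate units (assumed)
ops_per_mac = 2          # one multiply + one add per MAC per cycle
clock_hz = 500e6         # 500 MHz clock (assumed)

tops = mac_units * ops_per_mac * clock_hz / 1e12
print(f"{tops:.3f} TOPS")  # 0.256 TOPS
```

Peak TOPS figures like this assume every MAC unit is busy every cycle; sustained throughput on a real workload is usually lower.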

Thank you! You only answered half of my question. I need the BTE version, not RIC. Please clarify when the Naida UP (Ultra Power) BTE version with a UP CROS unit is coming out. Thank you.

1 Like

I’ve a pair of Phonak Sphere HAs and haven’t gained much benefit for understanding speech in background noise. Working with my audiologist, I’ve changed from open to closed domes and have tried various settings programmed using real ear measurement. Having had tinnitus for 40+ years, I’d be interested to know if Phonak’s AI system can help with understanding speech in noise. It’d be wonderful if it could help to a similar extent to that demonstrated with environmental background noise in this Phonak Spheric Speech simulation: https://youtu.be/jcd3jKE9Oxs I’d be extremely grateful for any advice that a Phonak expert could provide.

AI investment: Sonova is investing in and driving work across various domains of AI research and development. Success always requires excellent AI models and algorithms, and for them to perform well we need sufficient, good training data as well as the computational power to execute the models in a hearing aid. So future innovation will continue along all of these lines. Specifically, we are working on applying AI to more acoustic scenes and listening environments than we cover today, and, technically, on making the computation more efficient so it consumes less power and memory.

  • Dr. Launer

But will a Naida version of Sphere be released?

Rechargeable battery capacity is an issue for a lot of people, and it seems like Sphere has met most people’s needs by going large. How much complexity would it add to offer different-sized models with different-sized rechargeable batteries, depending on a person’s needs/wants for longevity vs. compactness?