Phonak Audéo Sphere

In the video, Dr. Cliff said that the receivers do not yet have MEMS (solid-state silicon speaker) technology, so amplification above 8,000 Hz is not possible.

Can someone explain this in more detail? Who here is missing out for lack of it? Who actually needs it?

He used the words "the receivers do not yet have it". Does that mean it's coming soon, so new receivers should ship with that technology?

Phonak SDS 6.0 receivers are Sensor receivers used by the Audeo P-R Fit and Audeo L-R Fit.

3 Likes

I’m going to assume those are lithium batteries in the Spheres.

Heat is the killer of batteries. Putting them on a charger while they're still warm inside shortens their life.

Just a reminder about battery care.

So, as previously suggested, this isn't "AI", just another set of algorithms? It does appear to be a good set, though.

Peter

This is not correct; Phonak is certainly advertising "Auracast ready", and I've heard directly from my Phonak contact that Bluetooth LE Audio is on the menu.
What's not clear is whether it will arrive in a firmware update or ship enabled from the start. I'm meeting my contact for a beer tomorrow, so I'll ask for more details.

2 Likes
  • The normal human hearing range is 20 Hz - 20,000 Hz; however, hearing aids have for the longest time focused only on amplifying the speech range (125 - 8,000 Hz), and this range is even narrower for higher-powered devices.
  • Recent technological advances in hearing aid receiver design now make it possible to amplify above 8,000 Hz.
  • There are arguments that amplifying higher frequencies is especially important in pediatric fittings, where a child needs to clearly hear the s and th sounds so they can properly learn and reproduce speech. Receivers with an extended high-frequency range are therefore quite important in that regard.
  • There may also be benefits for music perception, for perception of vertical directionality, or if you're fitting hearing aids to dogs and cats :stuck_out_tongue_closed_eyes:
  • All I know is that in my years of fitting hearing aids, there has not been a time where increasing gain in the highest frequency channels has ever sparked joy for the wearer, EXCEPT FOR a young violinist when tuning her music-listening program.
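To put the bandwidth point in rough numbers: a toy Python sketch, assuming the frication energy of /s/ sits roughly in the 4-10 kHz band (a commonly cited ballpark, not a figure from this thread), shows how much of that band a given upper frequency limit can reach.

```python
# Toy illustration: how much of the /s/ frication band a given
# hearing-aid bandwidth can reach. The 4-10 kHz band for /s/ is an
# assumed ballpark figure, used here only for illustration.

S_BAND = (4000, 10000)  # approximate frication energy of /s/, in Hz

def s_band_coverage(upper_limit_hz, s_band=S_BAND):
    """Fraction of the /s/ band that lies below the aid's upper limit."""
    lo, hi = s_band
    covered = min(upper_limit_hz, hi) - lo
    return max(covered, 0) / (hi - lo)

print(s_band_coverage(8000))   # traditional speech-range fitting
print(s_band_coverage(10000))  # extended high-frequency receiver
```

On these assumed numbers, an 8 kHz device reaches only about two thirds of the /s/ band, which is the gist of the pediatric-fitting argument above.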
3 Likes

I’ve got to agree with TheGoatLord here.

You don’t have to be an insider to know that cloud processing won’t be a thing in hearing aids. There are a number of reasons including:

  • the additional battery drain from Wi-Fi based transmission of audio
  • the additional latency (delay) in receiving the signal, processing in the cloud and then redelivering the signal.
  • Whilst manufacturers like Phonak have multi-modal 2.4 GHz antennae which manage different wireless protocols, adding Wi-Fi to the mix creates a whole new technical challenge.

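The latency point in the list above can be sketched with back-of-envelope arithmetic. Every figure here is an illustrative assumption, not a measurement; the ~10 ms comfort limit reflects the common rule of thumb that larger delays between direct and amplified sound become disturbing.

```python
# Back-of-envelope round-trip budget for cloud audio processing.
# All numbers are illustrative assumptions, not measurements.

budget = {
    "wifi_uplink_ms": 10,       # aid -> phone/router -> internet
    "network_to_cloud_ms": 20,  # one-way internet transit
    "cloud_dnn_ms": 5,          # the inference itself
    "network_return_ms": 20,
    "wifi_downlink_ms": 10,
}

total_ms = sum(budget.values())

# Rule of thumb: delays much beyond ~10 ms between direct and
# amplified sound are disturbing for hearing-aid wearers.
COMFORT_LIMIT_MS = 10

print(total_ms, total_ms <= COMFORT_LIMIT_MS)
```

Even with generous assumptions, the round trip lands several times over the comfort limit, which is why cloud processing of the live audio path is a non-starter.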
There certainly is a use case for cloud-based AI processing - e.g. Signia uses a cloud-based DNN for their tech-support app - but in that scenario it's fine because it is not delivering audio, so a delay in the response doesn't matter.

I’m not saying it’s impossible for subscription-based features to emerge, especially considering that for years the difference between top-level and entry-level hearing aids has been software-based.
However, AI processing simply has too much power consumption and battery drain for it to be a reality. Perhaps as smartphones become more powerful, a hearing aid could offload processing there, but that would still be loaded with potential issues, which is why edge-based AI processing will always be preferred.
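The power-consumption point is simple division. A rough battery-life sketch, where the battery capacity and average power draws are hypothetical round numbers chosen only to show the shape of the trade-off:

```python
# Rough battery-life arithmetic for adding continuous wireless streaming
# on top of ordinary DSP. Capacity and draw figures are hypothetical
# round numbers for illustration, not specs of any real device.

BATTERY_MWH = 100.0  # hypothetical rechargeable-aid battery energy

def runtime_hours(draw_mw):
    """Hours of use at a constant average power draw (mW)."""
    return BATTERY_MWH / draw_mw

baseline = runtime_hours(5.0)          # ordinary DSP-only operation
with_radio = runtime_hours(5.0 + 3.0)  # add continuous 2.4 GHz streaming

print(round(baseline, 1), round(with_radio, 1))
```

On these made-up figures, a modest extra radio draw cuts a full day of wear down to well under one, before any cloud inference has even happened.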

I certainly appreciate it, being a lead guitar player, often playing in the higher registers. On my many experiments of self-programming my Phonaks, I did a new install, and went to a band rehearsal.

What I didn’t realise was that Target had enabled SoundRecover2 by default. The sound coming out of my amplifier was absolutely awful, to the point that I asked the others, who said it sounded fine. I removed my HAs for the remainder of the rehearsal!

My loss is around 70-80dB in the higher frequencies, so I’ve found I don’t need SoundRecover2 (yet). However, my Naida P30s don’t allow much gain around the 4K mark, which I find confusing.

Speech isn’t everything to me in a hearing aid.
Peter

1 Like

There are tinnitus sufferers who need high frequencies (beyond 8 kHz) to mask it. Being one of them, I went with Signias, which offer ~10-12 kHz.
8 kHz is just lazy and inconsiderate at this point.

I’ve been speculating that if Bluetooth LE (or future improvements) achieves sufficiently low latency, the processing power in one’s phone - many times greater than the HA chip’s - could be harnessed in ways far ahead of what is currently feasible.

1 Like

It changes a bit with the dedicated NPUs in recent devices, and I’m certain that Phonak is cooking up remote mics with AI and NPUs inside.
You basically need a phone with a decent array of mics and NPU sound processing. Sound-processing AIs aren’t particularly heavy, and latency isn’t an issue when properly done and transmitted over BLE Audio.
Actually, I’m expecting something like this to pop up in the Pixel range soon, since it has an NPU, offline LMMs, and Google’s accessibility services like Live Transcribe and Sound Amplifier.

2 Likes

You’ve overlooked the biggest issue - for the feature to work you need a phone.

  • It needs to be within Bluetooth range
  • It needs to have sufficient charge
  • It needs to be able to prioritize the hearing aid DSP
  • Phone based processing needs app upkeep

I’m not saying it’s not doable, but having the chip on the hearing aid is far preferable, as Phonak (or another manufacturer) can control all of the variables and doesn’t have to trust that the phone will work properly.
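The failure modes listed above all point the same way: any phone-offload design would need an always-available on-device fallback. A hypothetical sketch of that logic, where the function and condition names are invented for illustration:

```python
# Hypothetical sketch of the fallback logic a phone-offload design
# would need: each failure mode in the list above (out of range, flat
# battery, app not running) drops processing back to the on-aid DSP.
# Names and conditions are invented for illustration.

def choose_processing_path(phone_in_range, phone_charged, app_running):
    if phone_in_range and phone_charged and app_running:
        return "phone_npu"   # offload heavy processing to the phone
    return "onboard_dsp"     # always-available on-aid processing

print(choose_processing_path(True, True, True))   # happy path
print(choose_processing_path(True, False, True))  # phone battery flat
```

Since the on-device path has to exist anyway and must be good enough to wear all day, the offloaded path can only ever be an enhancement, not the core product.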

2 Likes

Thanks, I’ll look at the details there. Why is a vented dome only mentioned for SDS 6.0? And what about closed domes, as well as the Power dome?

What I meant is that we already have the hardware, in the form of recent smartphones. There are actually generic white-label smartphones that get repurposed as feature devices, music players, and so on. They’re basically just compute units with a screen. An assistive AI mic would be just one of the possible variants/implementations.
Which just reminded me that I’ve seen one project recently doing just that.

Thank you, you explained everything to me in detail :). That means this option is unnecessary for me, because I won’t be able to use it.

Are the hearing aids the same, I wonder, if you compare their capabilities for severe hearing loss? I am currently using the Phonak Audeo Paradise, and for me it is close to the upper limit in the high frequencies. Are the Paradise and Sphere equal in terms of sound amplification?

Yes, 105 dB is the max it’ll go to. All Audeo models from Phonak have a limit of 105 dB.

2 Likes

Wouldn’t think so. It’s a round trip, so maybe 50 ms would be the best you could hope for, probably more. I think LC3+ (not included in the spec) might offer lower latency with a bit of tweaking.
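A rough way to arrive at a figure like that: the LC3 codec's 10 ms frame mode carries a 2.5 ms encoder lookahead, and the rest of the budget (radio/buffering and far-end processing) is assumed here purely for illustration.

```python
# Rough one-way and round-trip latency for an LC3-coded link.
# Frame length and lookahead follow LC3's 10 ms mode; the transport
# and processing overheads are assumptions for illustration only.

FRAME_MS = 10.0      # LC3 frame duration (10 ms mode)
LOOKAHEAD_MS = 2.5   # LC3 encoder lookahead
TRANSPORT_MS = 10.0  # assumed radio/buffering overhead per direction
PROCESS_MS = 5.0     # assumed processing time on the far end

one_way = FRAME_MS + LOOKAHEAD_MS + TRANSPORT_MS
round_trip = 2 * one_way + PROCESS_MS
print(one_way, round_trip)
```

With these assumed overheads the round trip lands right around 50 ms, and any extra buffering in practice only pushes it higher.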

Maybe UWB in the future?

I popped into the audiologist’s next door to my daughter’s dental appointment and asked the receptionist about the Sphere. One of the audiologists heard me and came out to have a chat. Seriously, she’d have been less excited about the Second Coming. They’d been to a Phonak information session/demo and had come out convinced.

I asked about pricing and her guess was $13,000. My Philips 9040s at Costco cost me $2,000 Australian. That Kool-Aid is going to take a bit of swallowing.

2 Likes

As you can see from my audiogram, I absolutely need SoundRecover2.

I was wondering (perhaps foolishly): since the Sphere with the UP receiver has a frequency range up to 8 kHz while my P90s reach 6 kHz, is there a way to disable SoundRecover2? Or, since both the Sphere and the P90 have a limit of 105 dB, will I always need SoundRecover2?
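For anyone unfamiliar with what SoundRecover2 actually does: it is Phonak's adaptive frequency-lowering scheme, remapping energy from above a cutoff down into the wearer's usable range. A toy sketch of the basic idea of frequency compression, where the cutoff, ratio, and upper limit are invented numbers, not Phonak's actual parameters:

```python
# Toy illustration of frequency compression, the general idea behind
# SoundRecover2: content above a cutoff is squeezed into the audible
# range below the device's upper limit. The cutoff, ratio, and limit
# here are invented numbers, not Phonak's actual parameters.

CUTOFF_HZ = 3000.0   # frequencies at or below this pass through unchanged
UPPER_HZ = 6000.0    # device's usable upper limit (e.g. a 6 kHz receiver)
RATIO = 3.0          # compression ratio applied above the cutoff

def compress(f_hz):
    if f_hz <= CUTOFF_HZ:
        return f_hz
    return min(CUTOFF_HZ + (f_hz - CUTOFF_HZ) / RATIO, UPPER_HZ)

for f in (2000, 4000, 8000):
    print(f, "->", round(compress(f), 1))
```

The practical upshot for the question above: whether you still need it on a wider-bandwidth device depends on whether your audible range now covers the frequencies you care about, which is a fitting decision rather than a hardware one.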

SoundRecover2 is a checkbox in Target. How to programme it is a mystery to me, but I’d certainly say it can be disabled.

1 Like