Phonak Audéo Sphere

When you mention using AI noise reduction, do you have to turn it on manually, or can you let it decide automatically?

Thanks for the feedback! I’ll be getting my Spheres to trial in about 2 weeks, too.

I guess I’m a little more skeptical about how rapidly this is going to develop. I think the main way to deal with the power issue is going to be moving to a smaller process size. Hearing aid manufacturers have tended to use much larger process sizes than smartphones, presumably because of cost. Finding process-size info on hearing aids is challenging, but Phonak broke ground with their SWORD chip when they went down to 28 nm from 65 nm. At the time, phone process sizes were around 5–7 nm. From what I gather, it is tremendously expensive to develop chips on very small processes. Smartphone manufacturers deal in enough volume to make it profitable; I’m not sure about HA manufacturers. Hope I’m wrong.

3 Likes

I am guessing the accelerometer is key for the algorithms to guess/anticipate that you are trying to listen to a different target and to adjust who or what direction to amplify. I don’t think they’ll be dumping that; they’d rather keep improving how the accelerometer is used, otherwise they’re putting all their eggs into battery life & streaming. You might as well just get disposable-battery hearing aids at that point, because I think the algorithms would lose their efficacy to the point that they aren’t useful.

So I see there is a 3-hour AI limit, which is probably fine, since we likely don’t find ourselves in a very loud environment requiring good speech clarity for much more than that.

Just wondering how much battery life this hearing aid has left after you use up those 3 hours? Let’s assume the usual autosensing & listening with the occasional loud environment, minus streaming.

Now how about you use 3 hours of AI, and you’re streaming a lot. How much can you stream before the battery dies?

Those are the big questions for me. I don’t mind charging every night; I’m in the habit of charging my phone and ComPilot daily anyway.
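Purely to frame the question, here is a back-of-the-envelope sketch. Every number in it is a made-up placeholder (the capacity and current-draw figures are not Phonak specs); it only shows how the remaining streaming time could be worked out once real figures are known.

```python
# Back-of-the-envelope battery budget. All figures are invented placeholders,
# NOT Phonak specs -- they only illustrate the arithmetic behind the question.

BATTERY_MAH = 60.0       # assumed usable capacity of the rechargeable cell
BASE_MA = 3.0            # assumed average draw in normal AutoSense use
AI_EXTRA_MA = 6.0        # assumed extra draw while the AI (Sphere) mode runs
STREAM_EXTRA_MA = 2.0    # assumed extra draw while streaming audio

def remaining_stream_hours(ai_hours: float, base_hours: float) -> float:
    """Hours of combined listening + streaming left after the given use."""
    used_mah = ai_hours * (BASE_MA + AI_EXTRA_MA) + base_hours * BASE_MA
    left_mah = max(BATTERY_MAH - used_mah, 0.0)
    return left_mah / (BASE_MA + STREAM_EXTRA_MA)

# e.g. 3 h of AI plus 9 h of ordinary automatic use, then stream what is left:
print(round(remaining_stream_hours(ai_hours=3, base_hours=9), 1))  # -> 1.2
```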

Somewhere in this thread it was mentioned that you don’t have to stick to the 3-hour limit, and I think the battery life was said to be about 12 hours.

I heard that Automatic Spheric Speech Clarity kicks in when the noise level is > 69 dB.

I am disappointed that Phonak did not publish that kind of information about when each particular program in AutoSense OS is switched on.

See the other two topics:

and:

3 Likes

That’s very helpful, thanks.

I’ve never used myPhonak app, I don’t have a new enough HA.

I am wondering if the app allows me to enable/disable Sphere by tapping the button on the HAs 2 or 3 times, or something like that? AutoSense works well enough for me in 80–90% of noisy situations, so I might just use the AI selectively when AutoSense isn’t sufficient.

I’ll know in 2 weeks! Haha

How do you mean? The Lumity speech-in-loud-noise program also kicks in at 69 dB at either ear. I think on the Paradise it had to be 69 dB in both ears, and back on the Venture Audéo I think it was higher, at 73 dB, IIRC.
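For what it’s worth, that either-ear vs. both-ears distinction is easy to write down as a rule. This is just a toy sketch of the logic as described in this thread: the 69 dB threshold is the number mentioned here, while the function and parameter names are my own invention, not Phonak’s actual AutoSense implementation.

```python
# Toy sketch of the switching rule discussed above. The 69 dB threshold comes
# from this thread; everything else is an illustrative assumption.

NOISE_THRESHOLD_DB = 69.0

def loud_noise_program_engages(left_db: float, right_db: float,
                               require_both_ears: bool = False) -> bool:
    """Would the speech-in-loud-noise / Spheric program switch in?"""
    if require_both_ears:
        # Paradise-style rule, as recalled above: both ears must exceed it
        return left_db > NOISE_THRESHOLD_DB and right_db > NOISE_THRESHOLD_DB
    # Lumity-style rule, as recalled above: either ear is enough
    return left_db > NOISE_THRESHOLD_DB or right_db > NOISE_THRESHOLD_DB

print(loud_noise_program_engages(71.0, 64.0))                          # True
print(loud_noise_program_engages(71.0, 64.0, require_both_ears=True))  # False
```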

Is anyone wearing these devices?

Dr. Cliff says there is a shortage of the Phonak Sphere Infinio.

This is true, but the good news is that smartphone manufacturers’ demand for leading-edge nodes is what makes it (eventually) possible for HA manufacturers to gain access to smaller process sizes. In other words, once TSMC spends tens of billions of dollars to build a 5 nm chip fab, they will keep that fab running as long as possible, even after smartphones have moved on to smaller processes.

I don’t know what the Deepsonic chip is in terms of process size, but I expect that HA manufacturers will continue to benefit from process shrinks, even if they are 5+ years behind what smartphones are using.

3 Likes

I’m guessing that in 5 years or so, the 3-hour limit for the Deepsonic chip will appear laughable. I think the big jump for AI in hearing aids will likely come when we can have the AI chip running 100% of the time. But that said, most HAs work pretty well in quiet environments right now, so limiting the AI processing to only very loud situations is probably going to give people at least a significant portion of the benefit that we’d see if AI were running 100% of the time.

2 Likes

Has anybody tested the sphere mode while in a car?

For me, the car is the most challenging environment, as I always hear a lot of noise at high speeds and voices get masked by the engine/wheels/wind.

If the Sphere mode is able to isolate voices not only in noisy environments like restaurants but also in cars, it would be a game changer for me.
For example, the Roger On helps me only in restaurants, but it’s not effective in the car…

I assume that in the automatic program there will be no Sphere, but you can set up a second program with Sphere and use that in the car. It should work, but I have not yet tested it.

BTW, this 3 h limit applies only in the automatic program; it’s there to ensure that the hearing aids will run a “whole” day.

Ask @AudioBlip and @pproducts.

What acoustic coupling do you have? If you have a dome or an earmold with a big vent, the efficiency of Roger and the AI decreases, sometimes dramatically.

Ten years ago, I was not aware that the efficiency of Dynamic FM depends on how large the vent diameter in my earmold is. The SmartLink+ was completely useless apart from presenter mode.

AutoSense OS 6.0 does have Spheric Speech Clarity. You can also set it as a manual program.

I read that as “while in the car”

WH

1 Like

I have moulds with a small (1 mm) vent, so I should benefit greatly from Roger On… and I do at restaurants. The car is another story…

I mean that the information should be easily accessible on the Phonak website in PDFs, such as:

"In AutoSense OS, following modes kick in, when:

  1. Calm situation - [conditions with dB numbers]
  2. Speech in Noise - [conditions: noise x dB; speech signal y dB]

etc., etc."

Totally agree. I just got a pair of the Spheres today on a trial, and it seems like the AI mode kicks in once you hit a certain noise threshold. So it’s definitely not on all the time (which made me realize the 3–4 hours you get with it is actually very usable), but when it does kick in, it’s like you have a mic on that person and it’s going straight to your ears. I can’t wait to see how good this gets in the next 5–10 years.

4 Likes

Ah, hm. Given that the automatic program has been trained using AI for generations, would we expect them to have all of those parameters? Or are some of them a bit black box?

2 Likes

Indeed, yeah, you’re right. I completely forgot about training. I had thought of AutoSense OS as a block diagram (a simple "if X… then do Y").

However AI:
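To illustrate that contrast with a toy example (nothing here is Phonak’s code; the features, labels, and thresholds are invented): a hand-written block diagram has fixed thresholds you could document, whereas a trained classifier learns its decision boundaries, so a tidy table of dB conditions may simply not exist for every program.

```python
# Toy contrast only -- invented features, labels and thresholds.

def rule_based_program(noise_db: float, speech_present: bool) -> str:
    """Block-diagram style: fixed, documentable thresholds."""
    if noise_db < 50:
        return "Calm situation"
    if speech_present and noise_db > 69:
        return "Speech in loud noise"
    return "Speech in noise" if speech_present else "Comfort in noise"

# A trained model replaces those hand-written thresholds with learned
# decision boundaries, which is why a published dB condition may not exist.
from sklearn.tree import DecisionTreeClassifier

X = [[45, 0], [55, 1], [72, 1], [75, 0]]          # [noise_db, speech_present]
y = ["Calm situation", "Speech in noise",
     "Speech in loud noise", "Comfort in noise"]   # made-up training labels
model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[71, 1]]))                    # boundary is learned, not published
```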

What acoustic coupling do you have, and how large is the vent?

1 Like