Phonak Audéo Sphere

So the HF3/HF4 are for the customs, and Cerustop for the RICs. Which is better? I dunno, but does it matter?

1 Like

Let me get this straight: this AI thing on the Sphere works for only 3 hours, or you can manually activate it, but then your hearing aids will only last 6 hours?!! That sucks.

Also, their DNN was trained on 22 million sounds (I mean, still better than Oticon’s 12 million); the question then is, what kinds of sounds and what kinds of languages are we talking about?
I mean, can they detect an Irish dialect or South African English?!! Not to mention other languages around the globe.

I suppose these are already baked into the chip, which makes them locked, unless they update the firmware with more DNN data…

I think this tech isn’t ready for prime time; we are being used as guinea pigs, as usual.

1 Like

The AI chip uses quite a bit of power but Phonak has compensated for this by putting in a much larger battery. Apparently, with no use of the AI noise reduction program, the battery lasts about 24 hours so you basically have about 12 hours of regular use + 3 hours of AI noise reduction use = 15 hours per day of battery if you max out the AI noise reduction use at 3 hours per day. I only anticipate using the AI noise reduction a few days a week (mostly on the weekend) so 24 hours of battery a day during the working week is a big improvement over Lumity (if real world use proves this to be true).
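Just as a back-of-the-envelope check on those figures, here is a small Python sketch. The 24-hour capacity and the 12 + 3 = 15 split are taken from the numbers above; the roughly 4x AI draw rate they imply is my own inference, not a Phonak spec.

```python
# Rough power-budget sketch for the Sphere battery figures quoted above.
# Assumes ~24 h of capacity at the regular draw rate; the AI draw
# multiplier is inferred from the 12 + 3 = 15 h figure, not official.

REGULAR_CAPACITY_H = 24.0  # hours of battery at the regular draw rate

def regular_hours_left(ai_hours: float, ai_draw_multiplier: float = 4.0) -> float:
    """Hours of regular use remaining after `ai_hours` of AI noise reduction."""
    return REGULAR_CAPACITY_H - ai_hours * ai_draw_multiplier

# 3 h of AI use leaves ~12 h of regular use, i.e. ~15 h of total wear time,
# which matches the 12 + 3 = 15 arithmetic above.
print(regular_hours_left(3.0))        # 12.0
print(regular_hours_left(3.0) + 3.0)  # 15.0
```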

My audiologist has ordered me a set of Sphere hearing aids for trial, and I have an appointment to have them fitted on Sept 9. I will post comments shortly thereafter.

Jordan

6 Likes

Cool, man, looking forward to your review.

1 Like

Heh, you’re not alone in thinking this. Right now I can’t get that Phonak video out of my head, the one Jim posted earlier on; it’s just too good to be true… but as usual, we await the arrival of reviews from our fellow forum members.

3 Likes

Don’t get me wrong, I absolutely praise their effort in developing tech that enables us, the consumers, to hear better, but for the money and the dashed expectations, this is just a small step, not the giant leap they claim.

It makes me wonder: is this going to be wireless and always connected (IoT)? Meaning, if the chip isn’t going to hold, say, trillions of data points (DNN), then the HA could use the internet to tap into Sonova’s servers (the cloud) to analyze the sounds and send the AI output back to the HA, similar to how some subscription services work nowadays. Wouldn’t that make sense?

But this would open the door to other issues, like subscriptions and Wi-Fi in the HA. Also, where do ethics fit into this mix?

I don’t think this is far-fetched, but I could be wrong.
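Purely to make that imagined cloud round trip concrete, here is a tiny Python sketch. Every number in it (chunk length, round-trip time, server processing time) is a made-up assumption; no HA maker offers anything like this today.

```python
# Entirely hypothetical sketch of the "offload to the cloud" idea above.
# All numbers are guesses; no such service exists.

CHUNK_MS = 10.0        # length of one audio chunk sent up (assumed)
NETWORK_RTT_MS = 40.0  # assumed round trip to a server over the phone's link
CLOUD_DNN_MS = 2.0     # assumed server-side DNN processing time

def cloud_chunk_latency_ms() -> float:
    """Extra delay every chunk would carry in the imagined cloud-offload flow."""
    return NETWORK_RTT_MS + CLOUD_DNN_MS

print(f"each {CHUNK_MS:.0f} ms chunk would come back ~{cloud_chunk_latency_ms():.0f} ms later")
```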

2 Likes

Maybe it is inherently confusing, but there are two new lines of receivers coming out: one line (w/ 3 pins) for the legacy HAs, and another new line (w/ 8 pins) for the Infinios.

WH

1 Like

No, this excludes the MAVs. These are for the legacy HAs (M/P/L) with 3-pin receivers. Going back to the old wax traps.

WH

I heard that Oticon’s DNN 1.0 has 12 million, but Oticon’s DNN 2.0 in the Intents already has 50 million. @Volusiano and @cvkemp surely know more about it.

Wow, 50 million? C’mon, man, I bet the Inuit have more than 50 million words/sounds and whatnot.

I’m waiting for their PDF about the newest SDS 6.0 and its potential. Still somewhat disappointed about the lack of xMEMS technology, but reportedly it is very power-hungry…

1 Like

I think this is a situation similar to the ISTS speech signal in REM. This signal is not in any real language, but it is still useful.

In addition, I would not be surprised if, for DNN training, input in the form of myriad unwanted background noises matters more than the particular language of the speech signal.
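A minimal Python sketch of that idea: for a denoising-style DNN, the training pairs are (speech + noise, clean speech), and covering lots of noise types and mixing levels matters most. Everything here (function name, SNR range, random stand-in audio) is illustrative, not anything from Phonak or Oticon.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a clean speech clip with a noise clip at the requested SNR (in dB)."""
    noise = noise[: len(speech)]
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # Scale noise so that 10*log10(speech_power / scaled_noise_power) == snr_db
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Illustrative training pair: (noisy input, clean target) at a random SNR.
# The language of the speech clip matters less here than covering a wide
# variety of noise types and mixing levels.
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)  # stand-in for a 1 s clean speech clip
noise = rng.standard_normal(16000)   # stand-in for a babble/traffic/etc. clip
noisy = mix_at_snr(speech, noise, snr_db=rng.uniform(-5, 15))
training_pair = (noisy, speech)
```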

Ah, GOT IT! (ooops, and 20-odd more characters to sound OLD, and LONG-WINDED)

1 Like

That’s a load of BS; pretty much all newer Android phones are BLE-A enabled, same for TVs from major manufacturers.

1 Like

All very confusing, and I am not sure I am following the conversation; however, there are MAV receivers (presumably 8-pin) for the Infinios.

The Oticon More DNN 1.0 uses 12 million sound scene samples to train the DNN. Oticon does not reveal specifically how many sound scene samples the Intent DNN 2.0 uses. It just says that the sound samples used for the DNN 2.0 are more diverse than those used in the DNN 1.0.

My personal guess is that Oticon did not train its DNN 2.0 from scratch. They probably took the DNN 1.0 as is, then added more diverse and complex sound scenes to continue training the DNN 1.0 into the DNN 2.0.

Below is what the Intent whitepaper says:

[image: excerpt from the Oticon Intent whitepaper]
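If that guess about continued training is right, the workflow would look like ordinary fine-tuning. Here is a minimal PyTorch sketch under that assumption; the tiny model, random tensors, and file names are placeholders, not anything from Oticon.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Tiny stand-in network; the real DNN 1.0 architecture is not public.
model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 256))

# In the real workflow you would start from the DNN 1.0 weights, e.g.:
# model.load_state_dict(torch.load("dnn1_checkpoint.pt"))  # hypothetical file name

# Stand-in for the expanded, more diverse sound-scene set (random tensors here).
noisy = torch.randn(1024, 256)
clean = torch.randn(1024, 256)
loader = DataLoader(TensorDataset(noisy, clean), batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # small LR: refine, don't restart
loss_fn = nn.MSELoss()

model.train()
for batch_noisy, batch_clean in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(batch_noisy), batch_clean)  # learn to reproduce the clean scene
    loss.backward()
    optimizer.step()

# The continued-training result would be the "DNN 2.0" in this analogy.
torch.save(model.state_dict(), "dnn2_checkpoint.pt")
```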

I appreciate your input, but you are way off the mark; I’m not sure where I said Siri. Also, I suggest you re-read my comment.

Not sure about your statement, unless:
1- You work for every HA manufacturer or have insider information
2- You have a crystal ball and can see the future

In any case, it is an option for HA manufacturers to reflect on.

Not sure why this statement. I will leave it at this, but bear in mind: if you have good things to say, that’s fine; otherwise, refrain from calling people spreaders of false information. AfHf

It is all about what works for the individual. My INTENT1 aids have literally given me my life back. And my INTENT1 aids’ batteries last 22+ hours with up to 5 hours of streaming audiobooks, music, and calls. I have been wearing Oticon aids for 14 years and I am absolutely loving the Oticon sound.

2 Likes

@nope, my career was in systems programming, and I’ve read books on digital hearing aid internals, digital signal processing, and machine learning. Your explanation of how Phonak uses AI seems plausible to me. But as for that YouTube video, I think there’s no way in the world that a non-technical person is going to extrapolate from there to how hearing aids use a DNN, if they understand the video at all.

2 Likes

Your comment isn’t so far removed from Phonak’s marketing vision for around 2030…

Anyway, a properly trained DNN running on an NPU does not need to upload data to, for example, Phonak’s servers.
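That’s the key point: once the weights are trained and shipped in firmware, inference is just local number crunching. A minimal Python sketch of that fully on-device idea (the frame size, stand-in “model”, and random input are illustrative; the real on-chip pipeline is not public):

```python
import numpy as np

FRAME = 64  # samples per processing block; real hearing aids work on very short frames

def denoise_frame(frame: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Stand-in for the on-chip DNN: here just a fixed linear transform."""
    return weights @ frame

# Everything below stays on the device: audio in, audio out, no network anywhere.
rng = np.random.default_rng(1)
weights = rng.standard_normal((FRAME, FRAME)) * 0.1  # "trained" weights shipped in firmware
mic_stream = rng.standard_normal(16000)              # stand-in for microphone input

out = np.empty_like(mic_stream)
for start in range(0, len(mic_stream) - FRAME + 1, FRAME):
    frame = mic_stream[start:start + FRAME]
    out[start:start + FRAME] = denoise_frame(frame, weights)
```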

2 Likes