Phonak Audéo Sphere

When we listen to sound streamed over Bluetooth, it is mostly a clean signal that does not pass through the usual sound processing, so I doubt the AI will do anything there.


Hypothetically, I would put forward an idea. When I listen to music, I usually don't know what they are singing without reading the lyrics. I hear some words but not all; it all depends on how intelligible the voice is when they sing.
I would like an option to listen to music with AI processing, at least at the beginning, to hear what they are singing, and then, once I have learned to pick out the words, turn the option off and listen naturally. The AI processing surely muffles the music and the bass, so the song loses some of its appeal; I mention it only as an example, because what I would really like is to learn to listen to music without needing the lyrics.

Another option: when we listen to YouTube shows, reports, and various analysts, the music is not important to us; the intelligibility of the speech is.

Are you in favor of this, and would it even be possible for Phonak or other manufacturers to let us listen to Bluetooth streaming with AI processing, in order to better understand what is being said or sung?

I think the latest generations of smartphones have their own NPU that could do this without involving the hearing aids in AI processing. For example, the Samsung Galaxy S23 Ultra has an NPU rated at 26 TOPS, and the next generation, the S24 Ultra, reaches 34 TOPS.

Nevertheless, the described built-in HA AI function can be useful when listening to non-streamed music, e.g. outdoors.

1 Like
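
As a signal-level illustration of what such phone-side processing could look like: below is a minimal, hypothetical sketch of cleaning a streamed signal before it is forwarded to the hearing aids. This is plain spectral gating, not Phonak's AI, and the frame size, hop, and gate threshold are illustrative assumptions.

```python
# Minimal sketch of phone-side denoising of a streamed signal before it
# is forwarded to the hearing aids. This is plain spectral gating, NOT
# Phonak's AI; frame size, hop, and gate threshold are illustrative.
import numpy as np

def spectral_gate(audio, frame=1024, hop=512, gate_db=-30.0):
    """Zero out STFT bins whose magnitude falls below a fixed gate."""
    window = np.hanning(frame)
    out = np.zeros(len(audio))
    for start in range(0, len(audio) - frame, hop):
        chunk = audio[start:start + frame] * window
        spec = np.fft.rfft(chunk)
        mag = np.abs(spec)
        gate = 10.0 ** (gate_db / 20.0) * mag.max()
        spec[mag < gate] = 0.0                  # drop low-energy (noisy) bins
        out[start:start + frame] += np.fft.irfft(spec) * window
    return out / 1.5                            # approx. Hann^2 overlap gain

sr = 16_000
t = np.arange(sr) / sr
voice_like = np.sin(2 * np.pi * 220 * t)        # stand-in for a voice
noisy = voice_like + 0.2 * np.random.randn(sr)  # add broadband noise
cleaned = spectral_gate(noisy)
```

A real phone-side denoiser would run a learned model on the NPU instead of a fixed gate, but the pipeline would have the same shape: capture, process, forward over Bluetooth.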

Wow, there will be SDS 6.0. I wonder about the differences between it and SDS 5.0 and 4.0. I am hoping for wider eligibility for MAV (ActiveVent) receivers.

Connectivity: Ultra-responsive ERA™ chip at the heart of Infinio | Audiology Blog (phonakpro.com)

Absolutely, yes, this is a requirement right now for those with severe losses. Plus, I do hope the Bluetooth stability is greatly improved, as it's a bugbear for a lot of people.

You want more occlusion in your molds when streaming, but an open vent, with its feedback risk, when not? This seems like a hard engineering problem.

WH

What? Why would someone with a severe loss want/need a MAV?

3 Likes

Overall, yes. Yesterday I tried blocking the vent with a piece of paper, and indeed the quality of the streaming became excellent, especially in the lower pitches. The streaming remained fully understandable even when I turned the kitchen extractor hood up to the max.

That's because the closed position gives the full benefit of the directionality, streaming, and denoising features. However, since I hear low pitches very well (I can talk on the phone without a hearing aid), they sound fuller and better in a more open setting.

I have an AOV, which is simply a tube 2.5 mm in diameter. That is strange, because my previous cShell (made for the same audiogram) had quite an unconventional vent shape.

I should create a new topic about this to avoid further confusion in the Sphere topic.
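
A rough way to see why plugging the vent restores streamed bass: the open vent leaks low frequencies out of the ear canal, which to first order behaves like a high-pass filter on the streamed signal. The sketch below works under that assumption; the 500 Hz cutoff and first-order slope are made up for illustration, not measured for any real vent.

```python
# Open vent modeled (very crudely) as a first-order high-pass filter on
# the streamed signal. Cutoff and order are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfreqz

sos = butter(1, 500.0, btype="highpass", fs=16_000, output="sos")
freqs, response = sosfreqz(sos, worN=[125, 250, 500, 1000], fs=16_000)
for f, h in zip(freqs, response):
    print(f"{f:6.0f} Hz: {20 * np.log10(abs(h)):6.1f} dB")
# Bass is attenuated with the vent open; plugging the vent removes the
# leak (cutoff -> 0 Hz), matching the paper-in-the-vent experiment.
```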

What needs to be done to enable this feature?

(Background for other users; read the quote below:)

More powerful and energy-efficient chips need to be invented because AI denoising drains the battery a lot. Perhaps better battery technology is needed as well, but that is just speculation.

1 Like
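
To put the battery concern above into rough numbers, here is a hypothetical back-of-the-envelope estimate. Every figure (cell capacity, current draws) is an assumption for illustration, not a Phonak specification.

```python
# Back-of-the-envelope battery estimate. Every number is a hypothetical
# assumption for illustration, not a Phonak specification.
battery_mah = 30.0    # assumed rechargeable cell capacity
base_ma = 1.5         # assumed draw with classic processing
ai_extra_ma = 2.5     # assumed extra draw while AI denoising runs

hours_classic = battery_mah / base_ma                  # 20.0 h
hours_ai = battery_mah / (base_ma + ai_extra_ma)       #  7.5 h
print(f"classic: {hours_classic:.1f} h, with AI denoising: {hours_ai:.1f} h")
```

Under these made-up figures the same cell lasts well under half as long with the denoiser running, which is why chip efficiency comes up first.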

Occlusion, I was thinking: an adjustable vent on the fly?

1 Like

Absolutely nothing adjustable, just a 2.5 mm diameter tube drilled into the cShell. I was simply curious about occlusion with the new cShell, so I inserted rolled tissue paper into the vent and could easily remove it later while listening and comparing.


I hadn't looked at their loss. They said "severe", but their lows are basically normal.

1 Like

Why can't they put powerful, battery-intensive AI in a Roger device? Like the "HeardThat" app on iOS, except that app has the standard Bluetooth delay, making it unusable.

2 Likes
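
For context on why that delay matters, here is a hypothetical latency budget for a phone-app denoiser feeding hearing aids over standard Bluetooth. All figures are illustrative assumptions; real devices and stacks vary widely.

```python
# Hypothetical latency budget for a phone-app denoiser feeding hearing
# aids over standard Bluetooth. Every figure is an assumption for
# illustration; real devices and stacks vary widely.
budget_ms = {
    "mic capture + OS buffering": 20,
    "AI denoising inference": 30,
    "Bluetooth (A2DP) link to the ear": 150,
}
total = sum(budget_ms.values())
print(f"total mouth-to-ear delay: {total} ms")  # ~200 ms
# Delays this large put the processed voice noticeably behind the
# speaker's lips, which matches the report that the app is unusable.
```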

I'm paying $3,800 through DirectHearing.

1 Like

@user34
I notice a slight latency with my Roger ALDs. It is definitely less noticeable since I reduced the bass frequencies in the Roger fitting formula. I can definitely see them using their AI chip in future Roger devices, as this would open them up to non-Phonak customers. That assumes, of course, that it really is as groundbreaking as they claim.

Peter

Same price at DirectHearing; hopefully I'll have mine in a few weeks (the 90s).

Did you guys check the link given by @jim_lewis much earlier in the thread?

Ya know, I just don't believe it's gonna be like that at all; it's a bit of a "too good to be true" scenario. There's just nothing at all on the market that comes anywhere near what that video pushes out, amazing really. But a true test would be our local pub: 50-60 people going hard out yakking would get Mr AI Smarts confused for sure… maybe.

I paid €2,975.95 (USD 3,268.35) at Wholesale Hearing. Expected delivery is the first week of September.

My thoughts exactly. My local pub is a really good benchmark. I have to test my programming regularly, so I have to "suffer" that environment for the benefit of science! I hope you all appreciate my sacrifice for the greater good :rofl: :beers:

This pub has high ceilings, multiple jukebox speakers, a pool-playing area with hard wooden flooring, and at least one local who has a voice like a foghorn (you could even hear him talking over our rock band, which is the reason I'm here).

I'd like to think the AI chip might want to mute him, but I doubt it would. How, for example, would it know whether I'm talking to him or not?

On another note, those videos remind me of my P90 fitting. With my badly programmed M70s, everything sounded like the first video. My audiologist took me outside after programming the P90s. There was a loud truck stationary at the lights, its engine clanking away. I was facing away from him, listening to the wind in the trees opposite. He then asked me what I had for breakfast. I heard him as clearly as in the second video. Hindsight tells me he did this by boosting G50 and reducing G80. Time will tell whether Sphere is any different from decent programming. I believe it will be perfect for some, but annoying for others.

Peter

1 Like
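
For anyone unfamiliar with the G50/G80 handles Peter mentions: they are the gains prescribed for 50 dB SPL (soft) and 80 dB SPL (loud) inputs, and their difference sets the compression. Here is a minimal sketch of that input-gain relationship; the specific gain values are made up for illustration.

```python
# G50/G80 in miniature: gain interpolated linearly (in dB) between the
# handles at 50 and 80 dB SPL input. Gain values are hypothetical; real
# fitting formulas are far more elaborate.
def gain_db(input_spl, g50=30.0, g80=10.0):
    """Linear-in-dB interpolation between the G50 and G80 handles."""
    slope = (g80 - g50) / (80.0 - 50.0)   # dB of gain per dB of input
    return g50 + slope * (input_spl - 50.0)

for level in (50, 65, 80):
    g = gain_db(level)
    print(f"{level} dB SPL in -> {g:4.1f} dB gain, {level + g:5.1f} dB SPL out")
# Boosting G50 and cutting G80 raises soft speech while holding loud
# sounds (the clanking truck) down, i.e. stronger compression.
```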

I am very much looking forward to your beer test. All the examples of noise filtering I have seen involved traffic or other machinery, and I remain very sceptical that AI will be able to single out one voice from a cackling crowd. The way we perform this trick is by understanding what is being said and making upfront guesses about where the speaker is going. This uses our brains and does not happen at the sensory level. AI will only ever be able to mimic this through a similar understanding of what is being said (using real-world knowledge), then projecting, for instance, subtitles, or completely replacing the input with an artificial voice.

1 Like
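
For reference, signal-level source separation without any understanding of content does exist. Below is a sketch using SpeechBrain's pretrained SepFormer, following its published model card; "mixture.wav" is a placeholder for your own two-speaker recording. Whether anything like this scales from clean two-speaker mixtures to a pub full of overlapping voices is exactly the open question raised above.

```python
# Signal-level speech separation with a pretrained SepFormer, following
# the published SpeechBrain model card. It splits a two-speaker mixture
# by signal statistics alone, with no understanding of the words.
# "mixture.wav" is a placeholder for your own two-speaker recording.
import torchaudio
from speechbrain.pretrained import SepformerSeparation

model = SepformerSeparation.from_hparams(
    source="speechbrain/sepformer-wsj02mix",
    savedir="pretrained_models/sepformer-wsj02mix",
)
est_sources = model.separate_file(path="mixture.wav")  # (batch, time, 2)
torchaudio.save("speaker1.wav", est_sources[:, :, 0].detach().cpu(), 8000)
torchaudio.save("speaker2.wav", est_sources[:, :, 1].detach().cpu(), 8000)
# Trained on clean two-speaker mixtures (WSJ0-2mix) at 8 kHz; a pub with
# dozens of overlapping voices remains an open problem.
```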