Phonak Audéo Sphere

Thanks for (re-)forwarding the link; that article indeed looks very impressive. In the end, hearing is believing…

5 Likes

LE Audio is completely new and has nothing to do with MFi or ASHA.

WH

2 Likes

Hearing is believing, for sure. My colleagues who got to listen to it were blown away, but they were also in a controlled lab environment. I’m looking forward to finding out how it sounds out in the world; we’ll know in a few weeks. AI is everywhere for sure, but denoising is a realistic application that has already proven itself in bigger systems, unlike other industries where AI is just a weird gloss. But yes, hearing is believing.

I disagree. Good hearing aid connectivity means better hearing on the phone and in meetings. That’s really valuable for a lot of people.

But I agree with user34, too. Classic Bluetooth has been solid, and it’s good enough for now. Of course we all want everything all at once, but given the choice I’ll take big gains in noise and keep puttering along with classic Bluetooth for a while.

11 Likes

The problem is that LE Audio is new to most devices, and it will take some time to iron out the bugs. One thing that is needed is firm standards agreed between phone manufacturers and hearing aid companies. That is going to be very hard due to the differences in size and power requirements.
Even though I have a Samsung phone and really like it, Google is the only real choice for hearing aids as far as Android goes. And Apple has lost my support, as Apple seems to keep making changes to its standard to make its AirPods better and not tell the hearing aid companies about the changes until after the fact.
I will be getting a Google Pixel next time.
While I prefer Android, way too many people don’t do any research about phones and then blame the compatibility issues on the aids instead of where the real problem is.

By the way, I prefer over-the-ear headphones over my hearing aids for most streaming. Why? People don’t see my aids and just start talking, not giving me a chance to stop the streaming, and then they get mad at me for not living my life for their whims. It never fails: they don’t want to talk, so I entertain myself by listening to an audiobook or music, and then they decide they want to say something and are mad that I don’t dedicate every minute of my life to them. I have a life too. So I wear the over-the-ear headphones to broadcast what I am doing.

4 Likes

I guess we just have to wait and see what Android 15 will bring us.

Furthermore, BT connectivity is essential for me. Without it I would have difficulty hearing on the phone, etc.

Over the past month I have extensively tested different phones with my Oticon Intent and ReSound Nexia. For me, music streaming and hands-free calling were the important criteria in this test.

Unfortunately, MFi does not work well here in the house because there is too much interference, making the connection very poor. So I did some research; what follows is purely my own personal experience.

I have tested various devices, from the iPhone 15 Pro and Pro Max to various Android devices such as the Samsung S23+ and S23 Ultra, the S24 Ultra, the Xiaomi 14, the Google Pixel 8 Pro, and the Sony Xperia 1 VI.

What I find with the iPhone is that the Pro Max has better MFi range than the Pro. Perhaps that has to do with the placement of the antennas?

Samsung sounds reasonably stable, but the latest update has indeed thrown a spanner in the works with regard to hands-free calling. Otherwise it works quite stably, I must say.

The Xiaomi dropped out fairly quickly for me. Nice device, but very inconsistent when it comes to LE Audio. Perhaps it will work better with future updates?

The Google Pixel 8 Pro is not all that bad either, but the mediocre hardware internals and the weak modem are a big negative for me. The device quickly suffers from overheating and problems with LTE/5G. LE Audio does work stably with the latest firmware version, I must say; I can walk freely through the entire house, including the garden.
Hands-free calling works great. I am also curious what the Pixel 9 and especially the 10 will bring us.

I am personally most satisfied with the Sony Xperia 1 VI. LE Audio works fairly stably, just like on the Google Pixel. Walking throughout the house is no problem, and hands-free calling works great.

3 Likes

I totally agree that a major issue is people not doing research before purchase and then blaming it on the phone or hearing aid. I’m less convinced that Google is the answer for Android. I think both Samsung and Google have issues (and also strengths), but again do your research if connectivity is important to you…

2 Likes

At this time I don’t think the standards are set in stone.
The problem I see is the same one that has existed since at least the late 1970s: technology is being developed faster than hardware and software can keep up with. It has been out of sync forever, and I don’t see it getting in sync anytime soon. Development is ahead of hardware, and hardware is ahead of software.

1 Like

Different voices also have different pitches and timbres.

My personal experience after wearing Lumity 90s for 2 years is that they will lock onto the loudest voice if you are in a crowded restaurant with many conversations going on in close proximity at the same time. Basically you can’t hear the person you are talking to at the table, but you can perfectly hear the loudmouth guy at the table behind you… hahaha. I’m not sure how AI would deal with this kind of scenario. With Phonak’s previous version of StereoZoom, the hearing aids would tell the microphones to go into a narrow focus-forward mode and turn down the rear microphones so that you could hear the speaker you were looking at. Lumity’s new version of StereoZoom seems to be dynamic and will focus on what it thinks you want to hear… but it sometimes gets mixed up. Maybe Phonak’s AI chip will leverage the accelerometer to figure out which way your head is pointing and then do a better job of guessing.

Thing is… in the restaurant scenario, it is very difficult to sort out what you want to hear. A good example: you are talking to your dinner guest at the table and the waiter approaches from behind and asks if you want more wine. How would the AI know to switch focus from your guest to the waiter talking from behind? The same thing goes when driving in the car. Perhaps their approach with the AI is to kill the noise and let you hear all the chatter from all directions? This is sort of the Oticon “Open” strategy.
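For illustration, here is a toy delay-and-sum beamformer showing the basic idea behind a fixed focus-forward mode. This is just a sketch of the general technique, not Phonak’s actual StereoZoom algorithm; the mic spacing, sample rate, and function names are all made up.

```python
# Toy delay-and-sum beamformer for a two-mic array. Aligning the mics for one
# arrival direction reinforces sound from that direction; off-axis sound sums
# out of phase and is attenuated. Illustrative only, not Phonak's algorithm.
import numpy as np

FS = 16_000   # sample rate in Hz (assumed)
C = 343.0     # speed of sound in m/s
D = 0.012     # mic spacing in metres (assumed, roughly hearing-aid scale)

def frac_delay(x, samples):
    """Delay x by a (possibly fractional) number of samples via an FFT phase shift."""
    n = len(x)
    freqs = np.fft.rfftfreq(n)  # frequency in cycles per sample
    return np.fft.irfft(np.fft.rfft(x) * np.exp(-2j * np.pi * freqs * samples), n)

def steer(front, rear, angle_deg):
    """Sum the two mic signals after compensating the inter-mic delay for a
    source at angle_deg (0 = straight ahead), so that direction is boosted."""
    tau = D * np.sin(np.radians(angle_deg)) / C  # arrival-time difference in s
    return 0.5 * (front + frac_delay(rear, tau * FS))
```

A real aid does this adaptively per frequency band; the hard part is deciding where to point the beam, which is exactly the waiter-behind-you problem.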

Jordan

6 Likes

Apples to oranges, but color me cynical.
We have been using AI with surveillance cameras for a couple of years. It uses very large models trained on images of objects: faces, animals, persons, vehicles, etc. Like most AI applications, it takes a query (an image, in our application) and compares it to that database of objects. In our simple application, we have a choice of models covering different types of animals, vehicles, people, faces, whatever. It takes the input, scans the model, and comes back with a result and a confidence level.
Our application is simple: we only use a model with vehicles and persons. We look for them near gates and entrances and throw out any false triggers caused by headlights (at night), dogs and cats, blowing tree branches, and shadows. At night, a local cat with a long shadow was identified as a person (this really angered the cat).
Using a mediocre video card’s GPU (with 4 GB of memory) on a reasonably fast computer, it takes up to 50 milliseconds to get a result. Without the GPU, it takes hundreds of milliseconds.
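For the curious, the detect-and-filter loop looks roughly like this. It’s a minimal sketch assuming the open-source ultralytics YOLO package with COCO class IDs; our actual software is different, and the gate region and thresholds here are made up.

```python
# Sketch of a detect-and-filter loop like the one described above: run a
# pretrained detector, keep only persons/vehicles that overlap the gate zone,
# and reject low-confidence hits (cats with long shadows...). Illustrative only.
import time
from ultralytics import YOLO

WANTED = {0, 2, 7}   # COCO class IDs: person, car, truck
MIN_CONF = 0.6       # minimum confidence to accept a detection

model = YOLO("yolov8n.pt")  # small pretrained model; runs on the GPU if available

def detections_near_gate(frame, gate_box):
    """Return (label, confidence) pairs for wanted objects inside the gate zone."""
    gx1, gy1, gx2, gy2 = gate_box
    t0 = time.perf_counter()
    hits = []
    for result in model(frame):
        for box in result.boxes:
            cls_id = int(box.cls)
            conf = float(box.conf)
            x1, y1, x2, y2 = box.xyxy[0].tolist()
            overlaps_gate = x1 < gx2 and x2 > gx1 and y1 < gy2 and y2 > gy1
            if cls_id in WANTED and conf >= MIN_CONF and overlaps_gate:
                hits.append((model.names[cls_id], conf))
    print(f"inference took {(time.perf_counter() - t0) * 1e3:.0f} ms")
    return hits
```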
So, in my alleged mind, AI’ing your way through the voice noise of a party, bar, or meeting would seem to require a model of some sort to reject all “bad” voices and/or accept all “good” voices, all while keeping close enough sync with the visible source of the voice(s) you want to listen to. Hmmm, maybe tune out your spouse at times?
I also think that Phonak’s competitors are working just as hard.

2 Likes

Ha ha. Maybe. My Lumity are just over a year old. I can’t justify it yet. But I’m interested in what’s new.

3 Likes

Maybe you have set the sensitivity for the Speech/Motion Sensor too high? Or maybe there is an option in the Target software to give priority to the speaker in front?

Cocoa hearing aids should be ready before too long; I would say 2027.

I wonder if the combination of Lumity + Roger On will still be a better set up than the Sphere alone…

Here’s a bit of marketing for Phonak.

‘The only hearing aid that other hearing-aid manufacturers fear is Sphere itself…’

Badum Tish!

‘Try the Fish, I’m here all week…’

14 Likes

Clever use of wording, Stephen… As an avid user of Phonak and their Roger ALDs, I for one tend to be underwhelmed by the usual hype associated with the launch of any new hearing aid product line; ’tis predominantly marketing, usually on the periphery of some truth… We are all aware that speech in noise is perhaps the “Holy Grail” of any hearing aid listening environment, and 29% better understanding in noise is huge, though probably more akin to 10 or 15% in the real world? Having said that, any improvement whatsoever is most welcome… And yes, most of us HA users would love some of the hype to be true; we would gladly accept a 10% increment in overall word recognition in noise, and anything above that is a bonus!!! Cheers Kev :wink:

9 Likes

What I am mostly struggling with is the speech in marketing noise…

14 Likes

Guys, it’s the same old marketing hype; there’ll be no “holy grail” of hearing from this lot. Better? Well, maybe, but obviously we’ll need to wait for real-world experience. As they say, “love” is in the eye of the beholder.

2 Likes

So do I, but in the case of the Oticon Intent, the marketing had a solid foundation. Even Dr. Cliff on his YouTube channel has said it is worth upgrading.

1 Like

The voice of experience (of actual AI development) speaks! And cuts through the AI marketing hype to tell us what the platform actually needs to be doing to qualify as true AI, i.e. comparing inputs with an inbuilt trained model and selecting the optimum input. But that model hasn’t been trained on your personal hearing experiences, so just how could it do better than a best guess? And how could it not often guess wrongly? And how is this qualitatively better than what my current Phonaks are doing?

4 Likes