I’ve looked but can’t really find anything. I’m pretty sure Phonak’s is different in some significant way, as I’m convinced they aren’t so far behind technically that they need a separate chip and more battery power to do the same thing. Hearing Tracker’s testing suggests the Phonaks are better; personal reports are mixed. I’m more interested in a technical explanation and less in which one a given person prefers, but I’ve learned that posting doesn’t let one control the responses. Thanks.
Sphere uses more power, has slightly more AI soundscapes, and is larger, so there’s no telecoil. Intent has a t-coil as standard, which makes it much more compatible with existing and emerging assistive listening systems. Those are still better for hearing in noise (up to 25 dB of improvement) compared to AI (around 10 dB).
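For scale, those dB figures are logarithmic, so the gap between 10 dB and 25 dB is bigger than it first looks. A quick back-of-the-envelope sketch (my own illustration, not from any manufacturer’s specs):

```python
# dB improvements are logarithmic: every 10 dB is a 10x power ratio.
for db in (10, 25):
    print(f"{db} dB SNR improvement = {10 ** (db / 10):.0f}x power ratio")
# 10 dB -> 10x; 25 dB -> ~316x
```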
Have you seen the Andy Bellavia video where he interviews Starkey’s CEO and then their lead engineer? He asks them about their DNN solution and what it actually does. They dance around the subject (“we don’t want to make our competitors smarter than they already are”). So good luck getting an answer! The engineer says that they don’t take a pipeline approach. Phonak obviously do: assuming Sphere mode is active, at some point the sound is sent off to the DNN core, so at that point denoising/speech isolation is taking place. Starkey?
I’m interested too, but Starkey seem to want us to know that some super clever stuff is taking place and would like us to just take it all on trust.
I’m not sure whether I’ve seen that video, but I have seen Starkey’s marketing about how superior their approach is. Frustrating.
Using a separate chip doesn’t mean that Phonak is behind. Signia was doing the same thing with a separate chip years before Phonak was. It just means that they’re attacking the problem in a different way.
I think there is something significantly different about Phonak’s approach, but I don’t have the technical background (nor the information) to grasp it. My gut says that Phonak’s approach is more “active” and the other approaches are more “passive.” I don’t think Signia has a separate chip for AI or DNN, do they?
I believe Phonak’s approach is to use a separate AI/DNN chip to pre-process the sound before it gets routed to their main sound-processing chip. Phonak’s Deepsonic chip has only a single function: it removes noise using their implementation of a DNN. The other thing everyone needs to note is that Phonak supports Bluetooth Classic and will soon have a firmware update to support Bluetooth LE; I believe Oticon and Starkey use Apple’s MFi for Bluetooth. Phonak’s implementation has higher power consumption too, which is why the hearing aid is slightly bigger. I suspect the Phonak chip is doing more processing given the higher battery consumption, but who knows?
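If that description is right, the architecture is essentially a two-stage pipeline. Here is a toy sketch of the idea (purely illustrative Python; all names and numbers are my own inventions, not Phonak’s actual design):

```python
# Illustrative sketch only: a "separate denoising chip" modeled as a
# pre-processing stage in front of a conventional hearing-aid chain.
# Everything here is hypothetical; none of this is Phonak's actual code.
import numpy as np

def dnn_denoise(frame: np.ndarray) -> np.ndarray:
    """Stand-in for the dedicated DNN chip. A real implementation would
    run a trained network; here we apply a crude spectral gate just so
    the sketch is runnable."""
    spectrum = np.fft.rfft(frame)
    magnitude = np.abs(spectrum)
    gate = magnitude > 0.1 * magnitude.max()  # keep dominant components
    return np.fft.irfft(spectrum * gate, n=len(frame))

def sound_processor(frame: np.ndarray, gain_db: float = 20.0) -> np.ndarray:
    """Stand-in for the main sound-processing chip: amplification,
    compression, frequency shaping, etc. Here, just a fixed gain."""
    return frame * 10 ** (gain_db / 20)

def process(frame: np.ndarray) -> np.ndarray:
    # The key architectural point: denoising happens *before* the
    # normal processing chain ever sees the signal.
    return sound_processor(dnn_denoise(frame))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 16000, endpoint=False)
    noisy = np.sin(2 * np.pi * 440 * t) + 0.3 * rng.standard_normal(t.size)
    clean = process(noisy)
    print(f"input RMS {np.sqrt(np.mean(noisy**2)):.3f} -> "
          f"output RMS {np.sqrt(np.mean(clean**2)):.3f}")
```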
My advice is to try them all and pick whichever model lets you understand speech in noise the best. They each approach the problem in a slightly different way so some approaches may work better than others depending on your hearing loss profile.
I will also say that the ZipHearing YouTube video someone shared comparing all three models is somewhat suspect. The guy doing the review doesn’t actually have a hearing loss and is probably doing the review with a very open dome fitting. My experience with the Phonak Spheres confirms that you get more AI noise-reduction benefit from these new hearing aids with more closed fittings (I wear double closed domes). We also all know that different hearing loss profiles make it harder to hear in noise, so the only real tests that matter are the ones you do with your own ears.
Hope this helps.
Jordan
Think of a separate chip like a GPU versus integrated graphics in a PC. The separate chip means you have many more transistors to throw at the problem. Theoretically, you can have a much more powerful solution (I used to be an engineer in the semiconductor industry).
Exactly, it feels like Starkey is saying that their chip is so far advanced that it can do the same thing Phonak can do. That’s like saying a CPU with an integrated GPU can do graphics as well as a system with a discrete GPU. That is possible when comparing a modern CPU’s integrated graphics against older discrete GPUs, but unlikely against current-gen hardware. It seems unlikely that Starkey can do that, but there’s so little info available it’s hard to tell.
The audiologist I visited believes both devices perform the same, with Starkey simply starting earlier than Phonak in implementing AI onboard their devices.
Actually, the Oticon Intent supports ASHA for Android, and already has LE Audio working with compatible devices, in addition to MFi.
I can’t tell you how the Starkey’s work, but they do work.
I’ve written on the forum about my experience.
It’s possible the Phonak is better, I don’t know.
In noise they work.
I can attest to that.
My issue is that Oticon doesn’t support Bluetooth classic and I have devices that only use Bluetooth classic.
Jordan
Sonova HAs are the only ones that do support Classic, but your previous post alluded to Oticon only supporting MFi.
The future is bright with LE Audio, but Microsoft, Apple, and more Android devices need to get on board with it.
All we know is that all three (Oticon, Phonak, and Starkey) use a deep neural network to implement their AI. But a DNN is a very broad framework that the companies use to implement their own ways of modelling sounds in terms of the network’s cells, and to train how those sounds are processed through the network to give optimal results. Unless the companies reveal the details of their models, we will never know.
Oticon and Philips did say a little something about their models to give us a vague idea. Oticon said they sample whole sound scenes (12 million of them), break them apart into sound components inside the DNN, reassemble them at the outputs, and train their DNN that way, at the sound-scene level. We can only deduce that they probably insert “handles” into their DNN and expose these as adjustable parameters in the Genie 2 software, so users can exert some influence over the outcome by tweaking them.
Philips said that they train their AI at the level of noisy speech samples (hundreds of thousands of them) and use the DNN to clean the noisy speech. Apparently this puts the focus on the speech, while the Oticon DNN puts the focus on the whole sound scene. Oticon’s focus on the whole sound scene seems consistent with their open paradigm approach, while Philips’ focus on speech cleaning makes it evident that they don’t subscribe to the Oticon open paradigm as much.
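To make the “train a DNN on noisy speech samples” idea concrete, here is a toy sketch (my own minimal PyTorch illustration; the architecture, sizes, and data have nothing to do with any manufacturer’s actual model):

```python
# Toy denoising network, illustrating the general idea of training a DNN
# on (noisy, clean) speech pairs. All details here are hypothetical.
import torch
import torch.nn as nn

FRAME = 256  # samples per audio frame (made-up size)

denoiser = nn.Sequential(      # a small fully-connected autoencoder
    nn.Linear(FRAME, 128),
    nn.ReLU(),
    nn.Linear(128, 128),
    nn.ReLU(),
    nn.Linear(128, FRAME),     # output: an estimate of the clean frame
)

optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    # Synthetic stand-in for "hundreds of thousands of noisy speech
    # samples": a clean tone-like signal plus random noise.
    t = torch.rand(32, 1) + torch.linspace(0, 1, FRAME)
    clean = torch.sin(2 * torch.pi * 8 * t)
    noisy = clean + 0.3 * torch.randn_like(clean)

    estimate = denoiser(noisy)
    loss = loss_fn(estimate, clean)   # learn to map noisy -> clean
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")
```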
Maybe Phonak and Starkey reveal a little something about how they model their DNNs to achieve what they get, but I just don’t read their literature like I do Oticon’s, so I don’t know. I don’t wear Phonak or Starkey, so I don’t have an interest in them anyway. But the bottom line is that unless a manufacturer reveals something about its DNN modelling approach, we simply won’t know. Some are more tight-lipped than others.
I have the Starkey Edge AI 24 and a Samsung S22+ and streaming works fine.
I think the most important factor is finding a skilled, patient audiologist who knows how to set up new hearing aids.
Jordan, I appreciated that comparison. The gentleman has difficulty with word recognition in noisy environments.
He also wants to avoid hearing aid apps.
I feel rude pulling out my cell phone to use the myPhonak app.
I also need to hear behind and all around me; I work on construction sites. I had real issues with my second set of Phonaks: I was almost hit three times.
So that reviewer’s comment resonated with me. The Oticon hearing aid, as tested, required no attention at all.
But when I pull out the app for my P90s, I often find I’ve been a doofus: strange volume settings, or I’m not in the program I want to be in.
I appreciate your comments. Wish I could hear.
@DaveL, don’t you have the manual “Speech in 360°” program? Perhaps it’s worth trying if you’re fitted binaurally. Here is a link explaining that feature:
I think Starkey is a few steps ahead. They made a new chip (processor) that allows the hearing aid to work for 51 hours, and they say it works all day even with Edge Mode. Although I don’t have detailed specifications, I think Starkey made the processor on a smaller process node (fewer nanometers) than the competition, which makes it possible to offer much more in the same size. Phonak is still pushing older processor technology with some changes; they added the Deepsonic chip and a bigger battery. The hearing aid can last a whole day, but with a maximum of 8 hours of Sphere use; if Sphere is used more than that, the aid can’t run all day…
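To see why heavy Sphere use could cut into total runtime, here is a back-of-the-envelope sketch (all numbers are hypothetical placeholders, not published specs):

```python
# Battery math with made-up numbers: the DNN mode draws more current,
# so every hour of it costs more than an hour of normal use.
CAPACITY_MAH = 60          # hypothetical battery capacity
NORMAL_MA = 2.5            # hypothetical draw in normal mode
SPHERE_MA = 5.0            # hypothetical draw with the DNN chip active

def runtime_hours(sphere_hours: float) -> float:
    """Total runtime given how many hours the DNN mode is used."""
    remaining = CAPACITY_MAH - sphere_hours * SPHERE_MA
    return sphere_hours + remaining / NORMAL_MA

for h in (0, 4, 8):
    print(f"{h} h of Sphere mode -> about {runtime_hours(h):.1f} h total")
# 0 h -> 24 h, 4 h -> 20 h, 8 h -> 16 h (with these placeholder numbers)
```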
No, I don’t have Speech in 360°. Paradise P90-Rs. I do have the Phonak Pro report.
My hearing specialist has made real gains. I’ll ask him next appointment.
Thanks
DaveL