Phonak unveils Lumity hearing aid platform

Yeah, I don’t think it will happen. There is some precedent, with the Marvel getting a significant FW upgrade and, even more surprisingly, the Costco KS9 getting it too, but I’ve never seen one upgraded to the equivalent of a new model.

1 Like

In the 2022 white paper by Jespersen et al. (link below), the authors explain how they get the 150% improvement - and give it as a factor of 2.5. The results are shown in Fig. 4 from that paper.

And quoted from the paper, here’s how they do the math:

Results

Both legacy Ultra Focus and Front Focus showed significant improvements over their respective omnidirectional modes. Of even greater interest was that Front Focus provided a significant improvement over Ultra Focus (Paired samples t-test: t = 6.17, p < 0.001). The mean directional benefit for Ultra Focus was 4.3 dB while Front Focus provided a mean directional benefit of 8.6 dB. Front Focus provided a significant improvement of 4.3 dB over Ultra Focus (Figure 4). By rounding this benefit up to 4.5 dB, this would represent an intensity ratio of 2.5, allowing us to estimate an incredible 150% improvement in speech recognition in noise for Front Focus compared to Ultra Focus.

To get the 2.5 factor, they converted their dB difference to sound intensity, apparently.


The sound put out via Front Focus is 2.5 times more intense relative to the omnidirectional background than the sound output relative to background noise with the One’s Ultra Focus. (When I do the conversion, I get a sound intensity ratio greater than 2.5: ~2.7 if I use 4.3 dB and ~2.8 if I use 4.5 dB, so it’s possible I don’t really understand where their 2.5 factor comes from.)
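
For anyone who wants to check the numbers, here’s a minimal sketch of the dB-to-intensity conversion in plain Python. This just reproduces my own arithmetic above; it is not taken from the paper:

```python
import math

def db_to_intensity_ratio(db: float) -> float:
    """Convert a level difference in dB to a sound intensity ratio: 10^(dB/10)."""
    return 10 ** (db / 10)

def intensity_ratio_to_db(ratio: float) -> float:
    """Convert a sound intensity ratio back to a level difference in dB."""
    return 10 * math.log10(ratio)

print(db_to_intensity_ratio(4.3))  # ~2.69, from the measured 4.3 dB difference
print(db_to_intensity_ratio(4.5))  # ~2.82, from the paper's rounded-up 4.5 dB
print(intensity_ratio_to_db(2.5))  # ~3.98 dB is what an exact ratio of 2.5 would require
```

So an exact ratio of 2.5 would correspond to roughly 4.0 dB rather than 4.3 or 4.5 dB, which may be why I can’t reproduce their figure.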

See middle white paper on this support page: Hearing aids ReSound - ReSound OMNIA support
13512_136155017 (webdamdb.com)

3 Likes

From skimming through the user guides, it’s still contact charging, not induction charging. Although contact charging can be finicky if you don’t seat the contacts right, it’s more efficient (~95%) at charging Li-ion batteries than induction charging (~70%), and it doesn’t heat the batteries as much for the same charge put in (~30% of the energy in induction charging is wasted as heat). Various manufacturers say the heating of batteries by induction charging does not significantly affect product lifespan, but when Phonak wants the devices to last 5 years and has no control over what room temperature a user charges their batteries at, contact charging might be a better bet (also, the HA body would have to house a pickup coil, which might add to its size). Phone manufacturers usually defined product lifespan as just a few years until Apple made them think different (sic)…
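
As a rough back-of-the-envelope illustration of those efficiency numbers, here’s a toy calculation. The battery capacity used below is a made-up placeholder, not a Phonak spec:

```python
# Toy sketch: energy drawn from the charger and energy wasted as heat for one full
# charge, using the ~95% (contact) vs ~70% (induction) efficiencies mentioned above.
# The 0.11 Wh capacity is a hypothetical placeholder, not a published Phonak figure.

BATTERY_CAPACITY_WH = 0.11  # hypothetical rechargeable hearing-aid cell, in watt-hours

def charge_energy(capacity_wh: float, efficiency: float):
    """Return (energy drawn from the charger, energy lost as heat) in Wh."""
    drawn = capacity_wh / efficiency
    return drawn, drawn - capacity_wh

for name, eff in [("contact", 0.95), ("induction", 0.70)]:
    drawn, heat = charge_energy(BATTERY_CAPACITY_WH, eff)
    print(f"{name:9s}: draws {drawn:.3f} Wh, wastes {heat:.3f} Wh as heat")
```

The absolute numbers are tiny either way; the point is just that roughly 30% of the input energy ends up as heat with induction charging versus about 5% with contact charging.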

Link to user guides in following post:

2 Likes

Thanks Jim. So the 150% wasn’t a mistake. Silly of me to think otherwise, really. I honestly don’t understand the maths, but we probably both agree that going from “an intensity ratio of 2.5” to a “150% improvement in speech understanding” in their press release is more than a bit of a stretch.

Edit_Update: d_Wooluf and I have digressed here from the Lumity. What would be most meaningful is if ReSound and Phonak would each agree to run a test like the one described below on the Omnia and the Lumity. Then we could have more of an apples-to-apples comparison, and the numbers might really mean something about the current competition BETWEEN the latest premium HAs from each OEM. When they just stick to making comparisons against their own older models, it’s not very useful: ReSound’s older hearing aid might be terrible in SNR relative to Phonak’s older hearing aid (or vice-versa). Most people are interested in current differences ACROSS HA lines, not just WITHIN a given line.

I’m not defending ReSound, just trying to state the facts (though maybe doing a bad job of it!). But the test done was one of speech recognition, and I may have the ratio interpretation wrong. Perhaps the dB difference means that the Omnia users of Front Focus achieved the same level of speech recognition as the One users of Ultra Focus at a speech level 2.5x LOWER in intensity relative to a constant 70 dB level of noise, i.e., the Omnia users got better SNR scores. The competing noise was described as “static speech-shaped noise,” and the speech and noise were band-limited to 500 to 4000 Hz to avoid ceiling effects.


The level of the speech is manipulated to determine a speech reception threshold (SRT) of 50% correct performance, resulting in a dB SNR score, with better performance revealed through lower dB SNR scores


Here’s a description of how the test was conducted from the link in my post above to which you are now replying:

Test material, conditions, and setup

The participants completed a speech recognition in noise test that was a slightly modified version of the Dantale II test. The test is comprised of five-word sentences and was presented in a background of static speech-shaped noise at 70 dB SPL. Thirty sentences are administered for each test. The level of the speech is manipulated to determine a speech reception threshold (SRT) of 50% correct performance, resulting in a dB SNR score, with better performance revealed through lower dB SNR scores. The hearing aids tested have adaptive features that rely on identification of speech and noise in the environment. To ensure that all adaptive features were activated during testing the Dantale II test, noise was started thirty seconds before testing was initiated. The test was conducted in an idealized situation to maximize possible benefit from the directional features. Therefore, the manually selectable programs Front Focus (ReSound OMNIA) and Ultra Focus (ReSound ONE) were used. Note that Front Focus and Ultra Focus provide the same directional response as the Speech Intelligibility mode in 360 All-Around and All Access Directionality, except that the crossover frequency in the low band is fixed rather than configurable. In addition, speech material and noise band-pass was limited to 500-4000 Hz to increase task difficulty by avoiding ceiling effects.

Testing was completed with the participants seated in a sound booth with speech presented at 0 degrees azimuth, and static noise presented at 75 degrees azimuth to the right. The positioning of the competing noise in the front plane was intended to highlight the stronger directional response - which can also be thought of in terms of a more narrow directional beam - of Front Focus compared to Ultra Focus. The setup is illustrated in Figure 3. The testing order of conditions and the sentence lists were counterbalanced across participants.
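
To make the “level of the speech is manipulated to determine a speech reception threshold (SRT) of 50% correct” part concrete, here is a toy simulation of a simple 1-up/1-down adaptive track that converges on the 50% point. This is only an illustration of the general adaptive-SNR idea, not the actual Dantale II scoring procedure, and the simulated listener (a logistic psychometric function with a made-up true SRT and slope) is entirely hypothetical:

```python
import math
import random

def simulated_listener(snr_db: float, true_srt_db: float = -6.0, slope_db: float = 1.5) -> bool:
    """Hypothetical listener: probability of repeating a sentence correctly follows
    a logistic psychometric function of the presented SNR."""
    p_correct = 1.0 / (1.0 + math.exp(-(snr_db - true_srt_db) / slope_db))
    return random.random() < p_correct

def run_track(n_sentences: int = 30, start_snr_db: float = 0.0, step_db: float = 2.0) -> float:
    """Simple 1-up/1-down staircase: lower the SNR after a correct response,
    raise it after an incorrect one. This converges near the 50%-correct point."""
    snr = start_snr_db
    presented = []
    for _ in range(n_sentences):
        presented.append(snr)
        if simulated_listener(snr):
            snr -= step_db  # correct -> make it harder
        else:
            snr += step_db  # incorrect -> make it easier
    # Crude SRT estimate: mean presented SNR over the last 20 sentences
    return sum(presented[-20:]) / 20

print(f"Estimated SRT: {run_track():.1f} dB SNR")
```

Lower (more negative) SRTs mean the listener tolerated more noise, which is why the paper reports directional benefit as a reduction in dB SNR.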


2 Likes

You’re right. We’ve drifted off. I’ll take any response to the ReSound thread. I agree with this:

1 Like

I’m not so sure that this statement that Phonak has been using AI since Venture is true. I looked up the marketing for Phonak AutoSense OS, and below is a screenshot that describes it. Then I looked up AutoSense OS 5.0 for Lumity, and the second screenshot describes it. In this second screenshot, it says that AutoSense OS 5.0 has been trained with AI. But in the first screenshot, for AutoSense versions PRIOR to AutoSense OS 5.0 for Lumity, there is no mention of AI.

So my conclusion based on these 2 marketing descriptions for AutoSense is that, prior to AutoSense 5.0, some kind of sound recognition technology was used, but it wasn’t AI-based, or else I’m sure Phonak would have mentioned it. But starting with AutoSense 5.0, Phonak began training it with AI, and sure enough Phonak started taking credit for it and mentioning the AI, but only for 5.0.

What Neville said above is not quite correct, and what WhiteHat clarified is correct. For Neville’s statement to be correct, it would need to say “There is no continuously learning AI in Oticon hearing aids.” There is AI built into Oticon HAs all right, except that the AI has already been trained ahead of time, and the “frozen” trained version is implemented in the Oticon HAs. It just doesn’t continue to learn anymore. So it’s only smart up to a point, and it doesn’t keep getting smarter. But the trained smarts ARE there.

I’m not sure about Venture, but 4.0 already uses machine learning to train its environment classification; you can see Phonak’s video about 4.0 here, around 1:00: https://youtu.be/AL25WHtPLdY

Thank you @edgars for the link to the YouTube video on AutoSense 4.0. According to what they say in there, I do agree that Phonak used some kind of machine learning algorithm to train AutoSense 4.0 to recognize what kind of sound environment the user is in. So, in light of this new information that I didn’t have, I take it back and agree that Phonak did use AI prior to 5.0.

I do find it interesting, however, that Phonak never chose to use the term AI for 4.0, but with 5.0 they began to use it. Maybe just because AI is now the buzzword that everyone uses. By definition, machine learning is AI, although I guess there can be varying degrees of AI. The video mentioned that they started machine learning with Claro back in 1999, although with only 2 settings for what they used to call AutoSelect. Fast forward to AutoSense 4.0, and they mentioned 200 different setting combinations. Oticon mentioned 2 million sound scenes captured for AI training, although the AI systems and purposes of Oticon and Phonak are apples and oranges, so it doesn’t necessarily follow that more captured training data is better. There’s probably a point of diminishing returns where more captured data wouldn’t make as big a difference as before, and a certain AI setup probably wouldn’t need as big a data set as another.

2 Likes

Had a conversation with my audiologist yesterday. She’s been told that there are 3 different chargers to be offered with the Lumity. They’ll all be smaller than the Marvel/Paradise chargers.

Two contact-charging versions will be offered: one without batteries and one with a built-in battery.

The Life version “waterproof” hearing aids will have induction charging. That allows the case to be better sealed against water.

3 Likes

Yes, this. :point_up_2:

I know that Phonak was using AI with Venture because they were talking about it with clinicians. But they didn’t talk about it as if it was a big deal; it was just part of how they trained the automatic program to recognize different sound environments. I’m sure that implementation has advanced dramatically since then and that what Oticon is doing in their current devices is ahead of what Phonak was doing back in 2015, but I am equally sure that all other manufacturers have been evolving their implementations as well. The big change was Oticon landing on it as a dramatic marketing tactic. But Oticon marketing has always been . . . well, impressive, frankly. They are good at what they do. They are also the people who marketed the hell out of ‘soft speech booster’ as if it was something fancy, when it was (per direct conversation with researchers at Oticon headquarters in Denmark) just a static increase in gain for soft sounds in VAC+ that any clinician could implement in any hearing aid with a few clicks. Note that I am not saying that Phonak was first. I don’t know who was first. I imagine Oticon, Phonak, Starkey and Signia were all using rudimentary AI pretty early and it has evolved from there.

:thinking:
I mean, perhaps we are just quibbling semantics now, but I disagree. As you say, there is no further learning taking place on the hearing aid itself, nor does it make any new decisions beyond the ones it has been coded to make, or solve any new problems. It is, therefore, not AI.

To try to talk about the role of AI in hearing aids in a simple way: It used to be the case that humans themselves would input all the parameters of what made a particular sound “speech” versus what made it “noise”. With AI, humans do not define the parameters (I mean, in practice we still define some of the parameters and not others). We give the AI a pile of examples of speech and a pile of examples of noise and basically say, “computer, YOU figure out what the parameters are that make these sounds speech and these other ones noise”. We take the answer it gives us, check its accuracy, and use that to direct the automatic switching in the hearing aid.

(That is my understanding, anyway, as not-an-expert-in-machine-learning. I have a friend who runs a supercomputer whom I defer to in all machine-learning things. But mostly I just complain to her that AI people use language from neuroscience to talk about their AI, but in a different way, such that what a “neuron” is to them is different from what a “neuron” is to me. This drives me nuts.)
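
To make that concrete, here is a toy sketch of what “give the computer labeled examples and let it find the parameters” can look like, using a generic logistic-regression classifier on made-up acoustic features. It is purely illustrative and does not resemble any manufacturer’s actual features, model, or training data:

```python
# Generic "learn the boundary from examples" sketch. Nothing here reflects any
# hearing aid manufacturer's actual classifier, features, or training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each sound snippet is summarized by two features, e.g. something like
# "modulation depth" and "spectral tilt" (purely illustrative numbers).
speech = rng.normal(loc=[0.8, 0.3], scale=0.15, size=(200, 2))  # labeled speech examples
noise = rng.normal(loc=[0.2, 0.6], scale=0.15, size=(200, 2))   # labeled noise examples

X = np.vstack([speech, noise])
y = np.array([1] * len(speech) + [0] * len(noise))  # 1 = speech, 0 = noise

# The humans supply examples and labels; the model finds the decision boundary itself.
clf = LogisticRegression().fit(X, y)

new_snippet = np.array([[0.7, 0.35]])  # features of an unseen snippet
print("speech" if clf.predict(new_snippet)[0] == 1 else "noise")
print(clf.coef_, clf.intercept_)  # the "parameters" the computer figured out on its own
```

Once trained, the learned weights are frozen and shipped; the device just applies them, which is the distinction being drawn above about there being no continuously learning AI on the aid itself.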

9 Likes

Forgive me if I don’t understand, but is this the Phonak Audeo Life or Lumity data sheet?
That is, does the L stand for Life or Lumity?

The L is for Lumity. When you get to the chart that has 3 models, the RL is the Life version of the Lumity.
Hope that helps.

3 Likes

What does Phonak mean by a “platform”? They call Lumity a “new platform” following the “highly successful Paradise platform.” Some people here have said that the Lumity is using the same Prism chipset. Do we know for sure the chip is the same? If it is, what else would be different at a hardware level between an Audeo Paradise and an Audeo Lumity?

Sorting out these changes is like trying to read tea leaves. Prism was largely about connectivity (Bluetooth, Roger, and Phonak’s proprietary method for the TV Connector); Lumity introduces a new version of AutoSense. Whether it’s all software or requires some hardware changes, I doubt we’ll ever know.

5 Likes

I feel this way about so much of what has been discussed, like the AI-ness. Unless it is a hard fact or spec, I tend to throw what they tell you in the marketing cr@p pile. I could see them creating new software to run on the Prism platform with very little change. I could see them making subtle improvements that don’t merit a new name (but somehow I think they’d rename it anyway) plus new software. Teams I have worked on tend to come up with new names for generational shifts: the product released gets a new version number, and when you completely redesign it, then the name changes. Maybe they have a Paradise 2.1.4 internal number on the hardware, but we’ll never know. How many angels can dance on the head of a pin? Who knows. What matters: can you hear any better? And we won’t know that for a while, just like with the other new releases. Someone has to wear them to cut away the marketing and get down to earth: “Here is what it does well. Here is where it still needs a good sprinkle of magic.”

I’ve never been involved in HA development. Most of my projects have either been allowing us to find something/someone so they can be destroyed, or coordinating that action.

WH

4 Likes

Agreed, it’s just about all marketing crap. Look at their websites: it’s all photos and feel-good words. One has to know to go to the Pro section and poke around for some actual data.

2 Likes

Well, if you compare the Lumity and Paradise hearing aids, almost everything is the same in detail.
It most likely has the same processor, because otherwise they would emphasize that it is a new, better processor; another reason is that companies save money, so they try to push some older technology as new. I’m not saying this hearing aid isn’t good, and it did bring some changes, like induction charging of the hearing aid, new chargers that are smaller in size than the old ones, and the improved AutoSense software. Unfortunately, that upgrade will not be brought to Paradise. I think the PRISM chip itself still has headroom for upgrades, and that is why there was no need to put in a better chip at the moment.
I have a feeling that they will have to build a new chip for the new Bluetooth, but the specifications of this one say that Lumity supports Bluetooth 4.2, so they kept Bluetooth Classic.

1 Like

I was just thinking that the other day. Same with cars in some cases, and Tesla in particular.
The other thing: at what point would you consider changing your aids if the ones you have work for you?
Obviously one reason would be end of warranty and an expensive repair option.
Do any manufacturers offer discounted upgrade prices?

1 Like

The datasheet for the Lumity L-RL can be found on the following Phonak Pro web page:

Phonak Audéo Lumity Literature | PhonakPro

The direct link is here: https://www.phonakpro.com/content/dam/celum/75095/PH_Datasheet_Audeo-L-R_210x297_EN.pdf

The striking thing is that the runtime is ONLY 18 HOURS on a full charge! Is this true of the Audeo Life, too?! Hopefully the runtimes of the other Lumity models are better; I couldn’t find the datasheets for those so far.

1 Like