Phonak Audéo Sphere

Speech enhancer is a nice little feature for a lot of people, and a feature I have to turn off for some. But I wonder whether the Sphere program on the 90 and the 70 will be identical. For example, Oticon throttles the functionality of their Opn DNN from the 1 to the 2; will Phonak do something similar? Do the reduced channels in the 70 impact the efficacy of the AI denoising? I don’t know.

Interesting question. My gut instinct is . . . probably not on average. I think they would regularly activate the music program, but I bet they don’t activate the speech-in-noise program as much as you think. Movies often feel a lot louder than they actually are. They have a lot of sudden loud noises. I’m not sure they actually have that many sustained speech-in-noise environments that are reproduced realistically enough over surround sound to trigger the SNR programs. BUT, it’s an open question. Next time I go to the movies I’ll try to remember to bring some hearing aids and reset the data-logger beforehand. It would be movie-dependent as well.

Your screen sounds different than mine. I see quite an attractive deep burgundy red (not a shade that I would mistake for blood), a sort of moody-interior-decorating green, and . . . well, I’m not sure how the copper will turn out in real life. Maybe almost a pink-gold.

~$6200 Canadian. That’s decent pricing. But is Truhearing one of those third party American companies that reimburses clinicians peanuts such that you get next to zero coverage for follow-up care?

1 Like

From the following truhearing.com web page: How It Works | TruHearing

Ongoing support

We help you successfully adjust to your hearing aids by including one year of follow-up visits with your provider for fitting and adjustments. You also get educational information sent directly to you and a library of support materials.

So, the first year is covered and after that, it’s pay-as-you-go.

But with my ReSound Quattros, I had almost a half-dozen issues with my left Quattro. Each time, ReSound replaced the complete HA body at no charge, and the audi said she was reimbursed by ReSound and charged me nothing through my entire 3-year warranty period with the Quattros. In my 2nd year with the Omnias, I’m pretty sure I damaged my left Omnia’s body microphones by careless, very sleepy cleaning with a Jodi-Vac needle. I volunteered to pay, saying I could see the damage under a dissecting microscope. The audi said, “Let’s send your left HA body in for repair.” I lucked out in that ReSound replaced the HA body with a new one, again at no charge from ReSound or the audi.

So, with warranty issues, truhearing.com is not involved. It’s between your HCP and the HA OEM. Especially if you are in a big city, you can get a list of truhearing-connected providers from truhearing and shop around until you find an HCP you like. In San Antonio, there are both individual audiologists and chain hearing care centers connected with truhearing.

As a State of Texas retiree, my Blue Cross-Blue Shield plan mentions truhearing.com as a source of hearing aids in its hearing aid coverage section. You do have to hunt around on the truhearing website, though, to find any mention of them selling the big-brand HAs. They used to have a database catalog that listed HAs by brand, price, features, etc. Now you have to personally contact truhearing or your truhearing HCP to find out about availability and price.

Hearing Aid Manufacturers | TruHearing
(scroll down the page to see the major brands they sell through HCPs - they want to sell their own in-house brand first!).

But that’s all just manufacturer warranty stuff (also, ReSound didn’t reimburse the clinician anything; that’s a misunderstanding, as that’s not the type of financial relationship we have with manufacturers). How often were you actually sitting down with the clinician getting your hearing checked, your hearing aids checked, and reviewing your day-to-day function? Still annually? Semi-annually, or never? Were adjustments made after the initial fit? Some users don’t need them, but other users need more hand-holding.

I’m just curious because on the professional boards I see huge complaints about the third-party insurance system in the USA, and from what I gather those patients are not accessing the same care as private-pay patients because the clinicians simply cannot afford to provide it at the level of reimbursement they are getting. Some patients are plug-and-play and don’t need anything else, but others do, and it seems like the 3rd-party insurer forbids balance-billing, with the result that those patients who need extra care are just out of luck (i.e. the clinician is not allowed to charge extra and is also understandably unwilling to work pro bono). A significant number of American clinicians appear to be moving towards simply dropping all 3rd-party insurance business.

BUT, this is very tangential to the main thread…

2 Likes

It’s helpful. I’m glad you wrote

Yes, my basic point is that Audeo Spheres will be more affordable through truhearing, it’s not a full-package deal, and YMMV, depending on whom you pick as an HCP and their policies. But truhearing is not a fly-by-night operation. I was happy with my first fit and am a DIY’er; my hearing loss has been very stable over roughly the last 14 years. Also, I am not very discriminating or fussy about what I hear as long as I can understand speech. I was going to go to Costco for my next set of HAs in 2026, but if a Costco version of the Sphere doesn’t show up, truhearing would be my fallback option.

2 Likes

Thanks, Jim.

There is a likely connection between Sonova/Phonak and Audatic. While at the time of the article (2023) there were only vague indications in terms of business ownership, what stands out more today is that the Managing Director, Henning Hansemann, is now featured in Phonak articles and marketing videos.

The article linked is coauthored by Henning Hansemann, the Managing Director of Audatic. This suggests that Sonova probably partnered with or acquired Audatic at some stage and has since figured out how to run their denoise algorithms on a hearing aid-sized processing chip, instead of a smartphone, which was their previous model.

I’m curious to see how the devices work in the real world, and what the compromises of this “groundbreaking” technology are. AI processing is clearly the future of everything, but this is first-generation technology, and Phonak has removed “Comfort in echo”, CROS support, and automatic StereoZoom, and the software indicates an option to limit Sphere mode to 3 hours a day. Add Bluetooth Classic streaming on top, and who knows whether you’ll achieve the advertised battery life at all!

Al

2 Likes

The thing that struck me in the 2/15/23 Nature paper cited by Abram was the extensive discussion of the applicability of DNN denoising towards hearing aids and their optimism that with the help of Moore’s Law (! :joy:), they expected a DNN might run on hearing aids in a few years time:

Outlook

A challenge for neural network-based denoising algorithms is their computational cost compared to denoising methods traditionally used in hearing aids. As with most deep learning-based systems, the performance of our network improves with the available computational resources. Here, we limited the size of the network such that the algorithm runs in real time on a laptop. Hence, the computational power required to achieve the presented results is higher than what is available in current hearing aids and scaling the technology to the point where it can fit in a hearing aid still requires engineering more powerful hardware and/or more efficient models. However, the gap is not prohibitively large, and we speculate that Moore’s law and the exponential improvement in computational power per watt46 should lead to a feasible implementation on a hearing aid within a few years. Additionally, the rapid progress in the algorithmic efficiency of neural networks47,48 should further shorten adoption time.

In summary, we have presented a denoising system that enables hearing aid users to achieve speech-in-noise intelligibility levels comparable to those for normal hearing listeners and generalizes across noise environments. Deep learning-based denoising systems could hence facilitate an entirely new type of hearing improvement that is directionally independent and could prove useful not only for hearing impaired users, but also for normal hearing listeners who wish to reduce noise in noisy situations, such as crowded restaurants or bars.

Since my search said that Audatic was a Swiss company, I made the unwarranted leap that Sonova must somehow be involved. Glad to hear there’s some further evidence that might be so. As the Nature discussion says, the two key things are the hardware and the computational design of the algorithm. If Phonak has both those things locked up, maybe it will be a while before anyone else can catch up. OTOH, maybe it won’t. The discussion points out that denoising would also be of tremendous benefit “… for normal hearing listeners who wish to reduce noise in noisy situations, such as crowded restaurants or bars.” Phonak doesn’t make earbuds (yet! :grinning:). I wonder if some company that does, one that’s not going to compete with Phonak in the HA sphere (pun intended), might be their next big customer?!

Sonova does in fact manufacture earbuds, under the Sennheiser brand; it acquired Sennheiser’s consumer electronics division a few years ago. They have also released some OTC products under the Sennheiser brand.

I would be surprised if any of the other big manufacturers have anything in the short-term pipeline to compete with this new DeepSonic processor chip. If it works (and this remains to be confirmed), Sonova could be headed for a long lead, especially with confirmed Auracast and BT 5.3 support.

1 Like

AudioExpress carried a short piece on Phonak’s announcement.

https://audioxpress.com/news/phonak-unveils-ai-powered-real-time-speech-separation-in-noisy-environments

This quote might be germane to some of the discussion above. In reference to the DeepSonic chip…

"this chip is 53 times more powerful than existing chips used in hearing-aids and is capable of performing 7,700 million operations per second.

As a reference, current platforms available for consumer hearables include the Greenwaves GAP9 ultra-low power processor that offers up to 50,000 million operations per second (or 50 giga-operations). For perspective, Qualcomm’s Snapdragon X Elite Compute Platform (laptop class) includes a neural processing unit that’s capable of 45 trillion operations per second."
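Since the quoted figures mix units (millions, giga-, and trillions of operations per second), here’s a quick Python sketch normalizing them for comparison. The throughput numbers are the ones quoted above; the “legacy hearing-aid chip” baseline is merely inferred from the “53x” claim, not published anywhere:

```python
# Normalize the quoted throughput figures to GOPS (billions of ops/second)
# so the three chips can be compared on one scale.

MOPS = 1e6   # million operations per second
GOPS = 1e9   # billion (giga) operations per second
TOPS = 1e12  # trillion (tera) operations per second

deepsonic = 7_700 * MOPS    # Phonak DEEPSONIC: 7,700 MOPS (as quoted)
gap9 = 50_000 * MOPS        # GreenWaves GAP9: up to 50,000 MOPS
x_elite = 45 * TOPS         # Snapdragon X Elite NPU: 45 trillion ops/s

print(f"DEEPSONIC : {deepsonic / GOPS:>10.1f} GOPS")
print(f"GAP9      : {gap9 / GOPS:>10.1f} GOPS")
print(f"X Elite   : {x_elite / GOPS:>10.1f} GOPS")

# The article's "53 times more powerful than existing hearing-aid chips"
# would imply a conventional HA chip baseline of roughly 7,700 / 53 MOPS.
print(f"Implied legacy HA chip: {7_700 / 53:.0f} MOPS")
```

On that reading, the baseline hearing-aid chip would be around 145 MOPS, though the article never states the comparison point explicitly.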

They’re a little salty on the whole thing actually:

“For some reason, these products targeted for prescription by medical professionals continue to be a complicated maze of too many options and multiple layers of different technologies that, in the end, do not benefit the people who need hearing-aids.”

2 Likes

Nothing on Wholesale Hearing yet, which look like they are affiliated. I got my P90s from them, as my Audiologist couldn’t get them at that price.
Peter

@PeterH

Are you planning to get some of these Aids when they are out in the UK, Wholesale Hearing?

@Zebras
Probably not, unless reviews are outstanding. I may wait for a battery version.

Peter

2 Likes

@ziploc
Maybe they’ll produce a Roger Mic with the Sphere chip in? If it’s as good as they claim, it may be a big seller.
Peter

3 Likes

From what I can see, it should be able to be added as a selectable additional programme.
Peter

Edit: or maybe not if it’s “attached” to the Speech in Loud Noise programme

I’m trying to understand what he meant by this quote:

For some reason, these products targeted for prescription by medical professionals continue to be a complicated maze of too many options and multiple layers of different technologies that, in the end, do not benefit the people who need hearing-aids.

Is he being critical of the complex landscape of hearing aid options, as some of the new Phonak products launched this week have AI based features and some don’t? In other words, are there too many hearing aid models and this complexity doesn’t benefit the people who need hearing aids?

Or

Is he saying the technology itself provided in modern hearing aids is not beneficial to people who need hearing aids?

If the former, he’s not wrong, however how much can a guy who hyphenates the term hearing-aids really know about them? :thinking:

1 Like

Anybody know what process size they use in their CPUs? I know the Sword processor from a ways back was groundbreaking with a 28 nm process when most competitors were using 65 nm. I’m guessing this is notably smaller.

1 Like

Probably, the laptop did not have a specialized NPU (such as the DEEPSONIC in the Audeo Infinio Sphere), and the AI processing was done on a general-purpose GPU, which requires more power (I assume; I have not read about it yet).

EDIT:
Some specification of laptop here:

The CPU processor is from Q2 2018:
https://www.intel.com/content/www/us/en/products/sku/134876/intel-core-i58300h-processor-8m-cache-up-to-4-00-ghz/specifications.html

The graphics card with the GPU is from Q4 2016:

Yes, I read the line you cited in the Nature paper by Audatic and its research collaborators. Earlier in this thread where it was questioned whether HAs have enough computational power, I mentioned NPUs vs GPUs as you rightly point out could make DNN calculations much more power-efficient: Phonak Audéo Sphere - #71 by jim_lewis.

But just for laughs, I guesstimated the TFLOP processing power and TFLOP/TDP of the Sphere vs. the NVIDIA GTX 1050. TFLOP is teraflop, a trillion floating point operations per second. TDP is thermal design power, usually given in watts.

An Internet search shows the GTX 1050 can perform 1.6 TFLOPs and consumes 75 watts. Thus, its TFLOP/TDP is 1.6/75 = 0.021 TFLOP/Watt (whaddaya expect for 2016!).

Page 4 of the Sphere white paper cited by @bigaltavista (Phonak Audéo Sphere - #236 by bigaltavista) says the DeepSonic denoising chip performs 7,700 million operations per second (7.7 billion ops per second). That’s only 0.0077 TFLOP. We don’t know if those are equivalent floating point instructions to those in the GTX 1050, but, Hey, this is just guesstimation. Let’s say they are. A Li-ion battery is a 3.7V supply. Most HAs consume 1 to 2 mA. Let’s say the Sphere with two processors and an 18-hour battery life consumes 4 mA. 3.7V x 0.004A is 0.015W. 0.0077/0.015 is 0.51 TFLOP/Watt.

0.51 TFLOP/Watt is nothing to write home about. The Raspberry Pi 5 AI kit has a dedicated NPU on an M.2 HAT+ board. It can do 13 TFLOPS, and its TDP is < 2 W. So, 13/2 = 6.5 TFLOP/Watt. And I’m sure it’s not even a state-of-the-art NPU. Perhaps there will be even better performance in future, smaller-process-size versions of the DeepSonic chip.

So, maybe if Sonova starts selling lots of Sennheiser earbuds with the DeepSonic chip inside, the economics of scale would justify going to a much smaller process size, which might increase the TFLOPS the processor is capable of while reducing the TDP (as has happened with smartphones, tablets, computers, etc).

| Item | TFLOPS | TDP (W) | TFLOPS/TDP*** |
|---|---|---|---|
| NVIDIA GTX 1050 | 1.6 | 75 | 0.021 |
| Audeo Sphere | 0.0077 | 0.015 | 0.51 |
| Raspberry Pi 5 AI Board | 13 | 2 | 6.5 |
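The back-of-envelope arithmetic above can be reproduced in a few lines of Python. To be clear, these are the post’s guesstimates, not measurements, and (as the edit note below explains) the Sphere and NPU figures are really integer-TOPS-like numbers, so it mixes apples and oranges a bit:

```python
# Rough TFLOPS-per-watt comparison using the figures guessed at above.

def tflops_per_watt(tflops: float, watts: float) -> float:
    return tflops / watts

# NVIDIA GTX 1050 (2016): 1.6 TFLOPS at a 75 W TDP
gtx_1050 = tflops_per_watt(1.6, 75)            # ~0.021

# Audeo Sphere: 7,700 million ops/s = 0.0077 "TFLOPS";
# assume 4 mA draw from a 3.7 V Li-ion cell
sphere_watts = 3.7 * 0.004                     # ~0.015 W
sphere = tflops_per_watt(7_700e6 / 1e12, sphere_watts)
# ~0.52 (the post's 0.51 comes from first rounding the wattage to 0.015 W)

# Raspberry Pi 5 AI Kit (Hailo NPU): 13 TOPS at roughly a 2 W TDP
pi5_ai = tflops_per_watt(13, 2)                # 6.5

print(f"GTX 1050    : {gtx_1050:.3f} TFLOPS/W")
print(f"Audeo Sphere: {sphere:.2f} TFLOPS/W")
print(f"Pi 5 AI Kit : {pi5_ai:.1f} TFLOPS/W")
```

The 4 mA / 18-hour assumption for the Sphere is the post author’s, so the resulting efficiency figure is only order-of-magnitude.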

Edit_Update: ***Note, I am confusing TOPS and TFLOPS. TFLOPS is tera floating point operations per second. TOPS is tera operations per second, usually integer (uses a smaller instruction bit size). GPUs are typically scored in TFLOPS. NPUs in TOPS. So, I’m mixing apples and oranges a bit. So, the guesstimate for the Audeo Sphere DeepSonic chip is probably most comparable to the TOPS for the Raspberry Pi 5 Hailo AI (NPU) chip.

The NVIDIA GTX 1050, of 2016 vintage, is probably so old that no one is really bandying about a TOPS rating for it. But NVIDIA’s RTX 4090, one of its most powerful recent GPUs, has an 83 TFLOPS rating (floating point-based) and a 1300 TOPS rating (integer-based). The TDP of the RTX 4090 is 450W. So, 1300/450= 2.9 TOPS/Watt. That’s why folks want NPUs for TOPS without burning a lot of watts. :slightly_smiling_face:
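The efficiency gap described above, using the quoted figures, works out as follows (a trivial sketch, but it makes the NPU-vs-GPU point concrete):

```python
# Integer-TOPS-per-watt: a big desktop GPU vs. a small dedicated NPU,
# using the figures quoted in the post above.
rtx_4090 = 1300 / 450    # RTX 4090: 1300 TOPS at a 450 W TDP, ~2.9 TOPS/W
hailo = 13 / 2           # Pi 5 Hailo NPU: 13 TOPS at ~2 W, 6.5 TOPS/W

print(f"RTX 4090 : {rtx_4090:.1f} TOPS/W")
print(f"Hailo NPU: {hailo:.1f} TOPS/W")
```

So a hobbyist NPU is already more than twice as efficient per watt than one of NVIDIA’s most powerful GPUs, which is exactly why NPUs matter for battery-powered devices.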

2 Likes

I think that will be a long long wait

4 Likes

a few months, hopefully much less