Widex launches the world's first machine learning hearing aid: Widex EVOKE


#1

WIDEX LAUNCHES THE WORLD’S FIRST MACHINE LEARNING HEARING AID: WIDEX EVOKE™

Lynge, Denmark – April 18, 2018

Today, leading Danish hearing aid manufacturer Widex announces the launch of the ground-breaking WIDEX EVOKE™ – the first hearing aid ever to feature advanced machine learning technology in real time. Together with major advances in sound technology, WIDEX EVOKE™ provides a clearer and more personal hearing experience than ever before.

Hearing happens in real life, not just in a lab or in a clinic examination room. The challenge of real-life hearing is that it is personal and happens right here, right now. That requires the hearing aid to be able to adapt and adjust seamlessly and instantaneously. For the first time ever, it is now possible for a hearing aid to learn from the user’s input and preferences – and even share this learning with other users around the globe.

WIDEX EVOKE is the first hearing aid to give users the ability to employ real-time machine learning, featuring intuitive new controls that quickly and surely guide users to their desired hearing experience. With WIDEX EVOKE, users don’t have to remember issues with specific listening situations to explain to their audiologist when getting their hearing aid adjusted later.

New SoundSense Technology means users simply tell WIDEX EVOKE which sounds they prefer by choosing between sound suggestions in the EVOKE smartphone app. The powerful WIDEX EVOKE processor then uses this data to deliver even better real-life sound based on the user’s personal preferences, right when they need it, in real time.

What’s more, the combination of user input and machine learning enables WIDEX EVOKE to evolve and become even smarter as time passes. And over time, all EVOKE hearing aids will be able to learn from anonymous global user input to improve the real-life sound experience even further.

“WIDEX EVOKE will forever change what people expect from hearing aids. I firmly believe that WIDEX EVOKE marks the beginning of a new era in hearing aid technology – a new way of thinking. It is the first hearing aid that is truly intelligent and grows smarter as you use it. EVOKE not only learns on the level of the individual device but also across the devices in the EVOKE eco-system. The perspectives and the potential are breath-taking: Just imagine an EVOKE user in Paris benefitting from the input of an EVOKE user in Sydney. You can say that WIDEX EVOKE is the world’s first hearing aid that is intelligent today – and even smarter tomorrow,” says Widex President and CEO Jørgen Jensen.

WIDEX EVOKE will be offered in a full range of form factors and will become available in all major hearing aid markets beginning late April and continuing through May and June 2018.

About WIDEX

At Widex we believe in a world where there are no barriers to communication; a world where people interact freely, effortlessly and confidently. With 60 years’ experience developing state-of-the-art technology, we provide hearing solutions that are easy to use, seamlessly integrated into real life and enable people to hear naturally. As one of the world’s leading hearing aid producers, our products are sold in more than one hundred countries, and we employ 4,000 people worldwide. Read more at www.widex.com.

Brochures:
http://webfiles.widex.com/WebFiles/9%20502%204770%20001%2001.pdf
http://webfiles.widex.com/WebFiles/9%20502%204772%20001%2001.pdf


#2

New Widex EVOKE hearing aids being introduced today at AAA 2018:

• E-Platform with Fluid Sound Controller
• Fluid Sound Analyzer with 11 Sound Classes
• SoundSense Learn
• New Widex fitting rationales

SoundSense Learn optimizes your hearing experience in a given sound environment. Whenever you encounter sound environments where you would like to optimize the sound, go to SoundSense Learn and follow the simple instructions.

Compare two different sound profiles, indicate which one you like more, and then let SoundSense Learn calculate your preference. The more comparisons you do, the more SoundSense Learn will learn about your preferences. You can then save the settings as a program and use them whenever you would like.
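
Very roughly, that compare-and-refine loop could look something like the sketch below. To be clear, this is only an illustration of the general idea: the three-band gain model, the midpoint refinement, and every function name here are my own assumptions, not Widex's actual SoundSense Learn algorithm or the EVOKE app's API.

```python
# Illustrative sketch of preference learning from A/B comparisons.
# Not Widex's algorithm; the 3-band gain model is a made-up stand-in.
import random

BANDS = ["low", "mid", "high"]          # hypothetical gain bands (dB offsets)

def random_profile():
    """Propose a random sound profile: one gain offset per band."""
    return {b: random.uniform(-6.0, 6.0) for b in BANDS}

def midpoint(a, b):
    """Blend two profiles; used to refine around the current favorite."""
    return {k: (a[k] + b[k]) / 2.0 for k in BANDS}

def learn_preference(ask_user, rounds=10):
    """Run repeated A/B comparisons, refining toward whichever profile wins.

    `ask_user(p1, p2)` must return the profile the listener prefers;
    in the real product that choice would come from the smartphone app.
    """
    best = random_profile()
    for _ in range(rounds):
        challenger = midpoint(best, random_profile())   # explore near the current best
        best = ask_user(best, challenger)               # user picks A or B
    return best

if __name__ == "__main__":
    # Stand-in "user" who secretly prefers a bit more treble and less bass.
    target = {"low": -2.0, "mid": 0.0, "high": 3.0}

    def simulated_user(p1, p2):
        dist = lambda p: sum((p[b] - target[b]) ** 2 for b in BANDS)
        return p1 if dist(p1) < dist(p2) else p2

    result = learn_preference(simulated_user, rounds=20)
    print({b: round(v, 1) for b, v in result.items()})
```

The point of the sketch is simply that repeated two-alternative choices can steer a handful of fitting parameters toward what the listener prefers, without the listener ever having to name or describe a setting.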




#3

Zero reaction from this one!


#4

I think this has great potential to increase satisfaction and perhaps reduce visits to the professional. At least in a quick look through, I didn’t see any mention of their processor. I would think for AI they would need a more powerful CPU and would be touting it, but perhaps marketing experts said people don’t care about the CPU.


#5

I’m curious about this one. I know aids like the OPN don’t use pure machine learning, but isn’t there some similarity in the aid deciding which microphone and/or setting(s) to engage depending on outside noise? I know that’s not true machine learning, but I guess I’d be interested in hearing experiences when this comes out, as I’m not sure what is truly “different”.


#6

Hahahaha; AI hearing aids. That’s funny! What if you don’t like feedback? AI could only turn down your gain, right? It couldn’t adjust your domes or molds. What good is that? Color me skeptical.


#7

Am I understanding this machine learning gizmo correctly that there are 11 sound classes (sound environment classifications), and upon a user prompt, it presents users with two closely related sound class results and the user picks which of the two sound classes they like better?

Then this selection gets stored in a universal database, so that when there’s some ambiguity between two closely related sound classes and there’s no input from the actual user, it’ll pick the more popular of the two as chosen collectively by all users who have given their input so far, as stored in the universal database?

At least this is what I’m able to guess after watching those videos about A/B choice input selection presented to users, and selections shared by users.

While it’s not clear what choices they’re talking about that the user would make, the video went on to talk about 11 sound classes, so I can only guess that the user gets presented with two closely related sound classes for a particular listening environment and asked to pick which one they think is best for that environment.

Then the hearing aid would go on and send a snapshot of this environment data point and the user’s selection to the Widex database through their smartphone. So the next time a user somewhere else runs into this same/similar listening environment, Widex would determine which of its 11 sound environments to select based on the most popular choice as selected by most users.

In effect, Widex depends on users’ selection to build a database of sound classification. Maybe it starts out with a few data points for each of its 11 sound classes and users would contribute more and more data points of their own to these 11 sound classes, eventually to a point where the database is robust enough to more accurately select the most popular choice of sound class for a particular data point of listening environment that a user encounters.

To clarify, this is just a WILD guess from me as to what this machine learning is about. I’m seeking validation from somebody knowledgeable enough about this Widex Evoke platform as to whether this wild guess is any good or not.
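
If that guess is anywhere near right, the crowd-voting part might work a bit like the sketch below. Again, everything in it is hypothetical: the environment "fingerprint" features, the bucketing used to match similar environments, and the class names are made up purely to illustrate the idea of returning the most popular sound class for a similar environment.

```python
# Hypothetical sketch of crowd-sourced sound-class voting.
# None of this is confirmed Widex behavior; it only illustrates the guess above.
from collections import Counter, defaultdict
import math

# The press release mentions 11 sound classes; their real names are unknown.
SOUND_CLASSES = [f"class_{i}" for i in range(1, 12)]

votes = defaultdict(Counter)   # environment fingerprint -> votes per sound class

def fingerprint(level_db, modulation, pitch_hz):
    """Coarsely bucket an acoustic snapshot so similar environments collide."""
    return (int(level_db // 10), round(modulation, 1), round(math.log10(pitch_hz), 1))

def record_choice(snapshot, chosen_class):
    """Store one anonymous user's winning sound class for this environment."""
    votes[fingerprint(*snapshot)][chosen_class] += 1

def suggest_class(snapshot, default=SOUND_CLASSES[0]):
    """Return the crowd's most popular class for a matching environment."""
    tally = votes.get(fingerprint(*snapshot))
    return tally.most_common(1)[0][0] if tally else default

# Example: three users in a similar noisy environment vote; a fourth
# user in the same kind of environment then gets the majority pick.
record_choice((65, 0.4, 500), "class_7")
record_choice((68, 0.4, 520), "class_7")
record_choice((66, 0.4, 510), "class_3")
print(suggest_class((67, 0.4, 505)))   # -> class_7
```

The real system would obviously need far richer acoustic features and a smarter notion of "similar environment" than this crude bucketing, which is part of why I'm asking whether the guess holds up.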


#8

I’m probably paranoid, but I don’t like the sound of crowd-sourced, user-preference data acquisition.


#9

That’s what I’m thinking, too. Hearing loss is so different between individuals that even for two people with a similar loss profile, they may perceive sounds differently. Since it’s SOOOO subjective, how can users share their preferences with each other? How can you be sure you’ll like someone else’s preference?

And it even sounds like Widex is going to take the collective input and force it on you?


#12

Well, this is going to be interesting. I can see all sorts of data potential about which settings are preferred in which countries or age groups, etc. It might give the manufacturer a better sense of what customers like, but I don’t think I want to crowd-source my settings either. Sounds more like training your aids to correctly identify the environment you are in.

What people like is not necessarily what people need. I can see they might have to turn this feature off for some new users until they get used to some amplification.


#13

I’m probably way off here but could it be something like:

For the first 30-60 days, the aid/app occasionally asks the wearer their preference in a variety of different environments they encounter, and the response (lower/higher gain, etc.) is taken into consideration when the aids meet those environments in the future?

Although not much different from the current aid setup process, this would make it more individualized and not crowd-sourced.

Maybe I’m reading it all wrong :slight_smile:


#14

If you review what the introduction said, it really sounds like it’s going to be crowd-sourced automatically, and users may not have a choice about whether they want that crowd-sourced selection or not.

Below is an excerpt from the intro:


#15

Huh okay apparently I can’t read this early haha.

That seems... very sketchy, as hearing is so personalized. Unless there’s something not mentioned, I don’t personally think this will work unless the crowd-sourced element is a very, very tiny portion of the aid function.

Edit: unless the crowd-sourcing is able to improve software abilities like noise filtering? Though there are probably hardware limitations.


#16

It is looking like the sales department was very involved in the idea.

Aids aren’t all that good at deciding what the actual environment is. The forum is full of remarks like “when I am in my living room” and “the car is” where the results are unsatisfactory. Now how does crowdsourcing react to a room with reverb, or to a whine in one car and a window down in another? What the real world says is that sound/noise is a complex environment that can’t define itself from a questionnaire on one’s cell phone. Heck, it’s a problem even one-on-one with a trained audiologist.

And all that’s ignoring the variable loss/problem unique to each user.

And what about that perky tour guide. The length of the interview was enough of that. I had a sugar overload.


#17

I would agree that the marketing department was heavily involved. I don’t think there’s enough info to really grasp how this is going to work (or not). If implemented well, I think it has potential. If not, it could be a cluster :wink:


#18

Gimmickry and hype just like the smart phone industry…


#19

Wow. Them’s’re fightin’ words! :slight_smile:
I consider a smartphone to be like having a compact, very portable computer on you. Oh, and look, I can also talk to people with it. And text and email without pressing the 3 key three times (or more if you run past the letter you wanted). The real kicker for me was making its own hotspot.
WRT the thread…I’m with ya.


#20

I think it’s more likely that the makeup of the “select from” profiles will contain crowd-sourced elements.


#21

I know a few people have been fit with this product already, so it’s not too early to post this review page. We’ll be adding more form factors and technology levels soon. I am really hopeful that the machine learning tech actually makes a difference. Anders Jessen is one of the few people in the industry I respect and he seems to believe in the technology… so fingers crossed. Oh, and extra $5 shop credit for the first 10 Evoke reviews. Just send me a PM after you leave your review.


#22

It does sound interesting, and it seems to be a different approach to possibly dealing with some problems that HA users may run into. I don’t pretend to understand how it works, but if it does, I think I would be interested in these. Glad to see this, and I hope we can see some results soon.