‘AI’s solution to the ‘cocktail party problem’ used in court’
Interesting, particularly so given recent HA releases.
I read the article when it appeared in my Google News feed. It seems to be a different approach from an active DNN like the one in the Phonak Audeo Sphere.
What they had come up with was an AI that can analyse how sound bounces around a room before reaching the microphone or ear.
“We catch the sound as it arrives at each microphone, backtrack to figure out where it came from, and then, in essence, we suppress any sound that couldn’t have come from where the person is sitting,” says Mr McElveen.
The effect is comparable in certain respects to when a camera focusses on one subject and blurs out the foreground and background.
“The results don’t sound crystal clear when you can only use a very noisy recording to learn from, but they’re still stunning.”
Sounds more like “location-based” suppression of sound through direct mathematical processing (although AI is mentioned) than AI differentiation of “this is noise, and that is speech…”
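For anyone curious what that kind of location-based filtering looks like in code, here is a minimal delay-and-sum beamformer sketch in Python/NumPy. It only illustrates the general idea (align the microphone channels towards a chosen direction so that sound from other directions partially cancels) and is not the company's actual algorithm; the array geometry, sample rate, and steering maths below are my own assumptions.

```python
# Toy delay-and-sum beamformer -- a sketch of "location-based" suppression,
# not the algorithm described in the article. Geometry, sample rate and
# steering maths are assumptions for illustration only.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
FS = 16_000              # sample rate in Hz (assumed)

def delay_and_sum(signals, mic_positions, target_direction):
    """Steer a microphone array towards one talker and average the channels.

    signals          -- (n_mics, n_samples) array of time-domain recordings
    mic_positions    -- (n_mics, 3) microphone coordinates in metres
    target_direction -- (3,) unit vector pointing from the array to the talker

    Sound arriving from the target direction lines up across channels and adds
    coherently; sound from other locations stays misaligned and partially
    cancels when the channels are averaged.
    """
    n_mics, n_samples = signals.shape

    # Relative arrival time of the talker's (assumed plane) wavefront at each
    # mic: mics closer to the talker along the look direction hear it earlier.
    arrival = -(mic_positions @ target_direction) / SPEED_OF_SOUND
    arrival -= arrival.min()                 # earliest microphone -> 0 s

    out = np.zeros(n_samples)
    for m in range(n_mics):
        shift = int(round(float(arrival[m]) * FS))
        # Advance later channels so the target's wavefront is aligned.
        # (np.roll wraps samples around the ends; fine for a toy example.)
        out += np.roll(signals[m], -shift)
    return out / n_mics

# Example: a 4-mic linear array with 5 cm spacing, steered straight ahead (+x).
mics = np.array([[i * 0.05, 0.0, 0.0] for i in range(4)])
look = np.array([1.0, 0.0, 0.0])
# enhanced = delay_and_sum(recordings, mics, look)
```

A real system would also have to model the room reflections the article mentions (backtracking how sound bounced before reaching each microphone), which is a much harder problem than this simple direct-path version.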