Graphics card used for noise reduction

I found this article interesting. It’s not something you’re likely to carry around with you (although it might be incorporated into a laptop computer), but it has the potential to enhance conference calls.


Interesting article.
Seems it was originally intended for gamers with an advanced graphics card.
Slick idea for smartphones.

I think there’s eventually potential for smartphones, but currently this capability is very power-intensive. Their desktop cards require 160 watts or more, and even their least powerful laptop versions require 80 watts. I suspect, though, that something like that is the future of advanced hearing aids if latency and power requirements can be overcome.


For Zoom meetings and the like I wear headphones and use an external mic so that, at least on my end, I’m doing a decent job. My laptop is a few years old now and does have an NVIDIA chipset in it, but it’s running Win 8.1, so it probably won’t fly. Every little bit helps, though.

We were doing that, better (more useful sliders), over a decade ago. CD-quality processing on a Pentium-200 was far slower than real time, about 10X realtime. I just tested on a 3GHz desktop and it runs 0.04X realtime, a negligible load on a CPU (let alone a hi-buck GPU). Yes, that might be “all” the power of a mid-price cellfone, and even fully optimized it would be a stretch for any on/in-ear processor.

It is old technology suddenly re-packaged for the sometimes truly abominable sound being skyped/zoomed-in from basements in this new no-studio world. (Lapel mikes would be better, but much stuff does not have a useful mike input, and most folks would have to order one and Amazon is running slow.)
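To put a figure like 0.04X realtime in context, here is a minimal, hypothetical sketch of the kind of “old technology” noise reduction being described: a plain spectral-gating pass on the CPU, timed against the clip length. The frame size, threshold, and synthetic 60-second clip are arbitrary assumptions, and this is nothing like NVIDIA’s neural-network approach; it only illustrates how a real-time factor gets measured.

```python
import time
import numpy as np

def spectral_gate(audio, sr, frame=1024, hop=512, noise_secs=0.5, threshold=1.5):
    """Very rough spectral-gating noise reduction: estimate a noise floor
    from the first `noise_secs` of the clip (assumed speech-free), then
    zero out STFT bins that fall below `threshold` times that floor."""
    window = np.hanning(frame)
    noise = audio[: int(noise_secs * sr)]
    noise_frames = [np.abs(np.fft.rfft(window * noise[i:i + frame]))
                    for i in range(0, len(noise) - frame, hop)]
    noise_floor = np.mean(noise_frames, axis=0)

    out = np.zeros_like(audio)
    for i in range(0, len(audio) - frame, hop):
        spec = np.fft.rfft(window * audio[i:i + frame])
        mask = (np.abs(spec) > threshold * noise_floor).astype(float)
        out[i:i + frame] += np.fft.irfft(spec * mask, n=frame) * window
    return out

# Synthetic 60 s "CD-quality" mono clip: a tone buried in white noise
sr, secs = 44100, 60
t = np.arange(sr * secs) / sr
clip = 0.3 * np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(len(t))

start = time.perf_counter()
spectral_gate(clip, sr)
elapsed = time.perf_counter() - start
print(f"real-time factor: {elapsed / secs:.3f}  (processing time / clip length)")
```

Run on a modern desktop CPU, a pass like this finishes in a small fraction of the clip’s length, which is the sense in which the load is negligible compared to what a GPU offers.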

I was speaking specifically of the power required by an RTX (ray tracing) GPU. I have no idea what kind of artificial intelligence is used, but I suspect it’s somewhat more demanding than anything available 10 years ago.

Here’s a review: NVIDIA RTX Voice: Real-World Testing & Performance Review - It's Like Magic | TechPowerUp

There are some sound sample comparisons.

I think you mean the old CPU ran at 0.1x, i.e. one tenth of real time, so processing took ten times as long as the audio clip being processed? And on the new one, processing takes 4/100ths as long as the clip itself.
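For what it’s worth, writing the ratios out with one explicit convention (real-time factor = processing time divided by clip length, the reading in the post just above) makes the comparison unambiguous; the 60-second clip length below is an arbitrary placeholder.

```python
# Convention assumed here: real-time factor = processing time / clip length
clip_seconds = 60.0  # hypothetical clip, just to make the ratios concrete

# Pentium-200: processing took ~10x as long as the clip itself
pentium_processing = 10 * clip_seconds            # 600 s of CPU time
pentium_rtf = pentium_processing / clip_seconds   # 10.0 -> can't keep up in real time

# 3 GHz desktop: processing takes ~4% of the clip's length
desktop_processing = 0.04 * clip_seconds          # 2.4 s of CPU time
desktop_rtf = desktop_processing / clip_seconds   # 0.04 -> easily real time

print(f"Pentium RTF: {pentium_rtf:.2f}, desktop RTF: {desktop_rtf:.2f}")
```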