Testing How Your Hearing Aids Work With Sound Files That Have Noisy Restaurant Scenarios

One reason anyone might be interested in the ReSound Smart Fit software is that it comes with a set of demo sound files. Even without running the fitting software itself, these files can be found on a Windows computer in the ReSound Program Files (x86) folder under Common Files\MediaSoundFiles. They are provided as .WMA (Windows Media Audio) files, but with VLC Player for Windows (the one you download from the VideoLAN site), you can easily convert them to other formats such as .MP3. There is a variety of sample files: birds singing, young children talking, people in noisy places, and so on. For the people in noisy places, the recording is also provided without any background noise, and there are versions with the speech at 0 dB, 5 dB, and 10 dB above the background noise.
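If you'd rather batch-convert the whole folder than click through VLC's convert dialog one file at a time, the same job can be scripted. This is only a sketch, and it assumes ffmpeg (a free command-line converter, not part of the ReSound install) is on your PATH instead of VLC; the source folder below is a placeholder, so point it at wherever MediaSoundFiles actually lives on your machine.

```python
# Batch-convert the ReSound demo .WMA files to .MP3 using ffmpeg.
# Assumes ffmpeg is installed and on the PATH; the source folder is a
# placeholder -- substitute the actual MediaSoundFiles location.
import subprocess
from pathlib import Path

src_dir = Path(r"C:\path\to\MediaSoundFiles")          # placeholder path
dst_dir = Path.home() / "Desktop" / "resound_mp3"
dst_dir.mkdir(parents=True, exist_ok=True)

for wma in src_dir.glob("*.wma"):
    mp3 = dst_dir / (wma.stem + ".mp3")
    # -q:a 2 gives high-quality VBR MP3; -y overwrites any existing output
    subprocess.run(["ffmpeg", "-y", "-i", str(wma), "-q:a", "2", str(mp3)],
                   check=True)
```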

So what we have here is essentially a standardized test: anyone willing to get the Smart Fit software from the DIY section can test, in the same way as everyone else, how well their HAs are working for them.

With my Quattros, I can basically hear everything that is said in the noisy LUNCH recording with the speech at 0 dB above the background noise. I have to say it doesn't sound like the worst noisy restaurant I've ever been in. The noisiest restaurants I've been in probably have the speech at a negative level relative to the noise!

Unfortunately this forum only allows image uploads, so I can't attach an .MP3 of the lunch conversation at 0 dB above background noise. If you're curious, you'd have to check out the Smart Fit 1.3 software yourself. It's about a 1 GB download from the source, though.

Another very interesting feature, only accessible from within the software, is that after you've input your audiogram, the fitting software can play a much more limited selection of sound files both as a normal-hearing person would hear them and as you would hear them if you weren't wearing your HAs. The purpose of this feature is probably to demonstrate to a normal-hearing companion how serious a patient's hearing deficit is and how correction would help restore hearing. For instance, if I listen to the hearing-impaired versions rendered according to my deficit (while wearing my HAs, so I can tell the normal version from the hearing-impaired version), I can hear the lack of high-frequency audio in the jazz example and in the children-talking examples compared with the normal versions played without being filtered through my hearing loss profile.
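Out of curiosity about how that "hear it the way the patient hears it" demo might work under the hood, here is a very crude sketch of one way to filter a recording through a hearing loss profile. This is emphatically not ReSound's actual processing (real loss simulations also model loudness recruitment, reduced frequency resolution, etc.); the file name and audiogram values are made-up placeholders, and it assumes a WAV file you've already converted.

```python
# Crude illustration of "play it the way the hearing-impaired person hears it":
# attenuate each frequency region by an amount tied to the audiogram threshold.
# NOT ReSound's algorithm -- just a toy frequency-domain attenuation.
import numpy as np
from scipy.io import wavfile

# Hypothetical audiogram: thresholds in dB HL at standard audiometric frequencies
audiogram_hz = np.array([250, 500, 1000, 2000, 4000, 8000])
audiogram_db = np.array([15, 20, 35, 55, 70, 80])        # example ski-slope loss

rate, audio = wavfile.read("children_talking.wav")       # placeholder file name
audio = audio.astype(np.float64)
if audio.ndim > 1:
    audio = audio.mean(axis=1)                           # mix down to mono

spectrum = np.fft.rfft(audio)
freqs = np.fft.rfftfreq(len(audio), d=1.0 / rate)

# Simplistic mapping: attenuate each frequency by its threshold elevation in dB
loss_db = np.interp(freqs, audiogram_hz, audiogram_db)
spectrum *= 10.0 ** (-loss_db / 20.0)

impaired = np.fft.irfft(spectrum, n=len(audio))
wavfile.write("children_talking_impaired.wav", rate,
              (impaired / np.abs(impaired).max() * 32767).astype(np.int16))
```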

2 Likes

I know Phonak’s Target software includes something similar and suspect that other fitting software might too.

I wonder if any HA OEM has a site where potential customers could get at least a limited set of sample sound files? Then, with a link provided, we'd all have access to the same sound files without any DRM concerns.

Since many HA manufacturers offer a qualitative hearing test on their websites, you'd think that after the web test captures an approximate picture of the test taker's hearing loss, the site could play, for the visitor and any normal-hearing companions, both the normal version of a sound track and the sound as the hearing-impaired person would hear it. Just as for the fitting software itself, a web version of the sound files might be a good selling point to convince someone they need HAs, especially when the hearing-impaired person and a normal-hearing person compare what they can hear.

EDIT_UPDATE: At the following link from the University of Rochester's Department of Electrical and Computer Engineering, there are some sample speech-in-noise files. The second table, with speech in CASINO background noise, is less annoying than the computer-keyboard background noise examples.

Different sample speech files are mixed at different levels relative to the background noise, from -10 dB to +10 dB. For reference, the clean speech files are given, and the following columns represent different algorithms for trying to clean the speech up from the background noise. The 0 dB and +5 dB casino examples of speech in noise are pretty good tests since they feature higher-pitched female voices.

http://www2.ece.rochester.edu/~zduan/is2012/examples.html
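For anyone wanting to roll their own test files at a specific level, here is a rough sketch of what "0 dB" or "+5 dB relative to the background noise" actually involves: scale the noise so the speech-to-noise RMS ratio hits the target, then sum. The file names are placeholders, it assumes mono WAV files at the same sample rate, and it's only an illustration, not how the Rochester examples were produced.

```python
# Mix a clean speech file with a noise file at a chosen speech-to-noise ratio.
# 0 dB SNR means speech and noise have equal RMS power; +5 dB means the speech
# is 5 dB stronger. File names are placeholders; assumes mono WAVs, same rate.
import numpy as np
from scipy.io import wavfile

def mix_at_snr(speech_file, noise_file, out_file, snr_db):
    rate, speech = wavfile.read(speech_file)
    rate_n, noise = wavfile.read(noise_file)
    assert rate == rate_n, "expecting matching sample rates"
    speech = speech.astype(np.float64)
    noise = noise.astype(np.float64)[: len(speech)]      # trim noise to speech length

    rms_s = np.sqrt(np.mean(speech ** 2))
    rms_n = np.sqrt(np.mean(noise ** 2))
    # Scale the noise so that 20*log10(rms_speech / rms_noise_scaled) == snr_db
    noise *= rms_s / (rms_n * 10.0 ** (snr_db / 20.0))

    mixed = speech + noise
    mixed = mixed / np.abs(mixed).max() * 32767          # normalize to 16-bit range
    wavfile.write(out_file, rate, mixed.astype(np.int16))

mix_at_snr("clean_speech.wav", "casino_noise.wav", "speech_0dB.wav", snr_db=0)
```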

1 Like

Not to decry the value of speech-in-noise testing, but there's an issue with this method which I'll call the 'TV problem'.

In the real world, the mix of background sounds and speech occurs with spatial segregation, i.e., from slightly different directions, so the hearing aid not only needs to do fast syllabic noise separation but also has to offer directional nulls by frequency to avoid non-speech/babble areas.

When you present the sound from a stereo (even with 5.1 separation), you get a combination of speech signal and noise source from each speaker, which is unlike real-world presentation (more like TV output). This basically means that aids with better inter-syllabic noise compression will yield better results than those with more advanced directional pattern manipulation. Often the sound quality is deemed compromised, even though the speech may be more audible.

It's often more accurate to produce the signal sounds from a primary speaker and invoke multiple noise sources from a surrounding speaker array, which you can REM and re-test repeatably to get a better result. If you put the client on a swivel chair in the middle of this array and use the VU scale of the REM system to calibrate the input level, it becomes possible to determine degrees of performance for different aids under repeatable conditions, with speech at different primary angles.

IMHO the second test is more realistic (and not based on a TV-style speaker/noise mix) in terms of a client's real-world experience.

3 Likes

Excellent point! I thought of the same issue and knew either you or Neville would have something to say about it, as you did.

The only thing a stereo or mono frontal speaker setup can verify is how well the hearing aid can filter noise sources already mixed into the speech and coming from the front. It doesn't test how well the hearing aid manages noise that actually surrounds you. Like you said, a better setup is an array of surrounding speakers, each simulating various noise sources as well as blended noise sources like babble.

For the OPN in particular, which builds a noise model from sounds arriving at the sides and the back, a frontal-speaker-only noise test wouldn't be very useful, because the noise model created in this artificial setup would be pretty much empty.

It would be very cool, though, if a professional would go to the trouble of building such a setup for their patients. The most challenging part would be going out to collect sounds from noisy environments to recreate those environments in the test room. For example, if you have an array of 10 speakers in the test room, you'd need to go out to an environment, set up 10 mics in the locations corresponding to where those speakers sit in the test room, and record the sound at each of those locations. A lot of work.

The media in the ReSound samples can be played on a 5.1 sound system. In fact, to use it officially as an audiologist, they ask you to calibrate each speaker individually.

What I imagine would be a pretty good representation (if a simplification) of reality is if the sound to be emitted from the speaker at each position of the test setup were recorded entirely separately; e.g., for the noisy restaurant scenario, the noisy kitchen behind you would be recorded separately from the people at your table in front of you and to your left and right. This seems to be the setup that um_bongo is suggesting for the ideal signal-in-noise test situation.

I'm not sure if what um_bongo is saying is that a typical 5.1 recording for playback would be made with mics in five different positions, each recording everything it hears, including sound from the vicinity of the other mics. I agree that a test recording made that way is not desirable.

Although meaningful speech/sound emanating from one speaker, with noise or confusing speech emanating from four other speakers around you, is not as complex as a real-world situation, it's better than having no way of testing at all. Similar to the pros and cons of how good a tone audiogram is for testing hearing loss, I should imagine a speaker system that uses separately recorded sounds for each speaker location would provide a decent test environment even for determining how well devices work that use directionality to focus on speech and de-emphasize noise.

I have not had a chance to test the 5.1 output available from the ReSound Media Player recordings to see whether each speaker's sound is entirely "clean" and separate from sounds coming from any other speaker location (too lazy to hook the computer up to the Yamaha amplifier and 5.1 speakers in another room). But I have listened to the train crossing sample. It's very good in stereo: you hear the crossing's warning chimes "right in front of you" and hear the train approach from the right (I think), pass in front of you, and disappear to the left. It's just a very interesting set of real-world sounds to listen to. Among the "bonus" material is the sound of a toilet flushing! Why they would want to include that is beyond me, unless it's just someone at ReSound having a sense of humor and seeing if his/her boss is paying attention to what he's doing!

And Connexx, for Sivantos brands (e.g., the KS7), has some sound files.

Toilet flushing is one of the things some people (particularly newbies) complain about as being too loud, so I don't think it's a joke.

2 Likes