Real Ear Measurement

Real-ear measurements (REM) allow your audiologist to evaluate the function of your hearing aids while you are wearing them. This evaluation guides your audiologist in making the necessary adjustments to ensure optimal amplification. The shapes and sizes of everyone’s ears, ear canals and heads are quite different, and the degree and type of hearing loss varies from individual to individual. As a result, identical hearing aids may function quite differently in one person’s ear than in another person’s ear.

    To perform REM, your audiologist will place a tiny microphone into your ear canal. Sounds are then presented to the open ear to measure the effects of your open ear canal on the sound. Your hearing aid is then inserted and the exact same sound is measured again in your ear canal with the aid in place. The difference between the two measurements is called the real-ear insertion gain of your hearing aid. It tells your audiologist a great deal about the quality of the sound you are receiving.
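As a rough illustration, insertion gain is just the aided measurement minus the unaided one at each frequency. The numbers below are made up purely for the sketch:

```python
# Hypothetical real-ear unaided gain (REUG) and aided gain (REAG), in dB,
# at a few audiometric frequencies (Hz). All values are illustrative only.
reug = {250: 1, 500: 2, 1000: 3, 2000: 12, 4000: 14}
reag = {250: 5, 500: 10, 1000: 18, 2000: 30, 4000: 28}

# Real-ear insertion gain: aided minus unaided, frequency by frequency.
reig = {f: reag[f] - reug[f] for f in reug}
print(reig)  # {250: 4, 500: 8, 1000: 15, 2000: 18, 4000: 14}
```

That resulting curve is what gets compared against a prescriptive target such as NAL-R.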

    Once the real-ear insertion gain of your hearing aid is known, it can be compared to widely used amplification prescriptions, such as the National Acoustic Laboratories-Revised (NAL-R) prescription, the Libby prescription, or the Prescription of Gain and Output (POGO). If there are differences between the prescribed response and the measured response of the aid, your audiologist can make the necessary adjustments and take new measurements to fine-tune your hearing aid to provide the best possible sound and, therefore, the most possible benefit.

SOURCE: http://www.audioconsult.com/rem.html

I would encourage other Audis in the forum to comment on this…


TiffanySQUIRT

Do all audiologists have the equipment to perform Real Ear Measurements? Are they all fully trained to adjust the hearing aids correctly according to REM?

Most should. Unfortunately, only about 40% of dispensers/audis do.
I would say all pediatric audis do, as it is a well-recognized standard with children.

Validation SHOULD be an integral part of fitting. The eyeglasses analogy should clarify… Say you get a prescription from your eye MD, you go to the optical store, and you get the glasses. How do you know that what the doctor prescribed is what you are actually getting? For this purpose, most optical shops/eye doctors use a lensmeter to verify…

In a similar way, real ear should be used to verify that the gain prescribed by the audi is what you are actually getting. Be aware that real ear measures what you are actually getting; what you see on a fitting screen is only what you should be getting…

Some of the time, real ear even saves time… when it comes to fine-tuning.
I do Real ear all the time with all my clients… This is something you should be asking for…


MAINE DISPENSARIES

Is REM something that only audiologists are able to do or should hearing aid specialists also be able to do it?

It is not rocket science, and the machines are not that expensive anyway.
I’m surprised it is not mandatory in the US. In Brazil, for example, it is now a compulsory procedure for all clients. GOOD FOR THEM.

While it is true that most fitting software can do a good job ON AVERAGE, there are individual deviations… Hence you want to do it…

I use REM to demonstrate how the instrument works in real time. Clients love to see this. One side benefit is that you can view your hearing loss, the input signal, and what you are actually getting in the ear.

“So, as you can see, Mr. ABC, this is your wife’s voice, this is why you can’t hear it, and this is the signal out of the hearing aid.” It is worth the added trouble…

I do it with every client… And over a period of time, I have been able to accurately fine-tune my aids…

The current data suggest that only 40% of dispensers use this procedure,
in spite of owning the equipment…


BIPOLAR DISORDER ADVICE

Can this be done with CIC’s?

Yes, it can be done with any instrument, CICs included. I completely agree with xbulder: the time it saves and the accuracy of fitting, as well as diagnosing patient problems, far outweigh the costs involved. I perform REM on every patient, usually multiple times.

Technically, you should even do it every time you change earmolds.

Do remember that the earmold has certain acoustical properties…

REM is the ultimate tool to see what it is that you are really getting…


DIGITAL EASY VAPE INSTRUCTIONS

Xbuilder, thank you so much for spending the time to educate us. If you have read my comments to date, you might gather that I care a great deal about how and what I am being fitted with. If we as hearing aid users were to take OWNERSHIP of learning more about hearing loss, the application of hearing aids, etc., maybe the expectations of users in general would force the industry to clean up its act and do something about “the 75% of all hearing aids fitted to first-time users ending up in the drawer!” Don’t get me wrong: I am not blaming the audio industry. I enjoy a VERY good relationship with my hearing aid fitter and audi of choice.:wink:

While I know those in-the-drawer numbers used to be really high when
I first started (15 years ago), I would tend to think they are not as high as they used to be… I’m wondering what those numbers are, as well as the return-for-credit rate? Anyone know?


WENDIE 99

Hi xbulder

When you do a REM, how do you account for the processing done by the HA? Digital instruments convert audio to digital, do lots of signal processing, then convert the digital back to audio with a predetermined amount of gain. It seems like there is a lot more going on than just so many dB of gain at a given frequency. Also, are analog aids tested differently than digital aids?

I second withears’ comment, thanks for helping educate lay people. :slight_smile:

Dag

You are correct that many of today’s digital aids are doing more than older technology, but the bottom line with REM is: a known signal in… and measure what comes out. So what is different today, compared with older analog technology and REM?

two main things:

  1. Non-linearity, that is, most aids today handle soft sounds differently than loud sounds, thus different amounts of gain are applied. For REM, this means that measurements should be made for different inputs. The way I always taught students was to measure soft, average and loud, so that we know how the non-linearity of the aid is functioning.

  2. Noise reduction: With older hearing aids, we could just put in a noise signal and measure the output. Now, in order to measure it accurately, we must “trick” the hearing aid. That is, noise signals previously used for REM can be processed differently by the hearing aid than speech, resulting in unreliable measurements. Instead, today’s systems can use speech or ICRA noise (a noise that is temporally fluctuating like speech) to show how the hearing aids are working for real-world inputs.

So… to answer your question… yes, digital aids, especially those with noise reduction have to be tested differently, but REMs are still the best and most accurate way to know what hearing aids are really doing in the ear. Hope this makes sense.
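The level-dependence in point 1 can be sketched with a few hypothetical numbers: measure the ear-canal output at soft, average, and loud inputs and compare the gain at each level. All figures below are invented for the illustration.

```python
# Illustrative input levels and measured ear-canal outputs (dB SPL) for a
# compression hearing aid; a linear aid would show the same gain at all three.
inputs = {"soft": 50, "average": 65, "loud": 80}
outputs = {"soft": 85, "average": 95, "loud": 102}

# Gain at each level: output minus input, in dB.
gain = {level: outputs[level] - inputs[level] for level in inputs}
print(gain)  # {'soft': 35, 'average': 30, 'loud': 22}

# Compression ratio between average and loud: input change / output change.
cr = (inputs["loud"] - inputs["average"]) / (outputs["loud"] - outputs["average"])
print(round(cr, 2))  # 2.14
```

A single-level measurement would have shown only one of those three gain figures, which is exactly why docg teaches measuring soft, average, and loud.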

An interesting point here is the AVS:
this REM system does not fit to a fitting curve, but instead
uses soft, average, and loud sounds to verify and ensure audibility.

One of the things I use REM for is to show noise reduction: I turn noise reduction off,
run REM, then turn it on and run it again,
so the clients can see what the aid actually does.

While some hearing aids have a live or demo function,
most don’t…

So it is helpful.


Iolite portable vaporizer

Hi docg and xbulder

Thanks for the information. Noise reduction is the hard part. With such a complex waveform, trying to define a “standard” for processing real-world noise would be really difficult. Would a REM system be able to extract ICRA noise from something like restaurant noise? In other words, make a sample recording of restaurant noise, then add a predefined level of ICRA noise. Play that combined signal into the HA and measure the output. Could the REM machine determine the signal-to-noise ratio (ICRA to restaurant noise) improvement? It would seem that some form of signature analysis could be performed to extract the processed ICRA signal. In this case, the ICRA noise would be equivalent to a voice in the restaurant noise. Does that make any sense? :slight_smile: Thanks again for giving me an opportunity to ask dumb questions.

Dag

I might be wrong, so I would ask the other HIS or audis to correct me.
Generally, you use REM to verify audibility, meaning you can use different types of signals (soft, conversational, and loud) to verify the dynamic characteristics of the hearing aid.

However, most users verify how well the instrument matches the fitting algorithm. There are two algorithms which are quite popular: NAL and DSL.

To my knowledge, each formula uses a different signal type. I believe ICRA is used by DSL (I’m not really sure)… This type of signal is designed so that features such as noise reduction and the feedback canceler don’t get in the way…

If you want to simulate how well you could hear in a noisy environment with, for example, your wife’s voice, you could use something like the AVS.


GN SERIES

Xbulder is correct. The primary purpose of REMs is to verify frequency response characteristics and audibility relative to a prescriptive target (think hearing loss prescription here). DSL and NAL, or NAL-NL1, are probably the most widely used. Different systems have different inputs available. I use the Verifit, by Audioscan, and it allows recorded speech at various levels, ICRA noise, pink noise, and even live speech, like my voice or the patient’s spouse/family member. By verifying audibility for a known signal in quiet, you can make (based on the articulation index) some pretty good estimates of how the individual will perform in other environments, background noises, etc.
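The articulation-index idea mentioned above can be caricatured in a few lines: weight each frequency band by its importance for speech, then add up the weights of the bands where the amplified speech clears the listener’s threshold. The weights and levels here are invented for the sketch, not taken from any published index.

```python
# band: (importance weight in %, aided speech level dB HL, threshold dB HL)
bands = {
    500:  (15, 45, 40),
    1000: (25, 50, 45),
    2000: (35, 48, 55),  # speech below threshold here: contributes nothing
    4000: (25, 40, 60),  # same
}

# Sum the weights of the bands where amplified speech is audible.
audibility = sum(w for (w, speech, thresh) in bands.values() if speech > thresh)
print(audibility)  # 40 -> roughly 40% of the weighted speech cues are audible
```

In this toy fitting, more gain at 2 kHz and 4 kHz would be the obvious way to raise the score.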

In terms of verifying the noise reduction and/or signal-to-noise ratio improvement, most systems are not set up to do this. In fact, most hearing aid processing does not significantly affect the signal-to-noise ratio for real-world noises like restaurant noise. Digital signal processing can fairly easily reduce steady-state noises like machinery, car noise, fans, etc. But digital hearing aid processing is not capable of separating speech from a background noise that is speech as well, like in a restaurant.

Directional microphones can significantly improve the SNR if the speech and noise are spatially separated and the environment is not too reverberant. Directional microphone function can be verified using REM by performing a front-to-back ratio measurement.
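That front-to-back ratio is a simple subtraction: present the same signal from the front and from behind, and take the difference in measured output per frequency. The values below are illustrative only.

```python
# Measured ear-canal output (dB SPL) for the same signal presented from
# the front and from behind the listener; all numbers are made up.
front = {500: 90, 1000: 92, 2000: 95}
back = {500: 84, 1000: 82, 2000: 83}

# Front-to-back ratio in dB: bigger means stronger directionality.
fbr = {f: front[f] - back[f] for f in front}
print(fbr)  # {500: 6, 1000: 10, 2000: 12}
```

An omnidirectional aid would show a ratio near 0 dB across the board.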

DAG- There are volumes written on digital signal processing in background noise with hearing aids, and there are about as many opinions on the best way to do it as there are hearing aid manufacturers. :smiley: Each one will argue that their method is the best. Having done some research in this field, I have a bias too, and mine is that to truly improve the SNR, you need to be using directional microphones. Digital signal processing can only improve the SNR for a limited range of sounds, most of which are not real-world. Digital signal processing can, in my opinion, do some great things to make sounds and environments more comfortable, so I am certainly not saying it ain’t valuable. I have probably opened a can of worms I shouldn’t have here. Let me know if this makes any sense. If not, I’ll quit ranting.:wink:

Hi docg (and xbulder)

Thanks for taking the time to educate a newbie. Clearly there is more material available than I can get through in my lifetime, and I do appreciate your help in giving me some of the bare essentials. The advent of digital HAs must have turned the entire audio testing industry upside down! Now, instead of a nice straightforward programmable filter, the processor and DSP introduce non-linear processing, and that processing can be different, for the exact same signal of interest (usually speech), processed in a slightly different noise environment. :eek: Speech in quiet gets processed differently than speech in noise. It’s gotta be tough to quantitatively measure the performance. As I think you’ve both said, trust your ears!

I’m an electronics engineer by profession, so I can understand some of the details. Can you recommend any websites that can give me a good overview of the signal processing involved? I’d also be interested in the technical details of the hardware. I’d really like to see some good close-up photos of the electronics inside a micro BTE instrument. Where can I look?

FYI, I’m going in this afternoon to get my first adjustment to the initial fitting. Things have gone really well so far, but noisy restaurants are still a challenge.

Thanks again for all the time you both put in on this forum. Have a great day!

Dag

Off the top of my head,
I think Frye has some info on real ear…
There are some papers lying around…
As far as pictures of electronics, I must admit
I have seen some in the service and repair manuals,
but these seldom get into the hands of the public,
or even dispensers. I believe some wholesalers do have
such material.


Chinese Milf

As far as papers… Google “probe microphone measurements.” There are several good ones by Gus Mueller that are written for the average clinician, so they are not extremely advanced. In terms of technical drawings of HAs, you can get block diagrams on most manufacturers’ websites, but they are very elementary. For the most part, though, nowadays, all of the work goes on in the chip, which is usually just labelled “DSP.”

Basically, a digital hearing aid is a mic, pre-amp, anti-aliasing filter (which is sometimes just the natural roll-off itself), A/D converter, “DSP,” D/A, anti-imaging filter, and receiver. Most receivers and mics have a frequency response from 200-6000 Hz, so they can be used as the filters most of the time. There are several different sampling methods also: traditional, high-resolution sampling rates, or sigma-delta modulation sampling (which is one bit of resolution at a very high rate). Probably more than you want to know, and if I say much more, I will really show my ignorance (or at least a lot of what I’ve forgotten). Most of the technical papers on hearing aids are in IEEE journals, not hearing and audiology journals. In fact, a lot of the DSP stuff used in HAs is also used in cell phones and other digital systems. If you really want to get technical, I’d start looking in some of the engineering journals. I’m too stupid to understand most of that stuff.
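That chain can be sketched as a toy model, assuming made-up parameters (a flat 4x gain standing in for the whole “DSP” stage, a 16-bit A/D, and a hard clamp standing in for the receiver’s output limit):

```python
import math

FS = 16000   # sampling rate, Hz (assumed for the sketch)
BITS = 16    # A/D resolution (assumed)
GAIN = 4.0   # flat "DSP" gain; real aids apply frequency-dependent, compressive gain

def adc(x):
    """Quantize a sample in [-1, 1] to BITS of resolution (the A/D step)."""
    levels = 2 ** (BITS - 1)
    return round(x * levels) / levels

def process(samples):
    """Mic samples in, amplified samples out; the clamp models the output limit."""
    return [max(-1.0, min(1.0, adc(s) * GAIN)) for s in samples]

# One cycle of a quiet 1 kHz tone, sampled at FS.
tone = [0.1 * math.sin(2 * math.pi * 1000 * n / FS) for n in range(16)]
out = process(tone)
print(round(max(out), 3))  # 0.4 -> the 0.1 peak amplified by 4x (~12 dB)
```

Everything interesting in a real aid happens inside `process`: band-splitting, compression, noise reduction, feedback cancellation; this skeleton only shows where those stages sit in the chain.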

However, to bring it all back around to the real-world. There have been tons of papers written on how to best amplify speech, limit noise, and maximize hearing aid performance. I had to read many of them back in the day. Here are the take home points from many years of higher education in hearing aids:

  1. Speech sounds MUST be audible- whether analog or digital, if it ain’t audible, you ain’t gonna understand (hence the importance of probe mic or REM)

  2. Once we make speech sounds audible, adjustments to the gain and frequency response may significantly affect comfort, but as long as the same amount of information is audible, subjects perform about the same on speech understanding tasks.

  3. Signal processing doesn’t do a whole lot to change the signal-to-noise ratio for real-world noises. All of the studies on this show, at best, minimal improvements IN INTELLIGIBILITY for real-world situations. Not to say that there are not significant improvements in comfort and ease of listening.

  4. Directional microphones always win in background noise (except for extremely reverberant environments).

Hi docg

I will definitely spend some time with Google. I hadn’t thought of looking for IEEE papers. :o A couple of the guys that work for me are members. I’ll take a look and see what I can find. Obviously the heart of just about any digital HA is the software running on the processor. I had wondered whether there were hardware DSP circuits, or if it was all done in software, but after seeing a latency of something like 15 ms, it was obvious they were using software. Which makes sense. Hardware DSP circuits would be much faster, but would also use MUCH more power.

So far, from what I read and from what I hear, trying to improve the signal-to-noise ratio is pretty much a lost cause. Directional mics help the most, but the noise environment is simply too complex to be able to recognize the signal of interest and filter it from the background. For me, if I could leave the ambient sound alone and just increase the directionality of the microphones, I think I’d be better off. So far, the more aggressive setting for noise canceling isn’t helping much, and actually makes most voices (even in quiet) “buzzy”. I suspect increased latency from doing more DSP in software. Just a guess.

I know that what I’ve learned so far is just barely scratching the surface, so I guess I’ve got something to keep me occupied for a while. :slight_smile:

Thanks again. :slight_smile:

Dag