Very successful non-standard DIY fitting strategy

I’m new to DIY hearing aid programming, and my first experience with it has been dramatic. I have moderate high-frequency hearing loss, and the quality of sound I am getting now is far better than what I’ve had for the last 4 years, since I started wearing HAs on a regular basis. I used a different (unique?) fitting protocol, and I’d like to share it with others here.

My HA software (Starkey Inspire) has a Fine Tuning section where you can specify the output volume for a dozen different frequency bands (200 Hz to 8000 Hz) at any of 3 input volume levels (soft, moderate, and loud). The control looks just like a traditional EQ device with sliders you can move up and down. I cleared out all the existing settings (noise correction, automatic learning, etc.) and took each slider down to zero for all input levels. I set the compression to 1:1. In a sound editing program (Sony Sound Forge Pro), I created a .WAV file with a sequence of tones (each 3 seconds long) corresponding to the frequency bands on the EQ control. Then, with my hearing aids connected to the programmer and wearing a good quality set of headphones (Bose Noise Cancelling), I played the tones. From hearing-test software I had used previously, I knew that 200 Hz, for me, had the least loss of all the frequency bands, so I used that as my baseline and set the playback volume of my headphones to a moderate level. For each subsequent tone (going from low to high), I would adjust the corresponding slider until the tone had the same perceived volume as the previous one. After I was done, I set the compression to maximum for all frequencies and input levels (3:1) and set the attack/release speeds to the maximum allowed.
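For anyone who wants to try the same thing without Sound Forge, the tone-sequence file described above is easy to generate with a short script. This is only a sketch using Python’s standard library; the band frequencies listed are my guess at a typical 12-band layout, not Starkey’s actual band centers, so substitute the bands your own fitting software shows.

```python
import math
import struct
import wave

# Guessed 12-band layout (200 Hz to 8 kHz); substitute the band centers
# your own fitting software actually shows.
BANDS_HZ = [200, 350, 500, 750, 1000, 1500, 2000, 3000, 4000, 5000, 6000, 8000]

RATE = 44100        # sample rate (samples/second)
TONE_SECONDS = 3    # each tone lasts 3 seconds, as in the post
AMPLITUDE = 0.5     # stay well below clipping

def make_tone_sequence(path="eq_tones.wav"):
    """Write one mono 16-bit WAV file containing the tones back to back."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)      # mono
        w.setsampwidth(2)      # 16-bit PCM
        w.setframerate(RATE)
        for freq in BANDS_HZ:
            frames = bytearray()
            n = RATE * TONE_SECONDS
            for i in range(n):
                t = i / RATE
                # 50 ms fade in/out so tones don't click at the seams.
                fade = min(1.0, t / 0.05, (TONE_SECONDS - t) / 0.05)
                v = AMPLITUDE * fade * math.sin(2 * math.pi * freq * t)
                frames += struct.pack("<h", int(v * 32767))
            w.writeframes(bytes(frames))

make_tone_sequence()
```

Play the resulting file through the same headphones at a fixed volume and match each tone to the previous one, as described above.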

When I finished, my hearing aids sounded FAR better. I watched a one-hour TV drama through a standard loudspeaker and didn’t miss a single word. Before, I would need to wear headphones and would still miss a lot of words (sometimes I could never understand them even on repeated playback). The sound quality of my 2006 Volvo’s stereo had been “degrading,” and I was just about to buy a new unit to replace it. Now, with my HAs adjusted, it sounds beautiful. The bass and treble controls sound best at or near their center positions most of the time.

For some reason (probably the traditional, convoluted fitting scheme), both audiologists had greatly over-amplified the mid-range frequencies and under-amplified the high end. I had been setting the bass and treble of my car stereo all the way up and even bought a subwoofer to try to get a more balanced, less distorted sound. Now I see that I was trying to compensate for an over-amplified midrange. For non-music sounds (just walking around in life), the volume of everything is much lower than before (with the over-amplified mid-range cut back and the compression set to maximum). The world is quieter, but the detail (high-frequency information) is still there. The whole thing feels “lighter” and is just much more pleasant and closer to how I remember normal hearing.

I still have more tweaking to do (setting the EQ for soft, moderate, and loud input levels separately and experimenting with different compression strategies), but I had to share this with everyone here. My situation cannot be that unique.


Interesting post. When you first said “compression to 1:1” I thought “oh dear…”, but luckily for your ears, you changed that later on.

Your approach is based on loudness perception; this is a valid approach that some audiologists use, too.

However, you are very lucky to have gotten a result that works for you without calibrated equipment.

The fitting by your audi: which fitting rationale was used? NAL-NL1 (the old one) uses too much mid-range, in my opinion, and so does DSL i/o. NAL-NL2 gives more gain in the high frequencies and more compression, so it should be closer to what you have now.

Compression 3:1 is very high. I doubt that you can understand very much in a noisy space like a restaurant.

If you take things seriously, go to an audiologist and have your settings checked with real-ear measurement. Most important is that you are not over-amplified in some range (which I doubt, as you use so much compression). Also, you could compare your settings to a real-ear verified NAL-NL2 fitting.

But after all, you seem to get along with your fitting better than with the old one, so: congratulations!

Are your hearing aids in the ear or behind the ear? I can use headphones, too, but I’d never trust this for a measurement, because my BTE-style aids don’t get that much sound from my headphones.



Quick follow-up: While the compression is set at 3:1 across all frequency bands, the knee-point kicks in at a 24 dB input level. Starkey gave me a choice of several knee-point settings, and that one (alone) produced sufficient volume. So I’m only using a 3:1 ratio above 24 dB; below that level the ratio is 1:1. And BTW, I’m using in-the-ear HAs, so headphones seem reasonable, though sitting between two accurate studio monitors would seem to be just as good.
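For the curious, the knee-point behavior described here (linear up to the knee, 3:1 above it) can be sketched as a simple static input/output curve. This is a generic textbook compressor model plugging in the numbers from the post, not Starkey’s actual implementation:

```python
KNEE_DB = 24.0   # knee-point input level from the post
RATIO = 3.0      # 3:1 compression above the knee

def output_level_db(input_db, knee_db=KNEE_DB, ratio=RATIO):
    """Static compressor curve: 1:1 below the knee, ratio:1 above it."""
    if input_db <= knee_db:
        return input_db                                # linear region
    return knee_db + (input_db - knee_db) / ratio      # compressed region

# Every 3 dB of input above the knee adds only 1 dB of output:
print(output_level_db(20.0))   # 20.0 (below the knee, untouched)
print(output_level_db(60.0))   # 36.0 (24 + 36/3)
```

So soft sounds pass through unchanged while loud sounds are squeezed hard, which matches the “quieter world, detail still there” effect described earlier in the thread.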

The fitting formula I’m using is NAL-R, which is how my HAs were programmed initially. When I positioned the “Fine Tuning” sliders at the bottoms of their range, I had zero amplification. By setting the gain one band at a time, I set the overall amplification myself. In doing this, I was trying to override any assumptions introduced by my audiogram and the fitting formula. It didn’t completely work out that way, because as I increased the volume in each frequency band, I was “turning over” control to the fitting formula and audiogram. I know that changing the fitting formula after I made my adjustments caused the HAs’ sound to change considerably.

It would be nice if they had a low-level programming option that did not rely on a patient’s audiogram or on a particular fitting formula. When it’s possible to make all the adjustments in real time in response to patient feedback regarding things like tone level (as I’ve done), speech intelligibility (as I plan to do), or music enjoyment (perhaps by genre! as I’m tempted to do), there is no need to build everything around an audiogram or a predetermined fitting algorithm. If anyone knows of a way to accomplish that, I’d really appreciate hearing about it.

Good evening, Trevorman. As a long-time semi-pro sound/recording guy, this method appeals to me. I have a home recording studio. My mix room sports a pair of vintage 12" 3-way JBLs and a wonderful device called a dbx DriveRack Studio. Among other things, this does an RTA EQ on the room, providing a flat response at my listening seat. Seems like this would be ideal for fitting BTE instruments.
Wonder if you have an update on your project. Thanks for the thought provoking post. Dan in Kansas City

Your method is really interesting. It sounds to me like you’re inventing a new fitting rule of your own!

But I don’t understand the sentence above (or maybe I’m not understanding your technique). Up until that sentence, I had the impression you were increasing the levels to match up with your perception. I don’t see how either an outside fitting rule or your audiogram would come into this.

I’m new to DIY programming and just doing initial research. I have Starkey Wi110s. I believe I can download the Inspire software for free. I’m not quite sure about connecting the HAs to the software. I see there’s a Starkey SurfLink wireless programming device for quite a bit of money. Is there a better/cheaper way to program these HAs?

Hi, allenmoretsky. I think you’ve posted this same question elsewhere, and it’s off-topic in this thread at least. There are a number of threads concerned with just this topic – how to obtain software – and there’s also the Starkey forum under “Digital Hearing Aids.” Please don’t post indiscriminately.

The forum police have spoken… isn’t this post in the DIY forum?

Point taken.

Hey allenmoretsky.
Stop asking the same dumb-ass question throughout these forums. Do you know the first step of self programming? Have you completed the first step of self programming? If not then you need to read!

Whew! Things have been getting vicious around here lately.

Sorry allenmoretsky;
Try asking questions that are more specific, and ask them in your original thread in order to avoid thread-crapping. Meanwhile, I suggest that you click some links in my signature line below and read…

The first step of self-programming is…

Ok hidden content guy lmao…

Hidden Content is a link. Click it. It may say…
To view links or images in signatures your post count must be 10 or greater. You currently have 2 posts.

If it says,
To view links or images in signatures your post count must be 10 or greater. You currently have 0 posts.
Then you need to login.

btw> What is a ReSound Alta Pro?
“I don’t need to show you my audio, to show you my impairment…watch my eyes follow your lips…”

Starkey HALO 110 BTE RIC
ReSound Alta Pro BTE RIC <<
Starkey Soundlens 110 IIC

Update on my DIY programming: Since I posted, I upgraded from the Starkey SoundLens (original model) to the Starkey Synergy (their current SoundLens replacement). What a huge difference! Music especially. The Synergys have a different frequency band scheme than the HAs I described initially, and I didn’t repeat my .WAV file process (lazy); instead I copied, as best I could, the overall visible EQ arc from the fitting software screen. That gave me a pretty good starting point.

For my hearing loss situation (age-related high-frequency loss), I basically turned the highest frequencies up to maximum volume, keeping them just under clipping. With the Synergys (which, Starkey says, have a 4X faster processor), that’s where the extra processing muscle is easy to notice. I could still use more amplification at, say, 9.5 kHz, but I take it up to just under clipping. I use music to EQ my HAs, and for speech it’s acceptably good as well. I switch back and forth between the two so often that having a separate program for each would probably be more trouble than it’s worth. I do have to say, I have a very nice home system (RME digital-to-analog converter, several old Hafler amps, an 8-speaker studio monitor setup), and it now sounds best when I leave its EQ completely flat. This is the hi-fi Nirvana I have been searching for. (When I was young I couldn’t afford the gear, and when I got older my ears stopped working!)

Another thing: the HA self-EQ’ing process worked much better for me when I set my audiogram to flat. That way, the EQ sliders more accurately reflect the changes I have introduced. You can see my “audiogram” writ large on the EQ screen. I’m still stuck with having to choose a “fitting formula” (grr), but with a flat audiogram, I imagine I’m taking away most or all of its influence.
Also, in my first attempt at this back in April, I actually cut out too much of the midrange (I guess I was just sick of hearing it amplified after all those years). I didn’t realize this until I noticed that lead instruments (sax, trumpet, etc.) were too soft. Again, music to the rescue.

I think that everyone who wears HAs should explore self-programming. And the industry (and entrepreneurs?) should start exploring ways of making this an easy and natural thing to do.

Yes, I expressed that in a poor way. I was basically trying to make the point that I didn’t like having to deal with a fitting formula. I wanted to sit at the controls (my EQ screen) and have complete control over how much amplification I received for any given frequency. You’re correct, I was increasing the levels to match my perception, but in a goofy way because as I turned up the volume of a given band (raised it from zero), I wasn’t just increasing the volume by the amount shown by the EQ slider’s position (which is what I wanted to do), but I was, in effect, saying to the system, “I don’t want this band to have zero amplification”, at which point, the fitting formula would jump to attention and tell the system, “OK, he wants the amplification in this band to be modified from what I (the fitting formula) came up with by the selected amount.” So, while I was still able to adjust the volume to match my perception, the behavior of the sliders (the correlation between their position and what I was hearing) was being mediated by a 3rd party in some (to me) unknown way. I wanted direct control that I could see.
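In other words (my reading of the behavior described above, sketched with made-up numbers): the slider acts as an offset on top of whatever gain the fitting formula prescribes for that band, so the slider position only reads directly when the formula’s contribution is zero.

```python
# Made-up prescribed gains (dB) per band; the real numbers come from the
# fitting formula and the entered audiogram, not from anything shown here.
FORMULA_GAIN_DB = {200: 10, 1000: 25, 4000: 15}

def effective_gain_db(band_hz, slider_db, formula=FORMULA_GAIN_DB):
    """Model: the slider is an offset on the formula's prescription."""
    return formula.get(band_hz, 0) + slider_db

# With the formula active, a +5 dB slider at 1 kHz is really 30 dB of gain:
print(effective_gain_db(1000, 5))                 # 30
# Zero out the formula's contribution and the slider reads directly:
print(effective_gain_db(1000, 5, formula={}))     # 5
```

Under this model, the slider positions only map one-to-one onto actual gain when the formula’s per-band prescription is all zeros.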

That answers your question, I hope. But since my initial experiment, I figured out that by setting my audiogram itself to flat (something I discovered by accident was possible), I could likely reduce, if not eliminate, the fitting formula’s influence altogether and thereby get the “programming situation” I was after.

I have read up a bit on the whole NAL scheme (Australia of all places!), and I have to say, I am not impressed. Did they ever put a HA wearer into an actual listening situation (music or speech) and let the user empirically determine their preferences? It seems to me that the whole hearing aid fitting establishment is too top-down and paternalistic. I am living proof: I had “crappy” hearing aids for 6 years. Now I have great ones, and the main difference is due to a change in programming, not to the technology itself. People with hearing loss are being woefully underserved.

With respect, I think you are underestimating the non-linearities in the system. It isn’t just the fitting algorithm. In fact, my understanding is that fitting algorithms are expressly designed to address all of these non-linearities, so that your perception is as accurate and functional as possible.

Also, a factor to keep in mind is that fitting algorithms are, as I understand it, designed to strike a compromise between speech intelligibility and audio fidelity. This is a problem for a great many, if not most, of us. An audiologist/dispenser should be paying attention to your word recognition score as much as anything.

I don’t know about your situation, but my understanding is that SNHL is generally speaking accompanied – or caused! – by degradation in the sensory apparatus. There’s a lot going on here besides some careless audiology geeks coming up with random new fitting rules.

But YMMV: I’m not knocking your experience, but I do think that what you’re doing is coming up with what is effectively a new fitting rule. Which is great. But it’s not to say that the existing algorithms are either badly made, or ineffective.

Nor is this to criticize your DIY efforts. I’m a DIY-er as well, but not because of any particular critique of the scientific/engineering models. I think I simply get a better fine-tuning this way.

Ouch! “careless audiology geeks”? I could address your specific points in more detail, but I have a feeling it would be in vain.