Distortion of high notes listening to music

I’ll dare to jump in here.
Interesting idea to have dual mics, one for each side. But doesn’t it depend on how each mic’s signal is received and processed in the hearing aid(s)? If you can route one mic to one HA, then that’s an interesting idea.
I suspect that each mic’s signal is being shared with both HAs. But of course I could be wrong.
And corona1… interesting thought about losing directionality with the factory necklace products when using them as an external mic.

Uh oh. Replying to my own post. But in any case, unless there’s a direct Bluetooth connection to the aid, any of these earlier solutions rely on the telecoil for the last hop to the aid. And wow, telecoil bandwidth is the pits.

Telecoil tech study

I think it’s just the aftermarket neckloop necklaces that can only use the telecoil. I seem to think the factory necklaces use another transmission method. How else could they produce stereo? Telecoils aren’t stereo.

With all the advances in Bluetooth, I would think someone could come up with a direct link streaming stereo to the HAs so it would bypass mic overload. I do notice a lot more detail with the HAs, but the distortion of these high notes ruins it.

Maybe I should just use an EQ in the stereo system. Maybe I’ll hear the crickets/night scene at the beginning of Sun King!

Hi:

This thread is very, very interesting.

I have been using HAs for five years. The first pair I had was a Siemens Pure 701p.

I am very fond of music and hi-fi equipment. With the Siemens, I gave up on music. I was even depressed, because listening to music was very unpleasant, and I need music in my life.

Four months ago my audi offered to let me test a pair of Unitron MoxiFit Pro (Tempus platform).

The verdict: impressive for music. These HAs have seven different “environments”, and the most impressive characteristic is that the HAs are capable of detecting the environment automatically; I don’t know how it works. One of them is music (only on the Tpro model). They work perfectly with my Rotel amp and my Focal JM Lab loudspeakers.

I don’t notice any distortion.

By the way, do you think there is a best HA for music? I am very, very pleased with the Unitron.

I’m not sure, but I read that some of you are professional musicians.

Jpeinado


There’re crickets!? :slight_smile: Just kidding; fortunately I can hear them. I feel for those who can’t.
I don’t use any fancy audio equipment, because I likely wouldn’t detect the difference, but I do play FLACs from my computer. I mostly use headphones without the HAs when listening to music. I have a program on my computer that lets me adjust the EQ to a percentage of my audiogram for each side. It works very well for enjoying music.
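
A minimal sketch of what that kind of audiogram-based EQ amounts to; the poster’s actual software isn’t specified, so the bands, thresholds, and 50% factor below are purely illustrative:

```python
# Sketch of "EQ set to a percentage of my audiogram" -- all values here
# (bands, thresholds, 50% factor) are hypothetical, not the poster's setup.

AUDIOGRAM_DB_HL = {          # hearing thresholds per frequency (dB HL), example only
    250: 10, 500: 15, 1000: 25, 2000: 45, 4000: 60, 8000: 70,
}

def eq_gains(audiogram, fraction=0.5):
    """Boost each band by `fraction` of the measured loss at that frequency."""
    return {freq: fraction * loss for freq, loss in audiogram.items()}

for freq, gain in eq_gains(AUDIOGRAM_DB_HL).items():
    print(f"{freq:>5} Hz: +{gain:.1f} dB")       # e.g. 4000 Hz: +30.0 dB
```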

So it is! TIL it’s Near Field Magnetic Induction. I’m really trying to find out some specs here, in particular about audio bandwidth. I’m not really succeeding. Info re dynamic range is probably impossible to find(?)

But Siemens’ (i.e., now Signia) remote Bluetooth transmitter, the one that works with their EasyTek NFMI neckloop-thing, has an audio bandwidth of 7 kHz. Not exactly music-friendly. I’m not sure if that’s the bandwidth of the microphone/transmitter itself, of the Bluetooth link to the EasyTek, of the NFMI connection to the aids, or of the aids themselves.

There’s some Widex promotional info (in educational garb) that talks about how their proprietary system has amazing dynamic range. See below.

Siemens VoiceLink product info

(ReSound powerpoint) Hearing Aid Connectivity: Where have we been and where might we be going?

Widex powerpoint notes re wireless technologies

16-bit digital audio has a theoretical limit of about 96 dB of dynamic range. Some hearing aid models with only 16-bit processing do some clever tricks to try to exceed this theoretical limit. But if you pick a hearing aid model that has 24-bit processing or higher, at least this should help.
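
For reference, the arithmetic behind those numbers (an ideal N-bit converter gives roughly 6 dB of dynamic range per bit):

```python
# Theoretical dynamic range of an ideal N-bit quantizer: ~20*log10(2**N) dB.
import math

for bits in (16, 24):
    print(f"{bits}-bit: ~{20 * math.log10(2 ** bits):.0f} dB")
# 16-bit: ~96 dB, 24-bit: ~144 dB
```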

Although the dynamic range of your impaired hearing is reduced (that’s what WDRC, Wide Dynamic Range Compression, tries to address by fitting the normal dynamic range into your reduced dynamic range), the issue here is the upper limit of the input dynamic range; it really doesn’t have anything to do with your more limited hearing range.
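
A toy sketch of what a WDRC input/output curve does; the knee point, ratio, and gain below are made-up illustration values, not anyone’s fitting prescription:

```python
# Static WDRC-style curve: a wide input range is mapped into a narrower output
# range. Knee, ratio, and gain are arbitrary illustration values.

def wdrc_output_db(input_db, knee_db=45.0, ratio=3.0, linear_gain_db=20.0):
    """Linear gain below the knee; above it, output rises 1/ratio dB per input dB."""
    if input_db <= knee_db:
        return input_db + linear_gain_db
    return knee_db + linear_gain_db + (input_db - knee_db) / ratio

for level in (30, 45, 60, 75, 90, 105):
    print(f"in {level:>3} dB SPL -> out {wdrc_output_db(level):.0f} dB SPL")
# A 75 dB input span (30-105) ends up spanning only 35 dB at the output.
```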

So yes, this does make a difference in the real world, because if it’s garbage in, it’s garbage out. A higher/wider dynamic range for the input mics will result in more natural, clearer sound, just the way it comes into the mics. Even if this sound gets compressed afterward to fit into your more limited range, it remains clear and natural, because that’s what came in.

On the other hand, if the input mics do not have a wide enough dynamic range and/or processing resolution to capture the input sound, and it has to be clipped or severely compressed coming in to fit the limited input dynamic range, then the sound becomes garbage/distorted. Once the source of the sound is distorted, no amount of processing can un-distort it. Even if translated/compressed into your more limited hearing dynamic range afterward, it remains distorted, and that’s what you hear.
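
A toy numerical illustration of that point, with arbitrary levels rather than any real hearing aid’s input stage:

```python
# Once the input stage clips, later level changes cannot restore the waveform.
import numpy as np

fs = 16_000
t = np.arange(fs // 100) / fs                 # 10 ms of samples
tone = np.sin(2 * np.pi * 1_000 * t)          # 1 kHz test tone

loud = 3.0 * tone                             # a loud musical peak at the mic
clipped = np.clip(loud, -1.0, 1.0)            # narrow input range flattens the peaks
restored = 3.0 * clipped                      # raising the level back up later doesn't help

print(f"max error vs. the original: {np.max(np.abs(restored - loud)):.1f}")
# prints about 1.9 -- the shape of the clipped peaks is gone for good
```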

Regarding whether the feedback manager can really distort the sound or not, it doesn’t hurt to try to turn it off to see if it helps.

But generally, most feedback managers employ three principal strategies to reduce feedback: 1) a phase change, 2) a small frequency shift (usually around 10 Hz), and 3) possibly a gain reduction at specific resonant frequencies found by the feedback analyzer. The frequency shift should generally be the only one that may introduce some distortion, in the form of warbling/fluttering when listening to single-tone sounds. The phase change and the gain reduction shouldn’t introduce distortion per se.
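
For the curious, here is a generic sketch of strategy 2, a small frequency shift, done by single-sideband modulation of the analytic signal; real hearing aids implement this in their own proprietary ways, and the 10 Hz figure is just the typical value mentioned above:

```python
# Generic small-frequency-shift sketch (single-sideband modulation), not any
# manufacturer's actual feedback manager.
import numpy as np
from scipy.signal import hilbert

def frequency_shift(x, shift_hz, fs):
    """Shift every spectral component of x up by shift_hz."""
    t = np.arange(len(x)) / fs
    analytic = hilbert(x)                            # complex analytic signal
    return np.real(analytic * np.exp(2j * np.pi * shift_hz * t))

fs = 16_000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1_000 * t)                 # steady 1 kHz input
shifted = frequency_shift(tone, 10.0, fs)            # comes back out near 1010 Hz
# Played against the steady original, the 10 Hz offset is heard as a slow beat --
# the warbling/fluttering on single tones described above.
```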


Okay well here’s some evidence that definitely supports your concern about input dynamic range, even with speech. Interesting that it appears over the name of a company that makes transducers for hearing aids.

Evaluating Hearing Aid Processing at High and Very High Input Levels

Totally geeking out on the Knowles Electronics site. Searching for filetype:pdf yields some interesting stuff on hearing aid microphones. I can find lots on frequency response, but nothing on dynamic range :frowning:

Corona1,

I will take you up on that Connexx offer. Curious to see what you have done. As you can see from my audiogram, my highs are gone. I use frequency compression on the basic channel and it helps, but it takes a lot of experimenting to get it to work. For the Universal program, it is all about speech. My Music program is pretty straightforward, with some playing around with the hot spots. philiplewis@att.net

I think I’m backtracking on my tentative agreement re dynamic range. Someone – @volusiano maybe? – has remarked here on the 96 dB limitation of 16 bit sampling – and I actually think the limitation we are talking about is maximum SPL. In other words, the maximum absolute “loudness” that the microphone will handle without distortion.

Partly because of that 96 dB limitation, but mainly because of the expansion that gets applied in the processing in the hearing aids. Everyone talks about frequency-dependent level compression, but I think we miss the fact that there’s a lot of expansion applied to low-level sounds.

The compression that sensorineural hearing loss seems to engender – not least because of recruitment – means, I think, that the dynamic range of our hearing is greatly compromised. So we’re not trying to get 110 dB (or whatever) through a linear chain. We’re trying to get 110 dB into, what, 60 or 70 dB by the time it gets to your brain?

“Dynamic range” is, essentially, signal-to-noise ratio. And given how bad the final processing is – between our eardrums and our brains – noise floor isn’t really an issue. So the aids can expand the s#!t out of lower input levels (and they do).

Maybe dynamic range matters because the microphones also have to handle low-level, low-crest-factor signals like speech? Maybe this is the point that I’m missing. If the microphone needs to be able to handle, say, 100 dB SPL for music peaks (or more?), then for low-level signals like speech, the effective SNR is going to be low (because of all the headroom that needs to be there for music peaks).
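
To make that headroom trade-off concrete, a quick back-of-envelope with assumed numbers (none of these are measured hearing aid specs):

```python
# Back-of-envelope for the headroom/SNR trade-off above. All numbers are
# illustrative assumptions.
ceiling_db_spl = 100        # assumed maximum undistorted input level (music peaks)
dynamic_range_db = 96       # assumed 16-bit-style input dynamic range
speech_db_spl = 65          # typical conversational speech level

noise_floor = ceiling_db_spl - dynamic_range_db      # 4 dB SPL
speech_snr = speech_db_spl - noise_floor             # 61 dB

print(f"noise floor ~ {noise_floor} dB SPL, speech SNR ~ {speech_snr} dB")
# Every extra dB of headroom reserved for peaks comes straight off that speech SNR.
```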

I really apologize for thinking out loud. I’m enjoying this conversation. I spent many years in the pursuit of linearity – haha, except for the “loudness wars” – so dealing with these wildly non-linear systems is really a puzzle.

Corona,
I’ll share a little more of my “understanding.” A 16-bit, 96 dB dynamic range is pretty typical for a lot of hearing aids. Bernafon had a reputation for being good with music (I don’t know what the case is with their new Zerena model; considering its similarity to the Oticon OPN, it may also now have 24 bit) because it made better use of its 96 dB dynamic range by setting it to 15 dB to 111 dB instead of the more common 0-96. I don’t have a background in sound equipment and acoustics, but it sort of makes sense to me. I totally agree with you that most hearing aid users have really crappy dynamic range. I brought up this same question on a thread sometime, along the lines of “Considering what crappy dynamic range we have, why does it matter for the hearing aid?” The answer that made sense to me is the microphones cutting out if the dynamic range was exceeded.
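
Taking the figures in this post at face value, a quick check shows both windows are the same width; raising the floor just moves the undistorted ceiling up:

```python
# Both quoted input windows span 96 dB; the second trades the very quietest
# sounds for more undistorted headroom at the top. Figures are from the post above.
for floor, ceiling in [(0, 96), (15, 111)]:
    print(f"{floor:>2}-{ceiling} dB SPL window: {ceiling - floor} dB wide, "
          f"clips above {ceiling} dB SPL")
```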

How do you do your own tuning?

Hi shacky,
I had a similar problem when I got my Phonak Audeo B70 HAs: acoustic guitar notes, especially the treble strings, sounded “buzzy”, or “shattered”, not pure tones. It took a while to figure it out, but the main problem was that the “SoundRecover” processing was introducing a digital delay in the high-frequency tones that were shifted down to the mid-range, and when they were combined with the original signal the result was a fuzzy sound. The solution was to create a separate music program with all the bells and whistles turned off, just the simplest possible sound amplification, and it’s working fine for me - playing guitar is fun again, like it once was. I hope this helps.
John

Good point here. Disable any kind of frequency lowering feature your hearing aid may have if you notice funny distortion when listening to music. It may not be THE cause, but it’s definitely a factor you should eliminate.

Hi, I’m new to this site. I have just been issued a Phonak Nathos S+ on the NHS and, like yourself, am a guitar player. I am experiencing the same issues as you had. My question is: what does it sound like if you just left your hearing aid on the music program and didn’t change it? Does everything else, like speech or using the phone, sound any different? I am asking because the last two digital hearing aids I had - the Viennatone Newtone and the Oticon Spirit 3, my current one - had no problems at all in this area. The Oticon Spirit 3, I am told, is obsolete, so I am concerned about support.
Jim

Good question. Actually I often forget to cancel the music program after playing guitar and go for quite a while before remembering to switch back to automatic. I don’t notice any difference in sound quality with speech or telephone, but when I do switch back I get a slightly improved sense of three-dimensionality, or at least I think I do; nothing dramatic, but very slightly “better”. That may be the result of the special features being added back into the mix by the automatic program.
John


Frequency lowering is primarily helpful with speech, specifically for sounds like “s” and “sh” and other fricatives that have their presence in the high frequencies. It helps make those sounds clearer and more easily understandable by transposing them to the lower frequencies, where the hearing loss is not as bad for a common ski-slope-type loss.

I wear OPN hearing aids, and their frequency lowering technology is called Speech Rescue for that very reason. I find it very helpful for speech because I can now hear the “s” and “sh” sounds much more clearly. I can now also hear cricket sounds, birds chirping, and high, soft digital tones from appliances much more easily.

With the OPN, everything sounds natural to me because the original high-frequency sounds are preserved as-is and not replaced by the lowered sounds. It uses a unique “composition” method where the lowered sounds are added to, rather than replacing, the original high sounds.
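
To illustrate only the general “copy a high band down and mix it in” idea, here is a crude FFT sketch with made-up band edges; this is not Oticon’s actual Speech Rescue algorithm:

```python
# Crude frequency-lowering sketch: copy a high "source" band down to a lower
# destination and add it to the original signal. Band edges and mix level are
# arbitrary; NOT how Speech Rescue is actually implemented.
import numpy as np

def lower_band(x, fs, src=(6000.0, 8000.0), dst_start=3000.0, mix=0.5):
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)

    src_bins = np.where((freqs >= src[0]) & (freqs < src[1]))[0]
    dst_bins = np.searchsorted(freqs, dst_start) + np.arange(len(src_bins))

    lowered = np.zeros_like(spectrum)
    lowered[dst_bins] = spectrum[src_bins]            # transposed copy of the high band

    # "Composition": keep the original highs and add the lowered copy on top.
    return np.fft.irfft(spectrum + mix * lowered, n=len(x))

fs = 16_000
t = np.arange(fs) / fs
fricative_like = np.sin(2 * np.pi * 7_000 * t)        # energy only near 7 kHz
out = lower_band(fricative_like, fs)                   # now also audible near 4 kHz
```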

I don’t notice any obvious distortion when listening to music so I rarely need to switch to the music program for music listening. But then other folks may have a more discerning ear than me when it comes to music listening.

Have a look in the DIY forum:

https://forum.hearingtracker.com/t/how-to-program-your-hearing-aids-diy/31749

I’m also a retired audio professional, both a systems design engineer and a recording engineer, specializing in acoustic jazz. My hearing loss is a moderate high-frequency rolloff, and I sometimes wear a pair of the Etymotic Bean. They have two settings, both of which gently boost the highs, one about 10 dB more than the other. I use the lowest setting, which gives about 20 dB of boost. I like them for jazz and speech in my living room, but I take them out for live music in clubs. They have enough dynamic range processing that they never overload in my living room.

The mics used for good hearing aids should not overload, but the miniature transducers (earphones) CAN overload if they’re amplifying too much, and for most of us, it’s the highs. For those who need a lot more gain than I do, I strongly suggest that you have your provider program a “wide range” setting for you and make sure that your aids allow you to turn them way down. Unless you have REALLY severe loss, that should keep good aids out of distortion. Also, with severe high frequency loss, tell them not to try to push your response too high in frequency.

In my living room, my best music listening experience is with a set of Sony MDR7506 headphones. I set the tone controls flat for music; for speech, movies, etc., I turn down the bass and turn up the highs. When my hearing gets worse, I’ll use the speech settings for music.

Jim Brown