Oticon More adds disposable battery model and MyMusic program

The whole thing started when I was suggesting that @flashb1024 try to copy the gains from the OPN S Music program over to a program in the More, to mimic and maybe recreate the Music program on the More. Flash said he did try that, but even if he can mimic the gain values, the CR values don’t necessarily end up the same as those in the OPN S Music program. Ideally you want the CR values to match as well so that the recreation is exact.

So both Flash and I were talking about the CR strictly in the sense of how these values are set in Genie 2 for the Music program. But you chimed in that too much compression is not good for music. And there have been lots of similar comments in the past in various threads on this forum from people who want NO compression at all (along with no signal processing) when it comes to listening to music in a music program.

I guess the engineering part of me wants to clarify that no compression at all (in the hearing aid, engineering sense) is not necessarily the ideal approach, as most people seem to think. The other part is that it got me curious about how the CR values differ between the original Music and the MyMusic program. So that’s why I tried to do an A/B comparison to find out.

Through this process, I guess I’ve learned that compression to you (and other folks who prefer little to no compression for music) is really not the same as compression as it applies to hearing aids. Well, technically I think it’s the same thing, but it’s applied in different ways for different purposes. Nevertheless, maybe it was helpful that this discussion sheds some light on the differences in how we see and talk about compression, for music and for hearing aids.

The bottom line is that you do want and need some amount of compression in the engineering sense in hearing aids when the input volumes get loud enough, even for music. Otherwise, you probably couldn’t tolerate the loud passages because they’d be too loud for you to enjoy.
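
Just to put some rough numbers on that, below is a little Python sketch of a single compression knee. The gain, knee point, and CR values are completely made up for illustration; they’re not from Genie 2 or any real fitting rationale.

```python
# Toy sketch of why some compression is needed once inputs get loud enough.
# All numbers here are hypothetical, not from any hearing aid fitting.

def output_level(input_db, gain_db=25.0, knee_db=65.0, cr=2.0):
    """Output level (dB) for a given input level (dB).

    Below the knee the full gain is added (1:1 slope). Above the knee,
    each extra dB of input only adds 1/cr dB of output, so loud passages
    stay in a tolerable range instead of reaching discomfort levels.
    """
    if input_db <= knee_db:
        return input_db + gain_db
    return knee_db + gain_db + (input_db - knee_db) / cr

for level in (50, 65, 80, 95):
    print(level, "dB in ->", round(output_level(level), 1), "dB out")
# With no compression (cr=1), a 95 dB musical peak would come out at 120 dB;
# with cr=2 it comes out at 105 dB instead.
```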

I edited my previous post to add the paragraph below, which explains why compression is used in hearing aids, but probably after you had already read the original post without it. It should answer your question of "How (the compression in) your chart affect the sound as heard by the HA wearer?"

1 Like

That’s very complimentary. I think, in fact, that Oticon should take your suggestion, but recruit talent far superior to my own (if I have any!).

But just to be clear - I didn’t say I remembered how the music sounded before my hearing loss. I think I said something to the effect that the sound of Norah Jones’ singing is burned into my engrams. This is true, but the sound that’s “burned in” is what I heard through my AKG headphones, after EQing her tracks on my mixing board to compensate for my hearing loss.

That EQd sound, repeated hundreds of times over as I arranged a few of her tunes for solo, fingerstyle guitar, is what my brain remembers. It’s the best quality neural signal of her music that I could send to my brain, far better in quality than what a tiny HA receiver is capable of delivering.

That’s what I remember, and I remember it well because I memorized it. So when I listen to the same tracks through my More1s using the old Music program, I can get a sound in my ears that’s not as good as the sound through my AKG “cans”, but it’s close enough to be really pleasant. It is, IMO, a faithful rendition of what I heard while I was arranging her stuff.

Listening to the same tracks, through the same hearing aids, but using the MyMusic program, here are the differences I hear:

  1. an unpleasant “boominess” in the lower mids which I believe @flashb1024 described as distortion,
  2. a tin-cannyness in the upper mids that is, simultaneously, both loud and indistinct. This sound quality doesn’t convey the dynamics of the music well, at all,
  3. a round, pleasant bottom end that can be “tubby” when it needs to be more distinct, especially when reproducing a fretless bass line. It begins to sound like the bass from a big but cheap K-Mart boom box after a while, and
  4. an overall muffled, indistinct quality to the sound, even though it’s loud (mushiness might be a suitable descriptor).

That’s what I hear, in relation to my reference memories, which are of my board’s output, through a parametric EQ, into excellent AKG headphones that deliver a lot of high quality sound to my damaged hearing apparatus.

The difference between the two programs - other than the sound I hear - is that I can actually manipulate the sound of the Music program satisfactorily with the simple EQ of the ON app. I can “get Norah back” with just a little tweaking in the app.

Not so with the MyMusic program. The sound of that program is quite resistant to shaping using the available EQ sliders.

Just to be clear, that’s not the same as saying that I remember the sounds I heard when my hearing was normal, though.

I have just finished rereading my initial posts about the MyMusic program. Wow! Has my opinion ever changed after a few days of playing with it!

I’m very interested now in getting a clearer understanding of

  1. What factors drove my initial impression, and
  2. What happened with my ears, brain, and psyche to cause the 180° shift in my opinion?

Am I ever glad that I reserved for myself the right to change my mind!

[Addendum: I’m not crazy! I have just fired up a few cuts by Doyle Dykes, who is a Taylor signature artist. I listened to the cuts, and then played my own Taylor for comparison. It’s going to take me a while to find the words, but I think part of what’s going on here is an habituation effect. MyMusic hits me with an initial “wall of sound” compared to the previous program. It sounds rich, round, full, and bassy at first. It’s an unexpected shock to hear so much musical sound through HAs.

It’s only after the initial shine has worn off that I begin to hear the distortion, funny dynamics, and “wooliness”. More to follow!]

1 Like

The standard I/O charts don’t apply to the way Oticon does things though.

They’ve used a system of ‘Floating Point linearity’ for years to basically draw a 45-degree (1:1) line over a short-term averaging time window. The idea behind this was to allow the aid to show the benefits of loudness growth (especially in speech) rather than compressing down the dynamic range of the output.

There’s a difference of opinion within fitting algorithms - some say that for maximum intelligibility, all speech should be amplified to equal loudness levels. Others say you should retain the natural cadence through changes in intensity. Oticon’s fitting follows the latter argument.

Music has more dramatic changes in intensity than speech, and ‘should’ sound less molested this way than it does with the manufacturers going for equal-loudness solutions.

3 Likes

Well, if it gives your engineering soul any relief, absolutely NO compression is no good in instrumental music, either. There are always little hot spots to smooth out, or some decay curves that are too steep, or some artifacts from miking up a guitar or amp that need the voodoo of compression. (Good word for it!)

Pre- or post-EQ is the big decision - not no compression at all. For sure, I think that compression is different in the two different contexts, but I agree that there are a lot of valuable lessons to be learned in sorting it all through. Thanks for taking the time to explain the engineering side of it! (Insert engineer joke here)

Yes, your assumption is correct. The OPN S Music program & original More Music program are afaik the same.

Absolutely true, that!

Actually, they don’t want us “USERS” getting into fitting at all.
But the CRs should be accessible for qualified providers. I don’t go back far enough with Genie to know, but earlier versions, including the one used for the original OPN, had access to the CRs, as does Phonak Target, and I’m sure most other mfgs.

Actually, @colorrama88 has a thread on Widex for musicians, in which he inserts salient points from two of the leading musically trained audiologists:

I think that says it all, and here is another great read from our Phonak friends, if you have an opportunity:

Completely on point.
I raved about it on first listen as well, after streaming some hi-res files, but soon came back to earth after listening over speakers and to live piano.

Spudmeister, you have captured the essence of the MyMusic mystery.

1 Like

Yeah, I should have said HCPs here and not “users” per se. You’re correct that they don’t want to promote DIY.

I also notice when I play around with adjusting the gains that the CR values may jump around and change automatically if I change the gains too much. I’m guessing that they want to keep the knee points on their compression chart for each frequency band more continuous and to minimize any disjoints. Take the gains and CR values in the 1 kHz example I used in one of my previous posts for the OPN S1 Music program. As can be seen in the CR chart I recreated (shown again below for reference), there’s already a 10 dB disjoint at the 45 dB knee point, going from Soft to Moderate.

I would say that if you have access to manually force-change both the CRs and the gains in the Gain Controls section, you could end up with very wide disjoints at the knee points, which may make the amplification sound less smooth.
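
To show what I mean by a disjoint, here’s a toy Python calculation. The gains and knee point below are hypothetical, not the actual Genie 2 values.

```python
# Toy illustration of a "disjoint" at a knee point (hypothetical numbers).
soft_gain, moderate_gain = 30.0, 20.0  # gains (dB) from the Soft and Moderate rows
knee = 45.0                            # input level (dB) where Soft hands off to Moderate

out_top_of_soft = knee + soft_gain              # output right at the top of the Soft range
out_bottom_of_moderate = knee + moderate_gain   # output right at the bottom of the Moderate range

print(out_top_of_soft - out_bottom_of_moderate, "dB jump at the knee point")
# A CR > 1 within a range tapers the gain toward the knee, which is one way
# the fitting software can keep the input/output curve continuous.
```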

Interestingly, if you look at my example in my previous post, the More 1 MyMusic program adheres to this rule very well, while the OPN S1 Music program has much higher CR values, as high as 3.7, between the Moderate and Loud knee points.

Hm, I guess even the experts don’t see eye to eye with each other either.

  1. Check.

  2. Yes, this makes sense, especially because one of the common strategies used for feedback management is a slight 10 Hz shift, which can result in a fluttering effect, especially on pure tones (see the sketch after this list). But sometimes feedback management is a necessary evil, because you don’t want feedback spoiling things when you’re trying to enjoy music.

  3. Yes for disabling frequency shifting used by feedback management if you can help it. In terms of frequency transposition for frequency lowering, ideally yes, you want to disable it to remove any musical aberration. But in my personal case, I keep Speech Rescue enabled even in my music program so I can enjoy some of the highs that I already can’t hear. Otherwise the music sounds more dull to me. So it’s a trade-off between having pure unadulterated music vs being able to hear something in the highs that’s missing.
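
Here’s a quick Python sketch of why that small shift can flutter on sustained pure tones. The 440/450 Hz pair is just an example of a held tone plus a hypothetical 10 Hz-shifted copy of it mixing together; it’s not meant to model any particular HA’s feedback manager.

```python
# Toy example: a pure tone plus a copy shifted up by 10 Hz beats at 10 Hz,
# which is heard as a flutter/tremolo on sustained notes.
import numpy as np

fs = 16000
t = np.arange(fs) / fs                   # one second of samples
direct = np.sin(2 * np.pi * 440.0 * t)   # the original sustained tone
shifted = np.sin(2 * np.pi * 450.0 * t)  # hypothetical 10 Hz-shifted copy from the HA
mix = direct + shifted

# By the sum-to-product identity, mix = 2 * cos(2*pi*5*t) * sin(2*pi*445*t):
# a 445 Hz tone whose loudness swells and fades 10 times per second,
# i.e. the flutter you can hear on held notes.
```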

1 Like

I think the only people who’ve ever heard this are those who have had the opportunity of listening to music in an anechoic chamber.

Room reflections, with their resultant standing waves, phase-cancellation effects, and temporal anomalies, inevitably make almost all music listening “impure”.

I also think drawing inferences from the theoretical analysis of a single tone is pointless. My experience is that all the tones in adjacent bands interact with one another, and compression sounds different applied to a R-3-5 chord than to a R-b3-b5-b7 structure.

I probably should have said “unprocessed” sound as opposed to “unadulterated” sound. Guess I was being a little dramatic there to use that word.

Just to clarify, the compression chart we’ve been discussing is not applicable only to pure tones. It applies to any kind of sound that happens to fall in that frequency band.

I guess that’s something else I don’t understand. How broad is that frequency band, for instance?

Will one frequency band encompass a chord like Am7b5? (I’d have to look up the composite frequencies after you tell me the practical bandwidth.)

Most HAs go from as low as 125 Hz up to around 8 kHz. That is usually the complete frequency range the HA operates on.

You can slice and dice this spectrum into either 8, 16, or 24 sub-bands (usually called channels or handles) so that you can adjust the gain and compression ratio for each channel independently of the others. For example, maybe the first band is from 0 to 125 Hz (denoted by 125), the next band is from 125 to 250 Hz (denoted by 250), and so on. Or each band can be a width centered around 125 Hz, 250 Hz, etc.

Below is an example of the 16 channels on the OPN S. With the More, you can choose to use 24 channels instead of 16 if you want to be able to fine-tune with more granularity. The bands are not all equal in width. As you can see below, the channels start out narrower at the low end and then widen toward the higher end. The HA mfg decides how to set the widths for these channels up front, so they’re fixed.

A frequency band doesn’t follow any kind of musical rule to encompass a chord or anything like that. The bands are just sectioned off in an engineering kind of way. How they’re sectioned off is up to the HA mfg and depends on the number of handles/channels you choose, like 8, 16, or 24 for the More. Fewer handles/channels make adjustments simpler, but you lose the granularity you may want to have.
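
To tie this back to your Am7b5 question, here’s a rough Python sketch. The band edges below are hypothetical (the real edges aren’t published in Genie 2), but the point is that the notes of a chord simply land in whichever engineering bands happen to contain their frequencies.

```python
# Map the fundamentals of a chord onto a set of hypothetical HA frequency bands.
import bisect

# Hypothetical upper edges (Hz) of 16 contiguous bands from 0 Hz up to 10 kHz.
band_edges = [125, 250, 375, 500, 750, 1000, 1500, 2000,
              2500, 3000, 3500, 4500, 5500, 7000, 9000, 10000]

def band_index(freq_hz):
    """Return which band (0-15) a frequency falls into."""
    return bisect.bisect_left(band_edges, freq_hz)

# Fundamentals of an Am7b5 chord voiced with a low A and the rest around middle C:
chord = {"A2": 110.0, "C4": 261.6, "Eb4": 311.1, "G4": 392.0}
for note, f in chord.items():
    print(f"{note}: {f} Hz -> band {band_index(f)}")
# The four notes land in three different bands here, so each one gets the gain
# and CR of its own band -- the bands follow engineering boundaries, not musical ones.
```

(And that’s before even counting the overtones of each note, which spread across many more bands.)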

Sorry, I’m thick - to what, exactly, does “that frequency band” refer?

When I posted those quotes from Chassub and Bauman, please remember two very important points about my search for HAs: I’m playing a 6’ grand piano directly facing the interior, with strings vibrating and soundboard resonating, ranging in frequency from 27 to 1400… Music stand removed and all sound approaching my ears unrestricted.
These articles were posted on our website for subscribers who play acoustic pianos and need to wear HAs.
This is radically different from listening to recorded music.
My Starkey S Series HAs of 11 years ago reproduce live acoustic piano sound perfectly - my former audiologist exclaimed of these particular HAs, “oh, they’re so linear.” That’s the sound I want any current model of HAs to reproduce.

No problem. All of this is engineering “speak” anyway, so there’s a lot of technical jargon thrown around.

If you look at the 16 “slots” ranging from 125 Hz to 8 kHz in the Gain Controls section that I showed in my previous post as an example, they represent 16 frequency sections, or “slots”. Each slot may span the same width as the next slot, or may be wider or narrower. I’m including it below again for easier reference.

For example, if you look at the 3k, 4k, and 5k slots, they’re spaced apart equally. So Oticon may have sectioned off these slots such that the width of each slot is centered around the frequency denoted. For example, the 3k slot may range from 2.5k to 3.5k Hz (a 1 kHz width), the 4k slot from 3.5k to 4.5k (another 1 kHz width), and the 5k slot from 4.5k to 5.5k (another 1 kHz width). So those three frequency “bands” each have a 1 kHz width, and they center around the 3k, 4k, and 5k Hz marks.

Now Oticon goes from 6k to 8k, so the 7k is missing. But the bands need to be continuous with no gap in between. So the 6k slot may go from 5.5k to 7k (a 1.5 kHz width slot), and the 8k slot may go from 7k to 9k (a 2 kHz width slot).

At least from 2k to 8k, the slots are spaced 1 kHz apart, except between 6k and 8k, where they’re 2 kHz apart. When you go to the lower frequencies, the slots are much narrower. The 125 slot and the 250 slot are only 125 Hz apart, the 250 and 500 are 250 Hz apart, etc. It’s not clear exactly how Oticon sectioned off these slots (or bands). But however they do it, the end of the 125 slot would be the beginning of the 250 slot, the end of the 250 slot would be the beginning of the 500 slot, and so on.
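
One plausible way to guess at a contiguous set of band edges is to split at the midpoints between adjacent handle frequencies. Here’s a little Python sketch of that idea; the handle list and the resulting edges are only a guess, since the actual widths aren’t shown in Genie 2.

```python
# Reconstruct hypothetical contiguous band edges by splitting at the midpoints
# between adjacent handle frequencies. This is a guess, not Oticon's actual layout.

handles = [125, 250, 500, 750, 1000, 1500, 2000,
           3000, 4000, 5000, 6000, 8000]  # Hz, a hypothetical handle layout

edges = [0.0]
for lo, hi in zip(handles, handles[1:]):
    edges.append((lo + hi) / 2)                        # midpoint between adjacent handles
edges.append(handles[-1] + (handles[-1] - edges[-1]))  # mirror the last half-width upward

for handle, lo, hi in zip(handles, edges, edges[1:]):
    print(f"{handle:>5} Hz handle: {lo:.0f} - {hi:.0f} Hz band")
# e.g. the 3000 Hz handle covers 2500-3500 Hz and the 8000 Hz handle covers
# 7000-9000 Hz, so the bands stay contiguous with no gaps in between.
```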

The reason the lower frequency slots (or we can call them “bands” now) are spaced much more tightly together compared to the high frequency slots is that there is a lot more spectral information packed into the lower frequency regions, so you want to control the gains and CRs with finer granularity there to achieve better control. Then as you progress to the higher frequency areas, the spectral information thins out, so the gain and CR controls for those bands can spread out more without sacrificing good control over the high frequency sounds.

Most sounds are made up of a complicated mixture of vibrations at many different frequencies. A sound spectrum displays the different frequencies present in a sound over a fixed duration, and of course a sound unfolds over time as well. A spectrogram is a graphical representation of sound where the Y axis is frequency and the X axis is time. Below is an example of a spectrogram for the spoken words “nineteenth century”.

If you slice up the frequency range on the Y axis into 16 continuous bands like Oticon does, and apply gains in accordance with the gain prescription, based on the gain and compression ratio in each band, then that’s what you end up hearing from your HA. So the Output/Input chart that I showed earlier, displaying the gains and compression ratios between the 3 knee points, is just how gain is applied for 1 of the 16 frequency bands. And for the More, you can increase this granularity to 24 bands instead of just 16 or 8.

[image: spectrogram of the spoken words “nineteenth century”]
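
If it helps to see the idea in code form, below is a toy Python/SciPy sketch: take a signal’s time-frequency representation, apply a different (made-up) gain to each frequency band, and turn it back into a signal. A real HA uses a dedicated filter bank plus level-dependent (compressive) gains rather than this offline trick, so treat it purely as an illustration of “gain per band”.

```python
# Toy "gain per frequency band" illustration using an STFT. Band edges and
# gains are hypothetical, and no compression (level dependence) is modeled.
import numpy as np
from scipy.signal import stft, istft

fs = 16000                                   # sample rate (Hz)
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 300 * t) + 0.1 * np.sin(2 * np.pi * 3000 * t)

f, _, X = stft(x, fs=fs, nperseg=512)        # frequency-by-time representation

band_edges = [0, 250, 500, 1000, 2000, 4000, 8001]   # hypothetical band edges (Hz)
band_gain_db = [5, 8, 12, 18, 22, 25]                # hypothetical gain (dB) per band

gain = np.ones_like(f)
for lo, hi, g_db in zip(band_edges, band_edges[1:], band_gain_db):
    gain[(f >= lo) & (f < hi)] = 10 ** (g_db / 20)   # convert dB to a linear factor

_, y = istft(X * gain[:, None], fs=fs, nperseg=512)  # resynthesized, band-shaped signal
```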

1 Like

Thank you, @Volusiano! This is altogether a very useful post, but what I have quoted was my “missing link”! I now understand the point you were trying to make.

Now, let me cipher on this a bit. I understand the implications of compression from an HA engineer’s point of view versus a musician’s compression. I think you were correct in an earlier post, where you intimated that these are really two different animals.

@colorrama88: Thank you for your recent, helpful posts! The quote above is revealing, and right on the money. Guitarists in general, and acoustic guitarists, in particular, don’t talk in these terms, but - perhaps they should.

[Nota: The objective of this post is to describe, in analogue terms, the importance of linear gain and compression in creating a balanced and pleasant musical experience, as opposed to listening to speech. It’s not intended to be an off-topic, tangential discussion of guitars.]

Please, bear with me as I give a short example related to one of my instruments, which I play with the guitar situated on my lap, soundboard facing 180° away from me. (It’s a C F Martin forward-shifted, 1/4" scalloped-braced herringbone D-35.)

For non-guitar-players, this is a guitar with a big soundbox. The internal bracing of this box is thinner, and more delicate (therefore more responsive) than conventionally-braced models. The positioning (shifting forward) of these braces inside the instrument creates a much larger section of unsupported top table between where the strings are connected (the bridge) and the end pin, where the strap connects.

This can be good and bad: if one is unlucky, the top of an instrument built on this pattern can vibrate in such a way that it interferes with itself. In this case, some notes sound louder than others, while others fade away far more rapidly than their neighbours, which continue to ring, after they are plucked. (If you’re lucky, and have a good specimen, the opposite is true, and the guitar will play with equal loudness and resonance across its entire useable range.)

My HD-35 is one of those exceptional instruments that sounds totally linear, from its lowest note to the highest. No one note predominates over any other, and there’s no interference or phase cancellation occurring either on the top table or anywhere else in the internal latticework that supports the guitar. The result is that each note and chord seems to “bloom” when you touch the instrument, which is entirely acoustic. It responds instantly when touched softly, but the sound takes a long time to die away.

Here’s where compression comes into play, and where the synergy or interdependence of frequencies that comprise the tone are also important - if one strikes the strings of this instrument really forcefully, it will, of course, sound louder, but - only up to a certain point! It appears to be self-limiting. You can’t get it to produce a super-loud, strident peak, because there’s actually an analogue compression phenomenon happening. The vibrating top of the guitar is literally compressing the air inside the body and creating a back pressure that effectively dampens the notes whose attack volume would otherwise be too great.

And that same volume of air that’s trapped inside the instrument has a bit of a natural tendency to resonate, like when you slap the side of a plastic 45 gallon drum. This natural resonance actually makes very softly played notes sound louder than one would expect them to be, given how little energy is being used to strike the strings. This is the other side of compression.

Electronic compression is, I suppose, trying to accomplish pretty much the same thing - attenuate the loudest notes, so they don’t overwhelm the rest, and permit the softest notes to speak loudly enough that they can be heard. It’s much more complicated than this, of course, but you can grasp the gist of it.

The key thing here is that everyone, without exception, who sits in front of my Martin and listens to it sing appreciates the quality of its voice. That’s because the guitar is totally linear, and also possesses just the right amount of natural, analogue compression. And that’s why I understand and appreciate @colorrama88’s quote about his Starkey instruments being linear.

So, while the engineers may be vastly improving our comprehension of speech by applying different amounts of compression and gain to different frequency bands, as articulated by @Volusiano, when it comes to the sound of music, our brains seem to be hardwired to prefer - (here it comes!) - linearity (which is, BTW, one of the principal failings I can hear in the MyMusic program).

All we’re asking for is a native music program that works, in addition to the other amazing programs that allow us to understand soft-spoken children or pick out friends’ conversation in a crowded restaurant! So perhaps the HA industry should just acknowledge that musicians and “speechies” are potatoes and pears, and give us a device that will allow us to choose which mode is of greater importance to us, at a given instant in time, rather than pretending like one shoe fits all.

[This is only my opinion, YMMV.]

1 Like

@Volusiano: What do the burgundy-coloured numbers in the tables represent, please?

They are the gain values in dB to be added for each frequency band. And the gain values are different depending on whether the input is Soft (0-45 dB input volume), Moderate (46-65 dB input volume), or Loud (66-80 dB input volume). So you can call the Soft, Moderate and Loud rows the Input Loudness Category if you want.

The gain is added linearly in each category, but the compression ratio is factored into the gain addition. So if the compression ratio is 1, then the whole gain is added to each input volume across that range. But if the compression ratio is > 1, then the gain is tapered off and only a fraction of it is added as the input volume rises across that range.
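
If it helps, here’s a little Python sketch of that idea with made-up numbers (they are not the actual values from a Genie 2 table), showing how the effective gain shrinks as the input gets louder when CR > 1.

```python
# Toy per-band input/output curve: within each loudness range the slope is 1/CR.
# All knee points, gains, and CRs below are hypothetical.

segments = [
    # (start of input range in dB, gain in dB applied at that start, CR)
    (0,  30.0, 1.0),   # Soft:     0-45 dB input
    (45, 30.0, 1.5),   # Moderate: 46-65 dB input
    (65, 23.3, 2.5),   # Loud:     66-80 dB input
]

def output_db(input_db):
    """Piecewise-linear I/O curve for one band."""
    for start, gain_at_start, cr in reversed(segments):
        if input_db >= start:
            return start + gain_at_start + (input_db - start) / cr

for level in (30, 45, 55, 65, 80):
    out = output_db(level)
    print(f"{level} dB in -> {out:.1f} dB out (effective gain {out - level:.1f} dB)")
# With CR = 1, the full 30 dB of gain is applied everywhere in the Soft range.
# With CR = 2.5 in the Loud range, the effective gain shrinks from about 23 dB
# at 65 dB input to about 14 dB at 80 dB input.
```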

2 Likes

Thanks. As I explained last night, I have to cipher on what you’re saying and try to turn the information into questions for myself, such as: what would my HD-35 sound like if the loudness and sustain of certain notes/frequencies were manipulated so that the sound wasn’t linear?

I’ll give you an example. When I’m playing a melody line simultaneously with a harmony in thirds, and a bass line moving in the opposite direction from the melody (e.g. the melody is getting higher in pitch while the bass line is moving down), I have to vary my touch (picking attack) so that the harmony and bass lines are always a bit softer in volume than the lead line (melody).

The ability to make such subtle changes in my right hand finger movements is part of my art, and is what gives listeners the impression of more than 1 guitar playing. In effect, I’m making my linear guitar sound in a non-linear way.

I’m beginning to suspect that the reason musicians who impart their own dynamics to music by means of their technique want HAs to behave in a linear way is that, if the hearing instruments are also applying their own non-linearity to the music, we get “double-dipping” that makes the softest notes sound too soft and the louder ones in a passage too loud. This would account for perceptions of lack of detail and “edginess”, respectively, because a layer of digital modulation has been applied over the analogue modulation. The summation of analogue and digital modulation results in too much attenuation and/or amplification of parts of a musical passage. Or at least, that’s the direction in which my ciphering is taking me ATM.
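
Putting some back-of-the-envelope numbers on that hunch (all made up, and only applicable where both the soft and loud notes fall above the same compression knee in the same band):

```python
# Rough arithmetic for the "double-dipping" idea: my analogue dynamics and the
# HA's level-dependent gain get stacked, so the contrast heard is not the
# contrast I intended. Numbers are purely illustrative.
player_contrast_db = 20.0   # dB between my softest and loudest notes in a passage
ha_cr = 2.0                 # a hypothetical HA compression ratio above its knee

heard_contrast_db = player_contrast_db / ha_cr
print(f"Intended contrast: {player_contrast_db} dB, heard through the HA: {heard_contrast_db} dB")
# Whatever shape I give a passage by touch is re-shaped again by the aid,
# so the result no longer matches what I intended.
```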

[@Volusiano: do you think it’s reasonable to posit such an additive effect?]