Comparing Oticon’s More 1 vs Intent 1

Usually the sound directionality is created with the binaural processing feature called Spatial Sound, which has been in Oticon aids since day 1 of the OPN, and possibly in Oticon models even older than the OPN. It’s available in all 4 Intent tiers, 1 through 4. I don’t see any mention of new improvements to the Spatial Sound feature in the Intent, so Spatial Sound couldn’t have been the source of the noticeable difference in sound direction you heard with the Intent.

I would venture to guess that it’s the new DNN 2.0 in the Intent that is making the sound directionality more pronounced than before. After all, when recreating the sound scene, the DNN 2.0 would probably also have to recreate the original placement of the sounds to keep the integrity of the sound scene intact. So an improved DNN 2.0 might do a better job of placing sounds back where they belong than the DNN 1.0 did.

Just a guess, of course. Not a knowledgeable fact from anywhere I’ve read.

1 Like

Great write up!
I’m going to get fitted for the new earmolds/receivers soon, so it’s good to get these insights.
I think Chuck (@cvkemp) mentioned that the Smart Chargers will be available very soon, and I saw another post mentioning a f/w update, in case you haven’t seen it already.

2 Likes

@Volusiano, would you help me figure out something? You’ve read quite a bit about the technical details of Oticon hearing aids and I think you might have an answer or a well-informed guess. It might sound a bit heretical, but I feel like music is better with the mics set to “neural automatic” on the Intents (at least in quiet places, I have not tested that in noisy places yet). I have two crafted music programs (one for my guitars and one for my over-the-ear headphones) that I always used with the mics in “fixed omni.” With the Intents, music sounds a bit more colorful and lively with “neural automatic.” Do you know of anything that could explain why I hear this difference? Or perhaps it’s just my ears playing tricks on me?

Also, thanks for the advice on how to get used to the “frequency lowering” feature in the other thread. I’ve just started another attempt to get used to it with the intensity set to the minimum, and so far, so good.

I always think that the Directionality Settings in the Oticon Genie 2 software are kind of a parody of the Directionality Settings in other hearing aid brands’ software. That’s because most other brands use the traditional beamforming approach: they use the 4 mics (2 on each hearing aid, for RIC-type aids at least) to shape the cardioid field to zoom in exactly on the area of interest and pick up the sounds there. On the other hand, because of the open paradigm, Oticon uses its 4 mics to do beamforming in a different way. Rather than zooming in on certain areas to pick up the sounds there and ignore the rest like the other aids do (which would be counterproductive to the open paradigm), Oticon builds a noise model and uses it with a different kind of beamforming (called MVDR) to attenuate well-placed noise sources. This “balances” the sound scene so that all the other sounds (speech included) that are not well-placed noise sources get an increased presence, and then a secondary noise remover like the DNN goes one step further to enhance the clarity of the voices.

For example, the screenshot below of the Phonak speech sensor shows how their speech sensor detects where the voices come from, then creates a forward-facing cardioid pattern to pick up sounds, which can be very narrow (the left scene), wider (the right scene), or 360 degrees wide / omnidirectional (the middle scene).

In contrast, Oticon uses the 2 mics on each aid to create 2 different views. The first view is an omnidirectional beam that picks up all sounds (to support the open paradigm); the second is a back-facing cardioid pattern that picks up the side and rear sounds, which are normally considered “noise,” and a noise model is built from this view. Figure 3 in the screenshot below shows this. Then, to balance out the sound field, Oticon uses a beamforming technique called the Minimum Variance Distortionless Response (MVDR) algorithm to subtract the well-placed dominant noise sources (the cars in this example) that appear in both the back-cardioid noise-model view and the omni view, producing a “balanced” view, which is the gray pattern in Figure 4 of the screenshot below. Note that noise sources from the back get attenuated more aggressively than noise sources from the side, by design. This balanced view is then put through a secondary noise-removal module that scrubs the same noise model from the front speech in particular. If voices are detected on the sides or rear, this front-speech noise scrubbing is cancelled to preserve the integrity of the non-front speech. But this is only in the OPN/S. From the More onward, the DNN allows the noise suppression on speech to be done in a much cleverer way, without the OPN/S compromise mentioned above when rear or side voices are detected.
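For anyone who wants to see what MVDR actually does mathematically: it chooses mic weights that keep the look direction at exactly unit gain while minimizing the output power of whatever matches the noise model. Below is a minimal textbook sketch in Python/numpy. To be clear, this is NOT Oticon’s actual implementation (which isn’t public); the 4-mic toy noise covariance and the all-ones steering vector are stand-ins purely for illustration.

```python
import numpy as np

def mvdr_weights(noise_cov, steering):
    """Textbook MVDR: minimize output noise power subject to a
    distortionless (unit-gain) response in the look direction."""
    r_inv_d = np.linalg.solve(noise_cov, steering)    # R^-1 d
    return r_inv_d / (steering.conj().T @ r_inv_d)    # w = R^-1 d / (d^H R^-1 d)

# Toy narrowband example with 4 mics (2 per aid, as described above).
rng = np.random.default_rng(0)
n_mics, n_snapshots = 4, 1000

# Stand-in for the noise model built from the back-facing cardioid view.
noise = (rng.standard_normal((n_mics, n_snapshots))
         + 1j * rng.standard_normal((n_mics, n_snapshots)))
R = noise @ noise.conj().T / n_snapshots              # estimated noise covariance

d = np.ones((n_mics, 1), dtype=complex)               # toy frontal steering vector
w = mvdr_weights(R, d)

print(np.abs(w.conj().T @ d))                         # ~1.0: look direction untouched
```

The distortionless constraint is why this fits the “rebalance rather than block” description above: nothing in the look direction is amplified or cut, and the attenuation only falls on what the noise model describes.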

So perhaps by now you can see why I think the Oticon way of doing beamforming (the MVDR way) is a parody of (quite different from) the other brands’ traditional beamforming. Oticon doesn’t really zoom in to pick up the sounds in the desired focus area and block out the rest. Instead, Oticon zooms in on only the well-placed dominant noises and attenuates them first to rebalance the sound scene, which lets it keep most other sounds that aren’t dominant noise sources, in accordance with the open paradigm, and then “polishes up” the speech further in the DNN to wrap things up.

So while Oticon gives you a Fixed Omni and a Fully Directional option, the question is: does Oticon REALLY revert to a REAL omnidirectional pattern or a REAL fully directional pattern like the other brands do, or does it try to “emulate” them somehow inside its DNN, given that its sound-processing path has already been designed to flow in a different, set way?

If they really “reroute” the data through a different path for the Fixed Omni or Fully Directional selections, then why do many people (me included) not notice much difference in terms of non-speech sound blocking when they use the SIN program or the MoreSound Booster? Perhaps their Fully Directional execution just “fakes” it by giving more SNR contrast than normal to front speech, creating the illusion of the traditional Fully Directional effect? As for Fixed Omni, if they’re just emulating it instead of really opening up the cardioid field on the mics to 360 degrees, then maybe they’re just keeping the sound scene “as is” and not applying any SNR contrast to anything, to give the illusion of a wide-open sound scene?

All I know is that Oticon offers a Neural Automatic option and suggests that people use it in the main program, and that if Fully Directional or Fixed Omni is chosen, it go in a secondary program. So the implication is pretty clear: 1) the directionality is manipulated in the DNN (hence the choice of the word Neural), and 2) you should let the hearing aids choose how to set it, because they know best how to get the best-sounding experience out of the DNN, as suggested in their online help captured in the screenshot below.

The bottom-line answer to your question is that, when it comes to Oticon aids, I wouldn’t try too hard to make sense of why you’re not hearing what you expect from the directionality setting you choose, based on your understanding of how it should work. It’s probably best to experiment with both, and if your chosen directionality setting doesn’t work as well as you expect, or seems to work the same as Neural Automatic, or you still get a better result with Neural Automatic regardless, then just leave it in Neural Automatic and let the aids choose the directionality for you, simply because the aids don’t work in a conventional way when it comes to directionality in the first place.

3 Likes

In addition to what @Volusiano so eloquently put forth, I’d like to add something:
When you invoke Neural Automatic directionality, it activates the whole MSI scenario, which just may be coloring the sound with all the added environmental classifications, the Sound Enhancer setting, and the residual processing it induces!
Maybe it sounds richer and more colorful, but is it detracting from the true musicality?
Whatcha tink, guys?

1 Like

Very succinct. Thank you.

I think that @flashb1024 has single-handedly solved the puzzle here!!!

Even with NNS disabled in a music program, setting the directionality to Neural Automatic still allows the Sound Enhancer to be activated. But if directionality is set to Fixed Omni, then the Sound Enhancer is disabled. So I think @flashb1024 hits the nail right on the head here: the Sound Enhancer is adding volume to the music in the speech range, making it perceived as sounding better (albeit no longer strictly authentic).

To better understand the purpose of the Sound Enhancer, the screenshot below provides a good explanation of why it’s there. It’s primarily there to balance the user’s preference against the amount of NNS that the hearing aids decide to apply in a particular environment.

For example, if a very noisy environment is encountered and the aids have to apply a high amount of NNS to suppress the surrounding sounds so that speech can be understood better, this might suppress the surrounding sounds too much in favor of speech, and the user might not like that much suppression. In this case, the user can choose Detail in the Sound Enhancer, which boosts the volume in the 1 to 4 kHz range (primarily the speech range) so that the user can hear more detail in the non-speech sounds, while the relative levels between the speech and non-speech sounds are still maintained so that speech can still be heard over the non-speech sounds. Of course, the trade-off is that the volume of ALL sounds between 1 and 4 kHz is raised a little, so the user has to put up with a higher volume in that range in order to both understand speech better and not lose the detail of the non-speech sounds.

But users who would rather not deal with so much overall volume and don’t care to hear the details of the non-speech sounds would choose Comfort for the Sound Enhancer, in which case less detail of the non-speech sounds can be heard due to the lowered volume in the speech range.

Then of course there’s the middle ground, which is the Balanced value for the Sound Enhancer. Nevertheless, as long as Neural Automatic is selected, even with NNS disabled, the Sound Enhancer kicks in, and any of its 3 values will boost the volume in the speech range by a smaller, middling, or larger amount.
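Just to make the idea concrete, here’s a toy model of how a setting like this could translate into a band-limited gain bump. To be clear, the band edges and dB values below are made up for illustration only; Oticon doesn’t publish the actual Sound Enhancer curves.

```python
# Toy illustration of the Sound Enhancer idea described above: a gain bump
# confined to the speech band, larger for Detail and smaller for Comfort.
# Band edges and dB values are invented; Oticon doesn't publish the real curves.
SPEECH_BAND_HZ = (1000.0, 4000.0)
BUMP_DB = {"Comfort": 1.0, "Balanced": 2.0, "Detail": 3.0}

def enhancer_gain_db(freq_hz: float, setting: str) -> float:
    """Extra gain (on top of the prescription) applied at this frequency."""
    lo, hi = SPEECH_BAND_HZ
    return BUMP_DB[setting] if lo <= freq_hz <= hi else 0.0

for f in (500, 2000, 6000):
    print(f, "Hz:", {s: enhancer_gain_db(f, s) for s in BUMP_DB})
# Only the 2000 Hz point gets a bump; sounds outside the speech band are untouched.
```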

I would think that for music listening, Fixed Omni should be chosen, so the Sound Enhancer option isn’t enabled at all and you get a flat, unadulterated sound of the music as is. But hey, if through this inadvertent selection you find that the music sounds a little richer with a little more volume boost in the speech range, then by all means there’s nothing wrong with keeping it that way, if it makes the music more enjoyable for your taste.

[screenshot: Sound Enhancer explanation]

1 Like

So many insights, thank you @Volusiano and @flashb1024. There’s lots to unpack and assimilate. I confess I’ll need to read this again and go through Oticon whitepapers to make sense of some of what you are saying.

That question is always in the back of my mind, bugging me lol. I am not an audiophile, but I do need to have a proper reference for music. Otherwise, I risk having my classical guitar sounding great while the acoustic sounds miserable (and vice versa). That actually happened in the past, before I was finally able to fine-tune my “guitar program”. I mean a proper reference in the sense of having the right gain across the range, especially between 125 and 750 Hz. Unfortunately, I think I’d be a bit naive to expect accurate musical representation with hearing loss and digital hearing aids.

I A/B tested the mics’ directionality and noticed a more colorful music experience with “neural automatic” while playing my classical guitar in a very quiet room. Arguably, most of the sound my classical guitar produces intensity-wise is under 1 kHz, and there wasn’t any competing noise to be suppressed. As an empirical guy haha, I’ll compare the three options for “sound enhancer” (detail, balance, comfort) and boost the 1 kHz to 4 kHz frequencies with the mics set to “fixed omni” to see how it goes. I’ll report back :slight_smile:

Edit: I think you guys are absolutely right. I A/B tested again and what I hear seems to be some extra dBs in the 1-4 kHz frequencies. I was able to get my guitar to sound very similar to the “neural automatic/detail” configuration by setting the mics to “fixed omni” and boosting these frequencies by 3 dB. If it is only a matter of volume, I will stick with “fixed omni”.

1 Like

Thanks for this update, @e1405 . It was mainly a hunch, because the whitepaper clearly says that the Sound Enhancer provides dynamic sound detail when noise suppression is active. So I wasn’t very sure whether there would be any speech-range volume boost from the Sound Enhancer in your case, even though selecting Neural Automatic seems to have activated the Sound Enhancer in the Genie 2 menu, because after all, NNS had been disabled, so noise suppression wasn’t supposed to be active for the Sound Enhancer to kick in.

I think it is a bug in Genie 2 that it ties the Sound Enhancer’s activation to the Neural Automatic directionality setting. It should really be activated ONLY WHEN NNS is enabled, if the intended use described in the whitepaper is observed.

Another thing that’s unclear and not mentioned in the whitepaper is whether the quantitative amount of volume boost in the Sound Enhancer is tied to the level of NNS applied in a particular situation or not. Meaning: if 0 dB of NNS is applied in a very simple environment where no speech is detected by the Voice Detector, are these gain bumps in the speech range (all 3 of them) “flattened out”, and as speech is detected and the environment gets more difficult with more sounds competing against the speech, would the bumps be boosted up in gain relative to the applied NNS value or not?

They say that the Sound Enhancer provides DYNAMIC sound detail when noise suppression is active, mainly in difficult environments, but it’s not clear whether DYNAMIC here implies changing the gains based on the applied NNS value, or whether DYNAMIC only refers to the “bump” shape in the speech range, with the absolute gain levels of the 3 bump curves remaining FIXED regardless of the NNS level applied.

It seems like the empirical evidence from your experiment implies that the gains in the bump curves are fixed and not dependent on the applied NNS level: you got the same experience as with the Detail value in the Sound Enhancer simply by manually boosting the speech range by 3 dB in Fine Tuning for your guitar with the Sound Enhancer disabled, even though in your scenario there should have been 0 dB of NNS applied, since there was no speech for any NNS value to act on.

I played my classical guitar in a quiet room again, just like I did yesterday, and I did notice differences between the three levels of the “sound enhancer”. The “comfort” setting of the sound enhancer sounded the closest to the “fixed omni” setting, which makes sense given all you said. Maybe it’s just me and my weird brain, but if there’s even a 1 dB change in the 1-4 kHz range, I will hear it. I’ve gotten to the point where I can tell whether my hearing aids are well-balanced just by listening to the tone and intensity of the shutdown jingle and program change beeps. That surely is some kind of torture, no? :joy:.

2 Likes

That’s brain hearing acuity at its best! :wink:

3 Likes

I would have to agree that Oticon has every INTENTion of utilizing those tones to make us aware of our HA’s performance.
I often find myself changing programs just to confirm my volume levels are balanced, so you are not weird, or unique.
This has been a very rewarding thread @e1405 and Mr. V @Volusiano (as our old friend would say: Exactly…Mr. V) :sunglasses:

I’m scheduled to be fitted with the Intents on July 15th, so I’m going to be vacuuming up all the details I can before then!

3 Likes

Hi @Volusiano, I wanted to update you on my experience. After a few days of using “speech rescue” with the lowest intensity, I’m starting to feel more at ease with it and am noticing increased clarity in speech. I’ve just increased the intensity slightly in my right ear, as that ear would benefit more from it. It’s promising so far…

I also switched to the DSL v5 fitting formula, and it seems to be a very good match for my hearing loss and the Intents. I believe that both the “speech rescue” and DSL v5 have further improved my speech comprehension in noisy environments. I transferred these settings from the Intents to the Mores, and now I hear better with the Mores as well.

The contrast between fitting formulas is quite interesting. VAC+ excels in quiet places and with soft sounds, creating an illusion of a good setup. It’s overall louder, but this loudness and the compression scheme it uses seem to interfere with speech comprehension in noisy places (just my speculation). DSL v5, on the other hand, sounds quieter and a bit dull in calm places compared to VAC+, but it performs considerably better in noise. Maybe DSL’s compression scheme and target gains just work better for my type of hearing loss.

I have significant tinnitus in both ears, so I’ll forgive myself for preferring VAC+ or NAL-NL2 all these years. However, perhaps now is the time to make the effort and let my brain adjust to DSL v5. This means not using soft gains to mask the noises in my own head. Moreover, as many have mentioned, DSL v5 can be a bit of a challenge, especially when it feels like your hearing aids are “scratching” your brain :wink:.

Regarding the comparison between More and Intent, my initial reaction has held true. In a nutshell, the Intents do everything the Mores do but with more definition, separation, and resolution. Some might feel the improvement is only incremental, but I think that, all in all, it is worth the upgrade. As for the OPNs, the gap between them and the Intents (and in some situations even the Mores) is significant in all the tests I’ve done (with the exception of my stripped-down music program).

1 Like

As a musician, I’ve noticed two things when it comes to
–trying a new set of strings
–trying a new guitar.
Typically, folks hear what the new strings or guitar CAN do better, or what fills in the gaps of their old strings or guitar. And they rush out to tell the world how great their new stuff is.

Then: after several weeks or months, they begin to notice the lack or the downsides of their new gear. They begin to notice that the old gear performed better than the new in certain aspects.

I know guitarists who are constantly changing string brands in search of the perfect set. And guitarists who rave about a new guitar and sell it six months later. I’ve heard about folks who’ve been married five times and carry on affairs all the while. Guess what? There is no perfect lover.

I wonder whether, if by some genie’s power someone had his or her original hearing restored, they wouldn’t miss the sounds provided by their no-longer-needed hearing aids. And yes, I’d take that bargain, as long as my soul wasn’t involved!

1 Like

So true… I’d pay big money to hear silence again or get rid of my hyperacusis, but that’s not going to happen. Hearing better in difficult situations is achievable though - albeit in small, gradual, teeny-tiny, incremental steps.

Thanks for sharing your update on the Intent experience with us. It sounds like a VAC+-based general P1 program, a DSL-based SIN program with Speech Rescue enabled, and a customized MyMusic program might be a sweet-spot setup for you. Either that, or a customized DSL-based music program instead.

I only have one curious question left: have you tried a program with the highest available max NNS value and compared it to a program with the recommended default max NNS value, to see if there’s any penalty in sacrificing some speech clarity to gain more speech understanding when the highest available max NNS is applied, or is it not worth it, making the default recommended max NNS for your hearing loss preferable?

This might have some implications for whether the tier 2 level of the Intent is good enough for some folks, if the default max NNS happens to be lower than the highest available max NNS value and that default already works for them. Why pay the tier 1 premium if they never get to make use of the highest available max NNS value that tier 1 affords them?

1 Like

Actually, I now have DSL-based P1, lecture, and music programs. In the remaining slot, I have a VAC+ program that I use for an extra boost with soft sounds, streamed TV shows, and podcasts. I’ve enabled speech rescue on my P1 and lecture programs only. I’m going to commit to DSL as my main fitting formula for a while.

The fact that I chose Oticon’s default “lecture” program over the “speech in noise” one is a good indicator of my answer, right? :upside_down_face: I’ve experimented with both ends of neural noise suppression (6 dB and 12 dB) and found that I understand speech better with less NNS. My lecture program is set with Genie’s recommended NNS target of 6 dB, and combined with its boost in the mid frequencies, it provides extra clarity and speech comprehension. However, my P1 program handles noise decently enough that I rarely feel the need to switch programs (P1 is set to 8 dB NNS). I still haven’t compared the lecture and speech-in-noise programs both set at 6 dB NNS, though - I wish I had an extra 2 or 3 slots to streamline these tests :joy:. Anyhow, that’s my experience with mostly conductive loss in my left ear and mixed loss in my right. YMMV.

Thanks Jeffrey.

Wonderful post.

DaveL

(an aside: in grade 2 my music teacher leaned over and said, “Tone Deaf”. I was already in my third school and had moved half-way across the country three times.

I believed her until a year ago. I’m 77 now. A couple of years ago I bought a Uke. I can’t remember chords. I’m still acting tone deaf. I’ll try again.) I’m not tone deaf. But it sure is hard to learn chords.

1 Like

Hi @Volusiano. I’ve been testing the “lecture” and “speech in noise” programs side by side for a few days now, and they both perform well in more complex situations. I’ll keep both for now, as each has its strengths (echoey places: lecture; very noisy places: SIN). However, as I said before, my general program on the Intents usually handles everything quite effectively.

I’ve settled on 8 dB of neural noise suppression (NNS) for both my P1 and SIN programs, while sticking with 6 dB for the lecture program. These settings follow Genie’s recommendations for my hearing loss, except for the SIN program: I noticed that human speech sounds more processed with 10 or 12 dB of NNS, which seems to make the SIN program less effective than my P1 in noisy environments, so I dialed it down from the recommended 10 dB.

I am more comfortable with “speech rescue” now. Although I am not using it with the strength Genie recommends, it’s enough to give me more clarity with human voices. Thank you for the tip!

I plan to keep my hearing aids as they are for about a month to let my brain adjust to the latest setup on the Intents. After many tweaks and tests, I feel like I’m in a good place now. If it weren’t for music (meaning “clear dynamics”), tiers 1 and 2 of the Intents would have been overkill for me. The 4D sensors are nice and seem to help in complex situations, but I’m not sure whether I’d pay the premium solely for them.

In the process of fine-tuning the Intents, I also managed to significantly improve the performance of my Mores (especially in noisy places). The Intents have a feature in the Companion app named “sound equalizer,” which helped me understand what adjustments I needed to hear better. For instance, I discovered that I was a bit of a “bass head” lol, and the EQ helped me determine the right amount of low-frequency energy needed before it started to mask the mid and upper ranges. Long story short, my Mores are still excellent hearing aids with some untapped potential. The gap between them and the Intents is probably not as wide as I first thought, but the Intents are nonetheless a step ahead (more clarity, resolution, separation in the soundscape, better for my right ear, “sound equalizer,” battery). However, the difference between the Intents and the OPNs is another story…

As many have mentioned, a good HCP and proper fitting go a long way. Alternatively, the DIY route might also work well, though it usually takes more time to achieve results comparable to those of a competent professional. In any case, only a proper setup would unleash the full potential of any hearing aid.

1 Like

Thanks for another detailed sharing of your experience with the Intent. Can you elaborate a bit more on what you mean when you say that speech sounds more processed with high NNS values? Do you mean more robotic or something?

So you said that you settled on 8 dB max NNS in your P1 and SIN programs because you think 10 or 12 dB makes speech sound too processed, to the point of making those settings less effective and even detrimental to your speech understanding. So are you saying that in a super noisy environment, where you feel that 8 dB max NNS still isn’t enough to help you understand speech, you’d still rather stick with 8 because you think 10 or 12 is useless for you? I guess what I’m trying to get at is: in that situation, would you still rather understand speech less at 8 than understand it more at 10 or 12, at the expense of speech clarity? The gist of the question is speech understanding vs. speech clarity. It seems like you’re saying that speech clarity is more important to you than speech understanding, meaning that if you lose the speech clarity you expect, you’d rather not understand the speech at all than hear processed speech?

Oticon claims that it will apply only the appropriate level of NNS based on its determination of how noisy the environment is. But what isn’t clear to me is whether this claim is meant in an absolute sense or a relative sense. Here’s what I mean. Say you set your max NNS to 8 dB (although with tier 1 you can go up to 12 dB). With the absolute approach, if Oticon has a table of which NNS value to apply to which noise-level scenario, it would follow this table until it reaches 8 dB of NNS, and any noisier scenario beyond that still gets 8 dB flat out. Had 10 or 12 dB max NNS been selected instead, it would have gone up to those values for the noisier scenarios, up to 12 dB for the noisiest scenario in the table. With the relative approach, on the other hand, it would “scale” the NNS value between 0 noise and the noisiest scenario on its noise scale: the 8 dB would match the highest noise level in the table, and everything in between is “stretched out” to scale between 0 and 8 dB of NNS.
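If it helps, here’s the difference between the two interpretations expressed as a quick sketch. This is pure speculation on my part: the noise_score input and both mapping functions are hypothetical, since Oticon doesn’t document any of this.

```python
# Hypothetical sketch of the two interpretations above. Nothing here is from
# Oticon; noise_score is an imaginary 0..1 "how noisy is it" estimate.

def nns_absolute(noise_score: float, max_nns_db: float, table_max_db: float = 12.0) -> float:
    """Absolute: one fixed noise->NNS table, simply clipped at your max setting."""
    return min(noise_score * table_max_db, max_nns_db)

def nns_relative(noise_score: float, max_nns_db: float) -> float:
    """Relative: the whole 0..max range is stretched over the same noise scale."""
    return noise_score * max_nns_db

for score in (0.25, 0.5, 0.75, 1.0):
    print(score, nns_absolute(score, 8.0), nns_relative(score, 8.0))
# With a max of 8 dB: absolute gives 3, 6, 8, 8 (caps out early),
# while relative gives 2, 4, 6, 8 (scaled all the way up).
```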

The reason this distinction between “absolute” and “relative” NNS application matters is that, if Oticon uses the absolute approach, wouldn’t it be logical to just set the max NNS to the highest value available in your tier and let Oticon decide the appropriate NNS level for each scenario? After all, if Oticon decides that 10 dB of NNS is the appropriate value for a very noisy scenario, why would you want to limit it to 8 when you could have 10, unless you’re saying you’d rather not understand the speech than sacrifice clarity?

Of course, if Oticon uses the relative approach, then it’s clearly a different ball game: you can use the max NNS value to define how much speech clarity you want to keep as the NNS escalates to “deal” with noisier and noisier scenarios.

Also, the fact that Genie 2 gives you the option to set the max NNS value seems to strongly suggest that it’s the relative approach here. But of course, we’ll never know for sure unless we can get more details from Oticon, which we can’t.

I agree that the 4D sensors seem to be nice-to-have but probably not as critical as Oticon has hyped them up to be. But they sure know how to get you by making Clear Dynamics available only on tiers 1 and 2, or else more people might have opted for tiers 3 and 4 than Oticon would have liked to see.