Trialing Oticon Intent vs OPN

Update: I visited my audiologist and told her that speech clarity is better than with the OPN and that the MyMusic program is terrible. She said she had heard other people say the same. We did some tweaks to the General program and the Speech in Noise program but focused on music. She created a new Custom program using VAC+, copied all the old numbers from the old OPN Music program into the new Custom Music program, and turned most functions off. She then created a separate Custom program that was relatively linear, similar to the old amplifiers that some musicians say are the best way to go for music. So I have two new music programs to try.

I immediately tried the two new programs with my violin and my cello. For reference, other people have told me that my cello has a warm, mellow tone and my violin is bright and brash, and this is how they sound to my ears with the old OPNs. With both of the new programs, the harshness of the MyMusic program was gone. The Custom Music program, the one similar to the old OPN Music program, sounded very warm with both my cello and violin, and the Custom Linear program sounded accurate, almost clinical. I could take either program, but like Goldilocks, I would really like something in the middle. I might like to try the DSL program also. I am very encouraged that the music problem with the Intents is fixable.

Last night, however, I went to my child’s end-of-year school band concert. I tried both music programs and overall found it hard to clearly hear the individual instrument sections the way I used to be able to. I’ve been to many school band concerts with the OPNs, and I turn them down since the music is quite loud, but I can still hear and enjoy the music. Last night was different. I can think of a couple of other things I could have tried, like adjusting the EQ settings on each of the Custom programs and moving the volume up and down, but I won’t have an opportunity to try any of those in that setting again. I suppose I could try finding some marches to listen to and turning the stereo up very loud? All in all, I enjoyed hearing my son play, but the experience made me sad that I couldn’t hear as well as I wanted to, as well as I could with the OPNs.

The main difference in the sound processing between the OPN and the Intent is that the Intent uses a second-generation DNN to “recreate” the sound scene, while the OPN has no DNN and simply presents the sound scene “as is,” without breaking it down and rebuilding it. So in effect, the integrity of the sound scene is probably better preserved through the OPN, because there is minimal alteration to it (at least with the original legacy built-in Music program with minimal processing), compared to the Intent, where the sound scene is broken down and rebuilt.

So with a single sound source, or a handful of discrete sound components, one might not notice much difference between the two technologies in the OPN and the Intent. But for a more complicated sound scene, a whole band with lots of instruments blending together, it might be harder for the Intent’s DNN to rebuild that scene as accurately as the OPN’s presentation of it “as is.” I hope this makes sense. As an analogy, that’s why some folks bemoan the transition from analog to digital HAs as taking away the authenticity of the musical sound, because of processing like compression and what-have-you. In the same vein, one can bemoan the transition from the “as is” (albeit still digital) sound reproduction of a digital HA like the OPN to AI sound reproduction by the Intent, where the recreated sound (perhaps one can label it a “very good fake” reproduction) might take away some of the clarity and complexity that can only be discerned from an “original” and not from a “fake” AI copy.
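To make the contrast concrete, here is a tiny, purely illustrative Python sketch. It is not Oticon’s actual processing; the frequencies, gains, and the deliberately imperfect “instrument estimator” are all made up. It just shows that a pass-through path that only applies gain keeps the original mix intact, while a decompose-and-rebuild path inherits whatever error its component estimates contain.

```python
import numpy as np

# Conceptual toy only -- not Oticon's algorithm. It contrasts a "pass-through"
# path (apply gain, leave the mix intact) with a "decompose-and-rebuild" path
# (estimate components, then re-sum them), where any estimation error shows up
# as a change to the original sound scene.

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000)

# A toy "sound scene": two instruments plus room noise, already mixed.
violin = 0.5 * np.sin(2 * np.pi * 660 * t)
cello = 0.7 * np.sin(2 * np.pi * 220 * t)
room = 0.05 * rng.standard_normal(t.size)
scene = violin + cello + room

# Path 1: present the scene "as is" (simple gain, like a minimal music program).
as_is = 1.5 * scene

# Path 2: break the scene into estimated components and rebuild it.
# The "estimator" here is deliberately imperfect to mimic reconstruction error.
est_violin = 0.5 * np.sin(2 * np.pi * 660 * t + 0.1)   # slight phase error
est_cello = 0.65 * np.sin(2 * np.pi * 220 * t)          # slight level error
rebuilt = 1.5 * (est_violin + est_cello)                 # room sound discarded

# The pass-through path preserves the mix exactly; the rebuilt path differs
# by however much the component estimates were wrong.
print("as-is deviation:  ", np.max(np.abs(as_is - 1.5 * scene)))
print("rebuilt deviation:", np.max(np.abs(rebuilt - 1.5 * scene)))
```

With one or two strong components the rebuilt version can be nearly indistinguishable; the more components blend together, the more the small per-component errors add up, which is the intuition in the paragraph above.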

Of course, this is just my theory as to why you find a complex musical sound scene rendered better by the OPN than by the Intent. And it’s probably only more obvious to you because you HAVE the benefit of something to compare against → you already know how it should sound with the OPN, and that helps you notice what is missing with the Intent. Folks who went from a More or Real to the Intent might not know what’s missing like you do and still be as happy as a clam.


Do you know whether those two music programs have directionality set to “fixed omni”? If it is set to “neural automatic,” I guess @Volusiano has a point, and you could change it to “fixed omni” and test again. At the moment I have a pair of Opn 1, More 1, and Intent 1 around, and the difference between the stripped-down music programs is minimal, which is somewhat expected since my crafted music program has all the bells and whistles toggled off.

Definitely second @e1405’s point that all music programs should have the Directionality setting set to Fixed Omni. It’s almost a given that it needn’t even be asked, but I guess it doesn’t hurt to ask.

Thanks @e1405 for sharing that you find a minimal difference between your OPN, More and Intent 1. Maybe that disproves my theory of why the OPN’s music might be better sounding than a DNN-based HA like the More or the Intent.

I don’t know all the pathways the audio signal goes through, but if the mics are set to neural automatic in any music program, your point very likely stands! MyMusic uses fixed omni.

I think the audiologist used Fixed Omni on both custom music programs when she changed some settings, turning many things off. I’ll check at my next appointment. I wonder if there are other settings to enable or disable or set a specific way that would reduce processing or otherwise improve the sound scene.

I’ll continue listening, making notes of what I like and what I don’t like, and continue working with my audiologist to get this figured out. We have definitely moved in the right direction with these custom music programs, and I am very encouraged by it.

It should be the same settings as MyMusic. I go one step further and disable feedback management as well.

I wonder whether you are hearing lots of new sounds (that your OPN wasn’t picking up) and need some time to get used to the new way the soundscape is presented to you.


Again, to parrot @e1405, I’ll add this screenshot of the More Sound Intelligence screen, which you can show your audi:

Notice that the Neural Noise Suppression box is unticked, Virtual Outer Ear is set to Aware, and Directionality is Fixed Omni.

Tbh, I have the Mores, and have all that stuff turned off, even in my General program.
To me, Less is MORE!!
Too much digital AI processing can ruin the sound experience for me.


By analogy, when driverless cars first came out, the response was: how hard can this be to design? It turns out, very hard indeed! The hundreds of decisions that human drivers make over a minute in city traffic can’t all be accurately predicted and coded into a program: freeway driving; merging into traffic; braking and making split-second decisions when driving on ice, when a kid leaps into the lane after a ball, or when the truck ahead gets a flat and begins veering wildly, etc. In short, tasks that an experienced driver can accomplish cause driverless cars to make the wrong ‘decisions’. Of course, they’re not ‘deciding’ anything but merely following an algorithm, which is why they’ll never be convicted of manslaughter in court.
My point is that our soundscape is likewise far more complex than we’re really aware of. AI may well do an excellent job of helping us hear a voice in a noisy environment; those parameters have been studied and adjusted for. Perhaps AI is best as a dedicated program for just such situations. But for music? Nah. And for wandering about in the world, maybe not. Too much going on!!! And anyway, in those conditions yet more processing is often not necessary at all.


Driverless cars cannot be achieved solely with human-made rules coded as algorithms, because, like you said, there are way too many scenarios and environments to consider. I don’t think any company doing self-driving takes that approach, however. The approach they take is AI-based machine learning: they feed millions of driving scenarios into a DNN and train it to get better and better at driving. This way, the more data they feed it, the better it gets at self-driving. It also means they don’t need to exhaustively cover every rule, which isn’t feasible when humans have to keep adding more code and rules by hand. They just need to feed the DNN enough driving data that it keeps improving at self-driving through training.

Tesla has millions of EVs on the road so far, every one of them equipped with cameras whose footage can be fed back into the Tesla network and used as driving data for its machine-learning DNN. Yet Tesla didn’t introduce neural-net-based vehicle controls until FSD V12 recently. Compute-resource constraints also hampered its FSD training until recently. But with the compute constraint finally cleared up and FSD V12 rolled out, it has gained much better reception and acceptance than previous versions of FSD that were not neural-net-based, because it seems much more reliable now.

The same approach will have to happen with hearing aids. Traditional processing can only go so far and will hit a roadblock at some point. But with machine learning, the DNN can keep improving with more and more data until it’s almost perfect. It’s just going to take a while to get there. We can already see that the DNN 2.0 in the Intent seems appreciably better than the DNN 1.0 in the More and Real. The limitation so far is that it’s still a supervised training scenario, so it’s still bounded by the resource constraints of a supervised training setup. Hopefully, once unsupervised training scenarios can be set up, its advancement might increase tenfold or more very quickly.
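As a rough illustration of what “supervised training” means here, the sketch below is a minimal, generic PyTorch training loop on synthetic data; it has nothing to do with Oticon’s or Tesla’s actual pipelines, and the data, network size, and hyperparameters are all made up. Labeled examples go in, the prediction error is measured, and the weights are nudged to reduce it; the labels are the expensive part, which is why unsupervised or self-supervised setups that drop the labeling requirement could scale much further.

```python
import torch
from torch import nn, optim

# Minimal supervised training loop on synthetic data -- a generic sketch of
# the pattern, not any vendor's real system. Real driving or hearing-aid DNNs
# are vastly larger, but the loop is the same: labeled examples in, prediction
# error out, weights adjusted to reduce that error.

torch.manual_seed(0)

# Synthetic "labeled" data: inputs x and target labels y that a human (or an
# automated labeler) would normally have to provide -- the costly part of
# supervised training.
x = torch.randn(1024, 8)
y = (x.sum(dim=1, keepdim=True) > 0).float()

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # how wrong the network currently is
    loss.backward()               # gradients of the error w.r.t. every weight
    optimizer.step()              # nudge weights to reduce the error

print(f"final training loss: {loss.item():.4f}")

# More (and more varied) labeled data generally improves the model, which is
# the scaling argument above. Unsupervised or self-supervised training aims to
# remove the labeling bottleneck so far more raw data can be used.
```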

Update: with some small tweaks, I am happy with P1, including in noisy situations; speech clarity and comprehension are very good.

I dropped the custom music program P3, the one based on the values from the legacy Music program on my OPNs, because it wasn’t working for me. I modified the custom music program P4, the roughly linear one, to have a bit more bass and mid and also changed it to Detail; now my violin and cello sound rich and full and beautiful when I play them.

As an experiment, I added a new P3 for music that uses DSL v5, and it sounds really good for speech/general and for music. P1 sounds a bit richer for speech and P4 sounds a bit richer for music, while the new P3 sounds clear and detailed and a bit dry. Both of the music programs work well and I am happy with both of them. Not sure which one I would choose if I had to pick just one. Which one sounds better, which one is more accurate? I don’t know. I’m still using both of them. Last night I was listening to music on my old B&W speakers on the old hi-fi and it sounded great, to my ears.

How is the quality of streaming music, as opposed to listening to music?