Premium vs advanced hearing aids - Dr Cliff's YouTube video - what are your comments?

For those who don’t want to watch this rather long video, the gist is that Dr. Cliff started out by citing a research paper which found that the differences between premium and lower-tier hearing aids were not significant enough to be worth the extra money, and which stressed that the competence of the hearing care provider matters more.

But Dr. Cliff went on to run a 4-person single-blind experiment with his own patients anyway, comparing an Advanced and a Premium hearing aid (he didn’t specify the brand/model). What he found is that even though all 4 reported insignificant differences between the two tiers, when asked which ones they would pick (without knowing which was which), all 4 picked the premium HA over the advanced HA.

Furthermore, all 4 were willing to pay $200 more for the HA they picked (I think still without being told which was which), and when asked “what about $500 more?”, they still stuck to their choice.

There are many flaws with his study, starting with the very small sample size, but there’s still only a 6.25% chance that all 4 would have picked the same premium-level HA purely at random. And the fact that they’d pay $200 (or even $500) more for their choice - possibly per HA and not per pair - suggests there may be something to be said for a monetarily quantifiable intrinsic value of the premium tier that all 4 perceived despite rating the two tiers almost the same.
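For what it’s worth, here’s where that 6.25% figure comes from. It’s just the chance that four independent 50/50 picks all land on the same aid - a back-of-the-envelope sketch, assuming each choice really were a coin flip:

```python
# Chance that all 4 participants pick the premium aid purely by chance,
# assuming each choice is an independent 50/50 coin flip (a simplifying assumption):
p_all_four_premium = 0.5 ** 4
print(p_all_four_premium)  # 0.0625, i.e. 6.25%
```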

2 Likes

Just for reference, only for the Oticon More, based on Zip Hearing’s online pricing: they want $3,200 for a pair of the More 3 (the Standard), $3,800 for a pair of the More 2 (the Advanced, $300 PER HA more than the More 3), and $4,800 for a pair of the More 1 (the Premium, $500 PER HA more than the More 2).

I think Dr. Cliff was talking about paying $200 or $500 more between the Advanced and the Premium PER hearing aid, but I’m not sure. Maybe he’s talking about the price difference for the pair, although it seems like a very small price difference for a pair, so I have to assume that it’s the price difference per hearing aid.
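To make the per-aid vs. per-pair arithmetic explicit, here’s a quick sketch based on the Zip Hearing pair prices quoted above (the labels are just for illustration):

```python
# Pair prices quoted above for the Oticon More at Zip Hearing:
pair_prices = {"More 3 (Standard)": 3200, "More 2 (Advanced)": 3800, "More 1 (Premium)": 4800}

# The per-aid step between adjacent tiers is half the pair-price difference:
print((pair_prices["More 2 (Advanced)"] - pair_prices["More 3 (Standard)"]) / 2)  # 300.0 per aid
print((pair_prices["More 1 (Premium)"] - pair_prices["More 2 (Advanced)"]) / 2)   # 500.0 per aid
```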

One person made an interesting comment on his video about his remark that he makes more money on premium-level aids than on the lower tiers. Why should he? I guess if their cost is a fixed percentage of the retail price, then it’s natural that the math works out to more money for them when they sell the premium version.

My comment about this video is that it seems consistent with my experience when I trialed the OPN 3 and compared it against my OPN 1. I found the OPN 3 to perform very respectably, but I still found my OPN 1 to sound a little better, with a certain “je ne sais quoi” (“I don’t know what”) difference.

But if I found the same kind of difference between the More 3 and the More 1, would I pay a $1600/pair price difference for the More 1 over the More 3? Probably not. But I might pay the $600/pair difference between the More 2 and the More 3, though perhaps not the $1000/pair difference between the More 1 and the More 2. It depends on which features they cripple between the versions and whether those features are important to me, I guess.

Hah.

This may often be the case, but it doesn’t HAVE to be the case. Depends on your pricing structure. I think I have seen Cliff say that it is not the case for him, but I cannot be sure. Ideologically, I think that it shouldn’t be the case.

He shares a nice anecdotal story. But I’ve also seen lots of patients who demo the premium tech and then purchase at a more comfortable price level and end up happy because they don’t notice much difference.

It would be nice to see things like this done more rigorously on a larger scale, and I appreciate the attempt. Notably, some manufacturers slash functionality more aggressively as you drop down the tiers, so results with one manufacturer may not apply to another. It’s also likely that certain severities/configurations of hearing loss will experience tech differences differently than others (e.g., less benefit for patients in open domes).

I do think that he is on target with the “je ne sais quoi” of premium tech, and I think it is valuable to capture that patients will select it and pay more for it even without seeing much functional difference - if that holds in more than 4 people.

As for bias in this video, I don’t know. When he started out, Cliff definitely seemed to be on the side of evidence-based practice, and I’m sure that value is still there. But he’s also obviously interested in marketing, entrepreneurship, and making money. So if someone asked me whether I thought he was accurately representing real data, I wouldn’t know the answer.

4 Likes

The problem that I find with this is the questionnaires asking for users’ perception of the aids. Were the controls the same during the evaluation period for both aid A and aid B? If not, then the perceptions are based on two different environments.

Apart from the QuickSIN test, what data is factual and able to be benchmarked? This would give us hard facts rather than perceptions.

The conclusions of course are good for Dr Cliff. But I don’t think this is marketing. He says that clients have opted for premium level devices through their own volition over the years. One question could be - to what extent are people happy with their purchase because they know they have all these premium features (and they have paid top dollar for them) - a kind of confirmation bias if you will - even if in reality there is no independent data confirming the efficacy of each feature?

The other point I would make is that lifestyle factors are missing from this equation. A premium aid may be better all around, but the situations in which it is superior may have no relevance to an individual who is not very active.

3 Likes

The fairest test would be double-blind, as in a Dr. Cliff-style test it’s always possible that the audi, knowing which model is being fitted, might accidentally introduce some user bias through offhand comments made during fitting, etc.

I didn’t watch Dr. Cliff’s video, but not only would it be good to have a test with a bigger number of participants, it would also be interesting to know whether experienced HA users differ in their degree of preference from naïve, first-time HA users. It would be interesting to see the results of a blind (or double-blind) test among premium models of HA’s, too, but I doubt we’re going to get a video from Dr. Cliff saying that 70% of his patients in a blind test chose Brand X, as I think he wants to keep his options open. He always finds something good to say about most premium brands.

1 Like

You may find some of the details you’re looking for in his video if you watch it. He gave a lot of details about his test setup that did not make it into my summary because otherwise it wouldn’t be a summary anymore.

1 Like

I thought this illustration from today’s Guardian was very apropos of a discussion of HA costs.

2 Likes

Not to hold informal summaries of YouTube “research” to the same standard, but for comparison: in real scientific papers, the abstract will usually mention whether a study was double-blind and give some information on the population studied. Those are important features of any study that manages to get published in a reputable journal, and they help readers decide whether it’s worth diving into the details of a paper or just skimming the title and abstract. In that light, Dr. Cliff studying just four patients might be all we need to know (intended as more of a quip than a severe criticism of the YouTube video or of the gracious reader summary!). (I see that you now mention “single-blind” in your edited OP! I didn’t reread the rest yet.) :+1:

Actually, I’d like to take the opportunity presented by the “data science” in Dr. Cliff’s video to plug Dr. Tom Carpenter of Seattle Pacific University. He taught one of the online courses in Microsoft’s 10-course/project series in Machine Learning and Artificial Intelligence on edX.org (no longer available, though). However, he has a complete online course for free on YouTube that looks very similar to the Microsoft course in content (the one video I looked at is identical to the same topic in the Microsoft course): Tom Carpenter’s Data Science Research Methods Course [Full Course] - YouTube

What he did in his course was great. He’s an excellent, funny, and reasonably entertaining lecturer. But he didn’t immerse the students in math and data. Instead, his mission was more, “Don’t be fooled by numbers! Think about the numbers and how they were obtained. Think about experimental design! Could the numbers be misleading you?!” And he gave lots of often humorous examples in the Microsoft course of how you could be led astray by numbers you want to believe in!

For example, he discussed and illustrated how wrong you could be from just taking surveys from customers who walk in the door, say, at Walmart. You’re more likely to get responses from folks who are very happy or very angry, leaving out folks in the middle who are, meh, I’m okay but just don’t want to be bothered by a survey today.

Relative to Dr. Cliff’s A/B testing, he emphasized that if you really want to do it right, you need two cohorts: those who test A first and then B, and the reverse, those who test B first and then A, because testing A and B in a particular order may itself influence the outcome. He also discusses why you’d do that as opposed to having one group test A, another group test B, and then having each group rate its product independently for its features.
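A minimal sketch of that counterbalanced (AB/BA) assignment, with made-up participant labels, just to show the idea:

```python
import random

# Hypothetical participants; in a real study these would be recruited subjects.
participants = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]
random.shuffle(participants)

# Half the cohort trials aid A first and then aid B; the other half does the reverse,
# so any learning/order effect is balanced across the two sequences.
half = len(participants) // 2
order_groups = {
    "A_then_B": participants[:half],
    "B_then_A": participants[half:],
}
print(order_groups)
```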

The lecture that I found really made me respect his teaching ability is the one on false positives and false negatives (it probably helps to digest the earlier material on statistical power first). The last part of the lecture illustrating parts of an outcome tree that you’re on with false positives and false negatives was the most illuminating.

Data Science Research Methods | False Positives and False Negatives - YouTube
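As a tiny illustration of that outcome-tree idea (hypothetical numbers, not taken from Carpenter’s lecture): even a fairly accurate test can produce mostly false alarms when the thing being detected is rare.

```python
# Hypothetical screening example: 1% base rate, 95% sensitivity, 5% false-positive rate.
base_rate = 0.01            # fraction of cases that are truly "positive"
sensitivity = 0.95          # chance a true positive is detected
false_positive_rate = 0.05  # chance a true negative is flagged anyway

true_pos = base_rate * sensitivity
false_pos = (1 - base_rate) * false_positive_rate
prob_real_given_flagged = true_pos / (true_pos + false_pos)
print(prob_real_given_flagged)  # ~0.16: most flagged cases are false positives
```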

The bottom line is that Dr. Cliff’s experiment with only 4 participants probably doesn’t have enough statistical power (which relates to how the likelihood of being fooled by random variation decreases as the number of subjects increases) to avoid being fooled by a false positive - e.g., Dr. Cliff may have happened to pick subjects who, by their makeup and past experience, prefer the premium aid, whereas if he ran the same experiment 20 times over, the average result might be different.

Carpenter’s message is: be sure you have a good experimental design that fairly addresses the question you want answered, and then be sure you have sufficient statistical power (enough subjects) that you can say it’s unlikely the responses you got could be explained by random variation. An interesting thing he teaches is that the more black-and-white the difference between, say, two choices is, the fewer subjects you need to claim you found a difference; but when the perceived difference between A and B is much grayer, with overlapping response variation, you need a larger number of subjects to claim a statistically valid result.
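As a rough illustration of that last point, a standard power calculation (this sketch assumes the statsmodels package and a simple two-group t-test framing; the exact numbers depend on those assumptions) shows how the required sample size balloons as the effect gets smaller and grayer:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Standardized effect sizes: big, medium, and small perceived differences between A and B.
for effect_size in (1.2, 0.5, 0.2):
    n = analysis.solve_power(effect_size=effect_size, alpha=0.05, power=0.8)
    print(f"effect size {effect_size}: ~{n:.0f} subjects per group")
# Larger (clearer) differences need far fewer subjects than small (grayer) ones.
```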

He also discusses at length correlation vs. causation and how one had better not fall into the trap of blindly turning a correlation into a causation - giving examples of how messed up you can get going that route.

Edit_Update: Watched the whole 1st video out of Carpenter’s YouTube course and a big Microsoft logo flashed on the screen at the end of the video - so the course lectures are straight out of a Microsoft course that once cost $100 on edX.org to take (there were course lab projects and quizzes to take for the edX version, though, too). Great course for teaching that one just doesn’t see numbers and jump to a conclusion. I see that I gave his edX course a plug two years ago, even back then citing the false positives, false negatives lecture: What a Joke - Aspirin

3 Likes

I watched Dr Cliff’s video on his trial with 4 clients to see if they preferred premium hearing aids (Tier 1) to advanced hearing aids (Tier 2).
I also have a PDF of the original report published by the University of Memphis about their experiment to see if people could distinguish between premium hearing aids (Tier 1) and basic hearing aids (Tier 4) - this is the report that Dr Cliff referred to at the beginning of his video.
My comments are:

  • This experiment took place over the period 2011 to 2014 - 7 to 10 years ago - so it is reasonable to question whether the results are still valid, given how much hearing aid technology has changed over this period
  • The experiment was carefully designed and executed to generate statistically valid results, testing whether a set of people with the same type and similar levels of hearing loss could detect the difference between premium and basic hearing aids with regard to speech understandability in a variety of situations
  • Dr Cliff’s trial was not capable of generating statistically valid results. It is not valid to quote probabilities in this context.
4 Likes

Could you please post the link to the U of Memphis study? Thanks.

I found it interesting. I don’t care about technology level as much as I care about a good hearing outcome with the hearing aids. After all, that’s their job. I also want to make sure that the devices are fitted right using best practices like real ear measurements. Cost is a factor, and sometimes you have to use a lower tech level to keep it affordable. I don’t have insurance that covers hearing aids, and when it comes time to buy a new set I have to balance cost vs. benefits. Having options to choose from is important, just as much as reliability. Although the study was flawed and limited, it did start a discussion about tech levels. It’s nice to see the topic being discussed. In my humble opinion, it’s about keeping the cost down and the hearing benefits up.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4224118/

1 Like

@Goldfish: Thanks for the link!

:chair::chair::chair::chair:
SpudGunner

[EDIT: I’m a bit busy today, so I stopped reading at the point where it said:

  1. Only 4 hearing devices were used in the study, and

  2. These devices were introduced in 2011, and still available at the time of the study (2014) (so they are dinosaurs relative to the HAs being discussed in our Forum today.)]

I don’t think that the article is current enough to warrant citing as relevant today.

1 Like

What the heck, I’ll throw in my 2 cents. I think IF the hearing aid companies had solid evidence that their premium models helped one hear significantly better, they’d be blasting it from the rooftops. Just about everything I’ve heard claims that the only differences are in how the hearing aids handle speech in noise. (The only other difference I’ve heard of is that upper-end Oticons have a bigger dynamic range.) So, if one accepts that the big difference is speech in noise, and that there are major limitations on how well any hearing aid can deal with speech in noise, it makes sense to me to spend money on a remote microphone to supplement basic hearing aids.

6 Likes

I have used only two types of HA: a basic Siemens model provided free by our Health Service, and what was a top-range Phonak. I note that the OP says users all preferred the premium over the basic - naturally.
My first impression when I was given the Phonak last year was how ‘ordinary’ it sounded. In reality ‘natural’ would have been a better description.
Next was usability. Manually switching programs and volume on both Siemens aids versus automatic switching - no contest.
Then extras. Being able to tune the Phonak by an app was fancy and Bluetooth to my cell phone was excellent.
THEN I LOST THE RIGHT AID.
In the month between losing it and getting it replaced, I had to use a Siemens in the right ear. I could barely hear using my cell phone. The TV, using the TV Connector, was odd as I only had the left stereo channel. The Siemens, although small, was still twice the size of the Phonak. It worked, but it was back to manual volume.
THEN I GOT IT BACK.
Now I noticed the difference.
OK, this is a comparison between top range and bottom. Primary function, i.e. hearing, was acceptable, but the major difference was functionality.
If that top-range double-blind trial only looked at hearing, chances are the preference was random.
Now, looking at my last-year-model Phonak and comparing functionality with the latest model, I would happily pay the difference - say, up to $300 - for the enhanced Bluetooth function.

@roybrocklebank: One possible confounding influence of wearing two different models of HA at the same time, one in each ear, is that most premium pairs of HA’s communicate with each other, the better to hear. I know ReSound HA’s do, using NFMI to coordinate what each HA is doing. Presumably, wearing different brands, your two different model HA’s would have been giving you an inferior listening experience for this reason alone - maybe one or both was confused, if you didn’t visit an HCP to have each HA adjusted for having no mate while you awaited your replacement. Just a suggestion.

I agree with @MDB. As an example, I’m attaching below a feature comparison sheet for the ReSound One (without the in-ear M&RIE microphone, since there are 3 feature-level models for non-M&RIE HA, as opposed to only 2 models for the M&RIE-equipped HA).

When you look at the graph, it’s scary how much less you get for the low-end model. We want nothing but the best for our hearing, but the table would only be really useful if it came with good footnoted explanations of what effect the bells & whistles have on hearing, and in which environments. The advice I’ve read on this forum in the past is that a basic model of a premium HA will probably work fine if you have modest to moderate hearing loss and live and work in simple listening environments, whereas if you have a severe loss with a need for things like frequency lowering and participate in a lot of gatherings, some in noisy environments, maybe some of the more advanced features that are left out of the low-end models will be helpful. But as MDB points out, you can reach a point where only a remote microphone device can help. I don’t have enough years of HA/audi experience to know if it works out this way, but I would hope a good, experienced HCP would learn on the job, through patients, when a more advanced feature of a particular brand is actually useful, advise the patient accordingly, and save patients a bunch of bucks on bells & whistles they don’t really need for less complex hearing situations.

Surveying users who’ve gone for the top-of-the-line is always dangerous because usually when you’ve blown a lot of money on something, you try to find reasons to justify it - like buying a luxury car when a basic car will get you there, too.

Demo of all the features you don’t get with the most basic ReSound One model (OMG!):

2 Likes

Jim, feature-wise, when I was first being consulted about the provision of an advanced HA, those questions - ‘what do you need it for?’ - were at the top of her list.

Regarding mixing the Phonak (left) and Siemens (right): it was not an electronic clash, as the Siemens is effectively a dumb manual HA, and the remaining Phonak was essentially a slave without its master*, since it was the master right HA that I had lost.

  * Is master/slave a permitted description in the US?
1 Like

The master hearing aid can be set to left or right in Phonak’s programming software. The default is right.

1 Like

Interesting. As I am right-handed, that is probably why she set the right one as the master. Had I known that, I would have asked her to switch it to the left while I only had the one; then at least my phone would have worked.

I find these dedicated forums - I am in a Toyota one too - to be invaluable for learning how things work.