AirPods Pro 2 as a hearing device

Obviously I have hearing problems, and serious ones at that (just see the audiogram under my nick) :pleading_face:

So, if I understand correctly, only the iPhone can be used as an audio/video source, right?

If you want Spatial Audio on the AirPods, you’d need the iPhone and a subscription to Apple Music. That’s my understanding, anyway.

You have severe loss up to 2 kHz and then profound loss after that.

With hearing aids, you can probably use frequency lowering to shift the sounds above 2 kHz down into your audible range below 2 kHz. The AirPods Pro 2 have no frequency lowering like hearing aids do, so you can get amplification up to 2 kHz, but above that there’s no “recovering” to be done. Also, your loss between 250 Hz and 2 kHz is fairly flat, so effectively, even without any kind of audiogram accommodation enabled on the AirPods, simply turning the volume up on the AirPods “flatly” (like a normal-hearing person would), albeit loud enough for you to hear, will basically give you the amplification you need. Of course, that same “flat” level of amplification above 2 kHz from the AirPods will not be good enough for you in the high range, and it’s very unlikely that the AirPods can amplify that high range loudly enough to compensate for your profound loss there anyway.
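To illustrate the point with a minimal sketch (the “half-gain rule” here is my own stand-in assumption for whatever prescriptive formula Apple actually uses, and the numbers are hypothetical): when the loss is flat, an audiogram-derived gain curve comes out flat too, which is exactly what a plain volume boost gives you.

```python
# Sketch: per-band gain from an audiogram via the classic "half-gain rule"
# (target gain ~ half the loss). Hypothetical flat 70 dB loss up to 2 kHz.
audiogram = {250: 70, 500: 70, 1000: 70, 2000: 70}  # dB HL

def half_gain(loss_db: float) -> float:
    """Very rough prescriptive target: amplify by roughly half the loss."""
    return 0.5 * loss_db

for freq, loss in audiogram.items():
    print(f"{freq:>4} Hz: loss {loss} dB -> target gain ~{half_gain(loss):.0f} dB")

# Every band comes out at ~35 dB, i.e. a flat boost -- indistinguishable
# from simply turning the volume up by the same amount across the board.
```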

What I’m trying to say is that if you use the AirPods for streaming, the audiogram accommodation feature is pretty much useless in your case → you’re always going to miss the highs above 2 kHz anyway, and the flat amplification between 250 Hz and 2 kHz that works for you is pretty much what the AirPods do without audiogram accommodation, flatly, for normal-hearing people anyway. You’ll just have to crank the AirPods volume up much louder than normal-hearing people do to hear it.

What is not clear is whether there’s any more “headroom” to be had if you use audiogram accommodation or not. By that I mean: let’s say that without the audiogram, the AirPods will amplify flatly up to 100 dB only. With the audiogram telling it that your loss is around 70 dB, it would in theory need to amplify up to 170 dB for you, which is highly unlikely, so let’s say its hardware max is 120 dB. If without audiogram accommodation you only get 100 dB, but with audiogram accommodation you get 120 dB, then OK, audiogram accommodation still makes a 20 dB difference for you. But if 120 dB is the max it can and will amplify regardless of whether audiogram accommodation is on or not, then you might as well not bother setting up your audiogram on the AirPods.
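The arithmetic in that scenario, as a hedged sketch (all three numbers are the hypothetical ones from the paragraph above, not Apple specs):

```python
# Hypothetical numbers from the scenario above -- not Apple specifications.
flat_max = 100      # dB: max output with no audiogram loaded
hw_max = 120        # dB: ceiling the hardware can physically deliver
loss = 70           # dB: hearing loss entered via the audiogram

requested = flat_max + loss             # 170 dB "asked for" by accommodation
delivered = min(requested, hw_max)      # clipped at the hardware ceiling: 120 dB
extra_headroom = delivered - flat_max   # 20 dB gained by loading the audiogram

print(f"Audiogram accommodation buys {extra_headroom} dB of extra headroom")
# If flat_max were already equal to hw_max, extra_headroom would be 0 dB
# and loading the audiogram would buy you nothing.
```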

So, IF audiogram accommodation does not make a difference in YOUR case, then you’re not really limited to using the AirPods with the iPhone only (to take advantage of audiogram accommodation). You’ll be able to use the AirPods with any BT device. So the remaining question is the Spatial Audio feature. If you want this, then you’ll probably still be limited to using the AirPods within the Apple ecosystem, with an Apple device that supports Spatial Audio. Below is a screenshot from Google of which Apple devices support Spatial Audio:

3 Likes

I’m a big fan of “mirroring”, using my Apple devices to ‘cast’ to my Sony TV (Apple TV enabled).

1 Like

Dr. Cliff has just released a YouTube video today detailing his review of the AirPods Pro 2 as hearing aids. For those who don’t want to spend the time watching the entire video but want a summary: he pretty much did the same thing he did with the AirPods Pro 1 (first gen). He enlisted his assistant, who has very mild hearing loss, did REM, and found pretty much the same thing as with the AirPods Pro 1: in Transparency mode, the amplification is not up to par with his assistant’s hearing aids’ performance. It really falls short on amplification in the mids and highs, where his assistant could use some help. And to confirm, this was already with the audiogram accommodation entered into the iPhone. He played around a lot with the Transparency parameters like the Tone (Dark to Bright), the Amplification volume, and the Balance (left to right) to improve it, and was able to improve the performance somewhat, but not anywhere near the target level.

Near the end, he concluded that the APP2 performs horribly as a hearing aid, even for people with mild hearing loss like his assistant. But what he failed to understand is that the APP2’s Transparency mode doesn’t take the audiogram accommodation into account like streaming does. That’s why streaming has enough and proper amplification in the mids and highs, thanks to the audiogram accommodation, while Transparency doesn’t take the audiogram into consideration, which is why it performs much worse in this mode than in streaming.

4 Likes

I see in Headphone Accommodations that there is an on/off setting at the bottom for APPLY WITH: Transparency Mode. For me, I see that my iPhone toggled it off at some point, but it absolutely works with that setting on.

The iPhone is just needed to initiate it and re-enable it if it turns off. I’ve had it turn off on me for unknown reasons from time to time.

I never said that the Transparency mode doesn’t work. Of course it works, and of course it’s not turned OFF by mistake. What I said is that it doesn’t support audiogram accommodation. This is based on my personal experience and on what a Tier 2 technical support person at Apple told me, which confirmed what I experienced.

So it’s not about whether I accidentally had the Transparency mode turned off by mistake or not. I have the Amplification in Transparency mode turned up to max volume, so if it were OFF by accident, there’s no way I wouldn’t have noticed. I can hear the Transparency mode amplification working, but because it’s a flat amplification and not based on my audiogram, I only hear enough amplification in the lows, where my hearing is still moderately usable.

For somebody with only a mild hearing loss, the flat amplification in Transparency mode, especially if set to max volume, would probably indeed give them a noticeable boost all across the spectrum, fooling them into thinking that if there’s amplification, then it must be their audiogram-tailored amplification, even if it’s just flat amplification. But for somebody with a severe hearing loss in the mids and highs like me, it’s very obvious right away that the flat amplification is not adequate at all for my mids and highs. In streaming mode, however, I get adequate amplification in the mids and highs, which tells me that streaming does have the audiogram knowledge and supports it.

There has been more than just Dr. Cliff’s video on YouTube. Other HCPs have also made YouTube videos showing REM results for the APP2 or APP1, and they have all concluded that the APP1 and/or APP2 does not amplify to target even for an audiogram with just a mild hearing loss. So you gotta ask: why can’t the APP amplify to target per the REM test, if it indeed supports audiogram accommodation, even for a mild audiogram? The only logical conclusion is that the Transparency mode doesn’t support and amplify to the audiogram. It only amplifies without the audiogram knowledge.

In the Dr. Cliff video, he said that he went to great lengths messing around with the Custom Transparency Mode settings like Amplification, Transparency Balance, Tone, Ambient Noise Reduction, and Conversation Boost in order to get the APP2 Transparency mode amplification closer to the target gain for his assistant’s hearing loss, and it helped a little, but still no cigar on matching target. But those settings are NOT meant as tools to meet a target gain in the first place. They’re only meant as tools to adjust for the user’s personal preference. A normal person doesn’t even have Dr. Cliff’s REM tools to experiment like he did to try to meet the target curve anyway. So what Dr. Cliff did there was putting the cart before the horse to get what he wanted, and he still couldn’t get there, even with his professional tools.

1 Like

That wasn’t my claim (that you said Transparency Mode doesn’t work). I’m saying that all of the Accommodations (including Audiogram) work (to a degree) in TM for a range of use cases, and that the Audiogram is, from my experience, being taken into account.

I’m more of the empirical sort and give more weight to the measurements and my own experience than customer service messaging.

I’ve left all of the Transparency Mode customizations at default and am only applying or not applying the Audiogram accommodation. It’s a stark difference to me (I can tell that TM isn’t just applying scalar gain), but I’m not in the severe/profound range in the mids/highs. Cliff Olson’s test subject has mild/moderate HL, as do I.

I’d agree that the APP2 is likely totally inadequate for the severe/profound use case, and behind proper HAs for mild/moderate, based on Dr. Cliff’s measurements. I don’t know how much of that difference is hardware vs. software, though there are likely tradeoffs in that area for an earphone with a mid-tier cost of goods (with the usual Apple margins) that isn’t targeted for the HA market.

On its face, APPLY WITH: in Headphone Accommodations refers to one of four modes (Audiogram/Balanced Tone/Vocal Range/Brightness) being applied to Phone/Media/Transparency Mode, where Transparency Mode has additional customizations available on the next screen.

By the way, Ambient Noise Reduction and Conversation Boost are both worthwhile features to experiment with in Transparency Mode. It’s like a lite preview of some of the once-unique features of digital hearing aids. They are only available in Transparency Mode, so if one wants something similar for phone calls on an iPhone, iOS 16.4 has recently brought FaceTime’s Voice Isolation to phone calls; you have to enable it in Control Center after a call has started, though.

Yeah, I believe that TM can make a stark difference for you, and probably for many others with milder hearing losses. But like I said in my previous post, it’s most likely not because it matches the specifics of your personal audiogram to a T like you might think; it’s simply because the amplification in TM, even if flat and not personalized to anybody’s audiogram, is noticeable enough to your hearing that you find it adequate, because your hearing loss is only mild or moderate.

A crude analogy is perhaps the old generic TV amplification devices that were sold years ago to help people hear the TV better. Those were simple amplification devices that weren’t personalized to any audiogram. They probably weren’t completely flat either, but boosted a little more in the mids and highs, because those are the common areas where improvements can be made. So people with mild hearing losses found them useful enough to buy for watching TV. Even people with more severe losses might still have found them useful as long as they cranked the volume up a bit more, because it’s still better than nothing.

The APP2 in TM is similar to this analogy. Its normal no-audiogram TM amplification is probably not 100% flat, but probably favors slightly higher (yet still mild) gains in the mids and highs (to account for the natural ear-canal resonance mentioned in one of those videos), which is what showed up on the HCPs’ REMs in the YouTube videos. But that doesn’t mean it’s personalized to an individual’s audiogram. Hence it still wasn’t enough to match the target gain curve. So for folks with a mild loss, it’s OK (though likely not great). But for folks with moderate to severe loss, it’s flat out no good as a hearing device.

At the risk of sounding like a broken record, the bottom line is that the APP2 may fool mild-hearing-loss folks into thinking that it supports their customized audiogram, but it actually does not. The telltale sign is that moderate-to-severe-loss folks (like me) can tell right away that the APP2’s TM is not good enough as a hearing device for them. But if streaming is good enough for these same moderate-to-severe-loss folks (like me) while TM is not, then the only conclusion is that TM does not support their audiogram the way streaming does.

I’m paying $5K for Widex Moment 440 sRIC HAs, with my first fitting next Monday. If I thought the APP2s were adequate, I wouldn’t be doing that. I’m definitely not getting HAs to improve on APP2 streaming.

I’m not claiming that the accommodation for streaming vs. transparency is equally effective - the measurements are showing an attempt at accommodation, but the transducer chain is more complicated for the latter. I wouldn’t be surprised to find that the iPhone is doing the heavy lifting for the streaming accommodation, and that the H2 SoC has a different capability on its own.

Do we have some data for the streaming compensation that can be compared to the transparency compensation?

I agree with you that the H2 chip onboard the APP2 does the Transparency processing (along with the ANC, since both are mic-related), separately from the iPhone, which does the streaming processing (along with the spatial audio and audiogram support, which are more content-related), because it makes obvious logical sense that that’s how the flow goes.

The key question is whether the audiogram gets uploaded from the iPhone to the APP2 or not. The Apple Tier 2 technical support guy told me it doesn’t, and that’s why there’s no audiogram accommodation for the Transparency mode (this is coming straight from the horse’s mouth). Obviously some of the basic settings like volume control and ANC/TM on/off can be done from the iPhone and relayed over to the APP2, and also the other way around. But the audiogram data may not be simple toggle data like that. And even if it were simple enough to upload to the APP2’s H2 chip, the question remains whether the H2 chip has the processing power, like the iPhone, to process and accommodate the audiogram adjustment or not. I’m in the camp that believes the audiogram does not get uploaded to the APP2, like the Apple Tier 2 tech said.

I’m not aware of any data for the streaming compensation that can be compared to the transparency compensation, short of the anecdotal experience of people like you and me, which is subjective and not objective. I would love it if one of those HCPs who did their YouTube videos had thought about collecting data for the streaming compensation in addition to the transparency compensation, but none of them did. Then again, I’m not aware of any established verification method for streaming, only REM. So it’s understandable that all they did was REM, with the assumption (correct or not, that’s up for debate) that the TM supports the audiogram.

My question for Apple or those doing REM with the APP2 would be the following: how does the default TM “Audiogram” compensation relate to the actual audiogram data? Dr. Cliff’s TM measurement vs. his subject’s uncompensated canal resonance (see below) isn’t what I’d call flat amplification. It does look somewhat flat between 250 Hz and 1 kHz, but it’s getting over halfway to target just below 7 kHz. Between 2 and 5 kHz it’s not doing much, so I don’t see any benefit for speech in that plot.
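If someone wanted to quantify that, here’s a minimal sketch of the comparison I have in mind (all numbers are hypothetical, loosely mimicking the shape of the plot described above; the half-gain rule is my own stand-in for a real prescriptive formula like NAL-NL2):

```python
# Hypothetical REM-style comparison: measured TM gain vs. prescriptive target.
# These are NOT Dr. Cliff's actual data, just numbers echoing the description.
audiogram = {250: 30, 500: 35, 1000: 40, 2000: 45, 4000: 50, 7000: 55}  # dB HL
measured  = {250: 2,  500: 3,  1000: 3,  2000: 5,  4000: 8,  7000: 16}  # dB gain

for freq, loss in audiogram.items():
    target = 0.5 * loss                      # half-gain stand-in target
    pct = 100 * measured[freq] / target
    print(f"{freq:>4} Hz: measured {measured[freq]:>2} dB of "
          f"{target:.0f} dB target ({pct:.0f}% to target)")

# Flat and far below target through the 2-5 kHz speech range, creeping past
# the halfway mark only near 7 kHz -- which is the shape being described.
```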

My personal anecdotal experience is that I’m hearing high-frequency content that I don’t normally pick up (wood-floor and carpet sounds under my feet, birdsong, the whistling of my heater vent, etc.). I wouldn’t say speech quality is improved, and though I haven’t tried Conversation Boost with an actual person, it didn’t improve my comprehension of the announcers of a TV baseball broadcast last night. TM did help, as did raising the overall amplification in the TM custom settings, though the latter was analogous to raising the TV volume.

I haven’t experimented with loading different audiogram data to see, as I was satisfied with the streaming audio quality and didn’t feel the need to tweak it. I don’t have the same comparative baseline for TM, as I don’t yet have HAs or anything verified to my prescriptive target.

The H2 in the APP2 is doing a lot of work with regard to “adaptive transparency” in terms of amplitude compression, is obviously capable of dynamic EQ (though some of that could be hard-coded), and does store some measure of EQ data for the TM customization.

It should be able to get closer to target, in my opinion. I wonder why it doesn’t.

1 Like

I found a post from Dr. Abram Bailey (owner of this forum) back in Sep ’20 about the APP1, just when it came out. The presentation he did was actually excellent, I think. Based on the data he provided, I think he was able to prove that the APP1 definitely performs differently when the audiogram is loaded into the iPhone, so I guess I’m going to have to eat my words that the Transparency mode doesn’t load the audiogram.

He did a couple of test audiograms. The first is a typical ski slope, and the second is a reverse ski slope where the lows carry the big loss. With both test audiograms, he showed that the Transparency mode compensation using the audiogram is inadequate, even after the ad-hoc adjustments he made on the customization page. Without the ad-hoc adjustments, the Transparency compensation seems nowhere near adequate.

3 Likes

Glad you found that, @Volusiano. And I praise you for your curiosity and scientific spirit. It has been my impression that my AirPods Pro 1 boost sounds according to my audiogram. They definitely help me understand people talking around me. I would say they are decent; they do help me get by if I am not wearing my hearing aids. We talked about this in another thread; probably @mikehoopes and I have a more positive perception of the aid that TM provides due to our mild-to-moderate hearing loss.

2 Likes

Thanks @e1405 for your comments. Surprisingly, my biggest disappointment is not that the APP1 and APP2 don’t compensate enough, even though they can read the audiogram and try (poorly) to compensate for it in TM. It’s that even the Apple Tier 2 technical support people don’t know what they’re talking about and simply told me the wrong thing (maybe what they thought I wanted to hear) just to shut me up and be done with me.

I don’t know if it was done deliberately or through ignorance, but probably both. Through ignorance because the guy didn’t really know the real answer in the first place, and deliberately because if he had said yes, TM does support audiogram accommodation, then he knew he wouldn’t be able to explain why I wasn’t able to hear the compensation as well as I could with streaming. So the easiest answer for him was NO; then he wouldn’t have to answer any more follow-up questions I might have had.

1 Like

I was replying to another thread about bicycling in the wind, and it occurred to me that you can wear BOTH the hearing aids (as long as they’re a RIC style with just a thin wire going into the ear canal) AND the AirPods at the same time. This doesn’t solve what we’re talking about here, whether the AirPods can be used as a hearing device via the Transparency mode. But if it’s not about replacing your hearing aids with the AirPods, and instead about being able to hear the ambient environment around you WHILE using the AirPods to listen to streaming content, then apparently it’s possible to wear both at the same time.

Of course you’d connect the AirPods to your iPhone, and the hearing aids would not be connected to the iPhone. And you’d probably want to put the AirPods in ANC mode, because you don’t want the AirPods’ Transparency mode to compete with the hearing aids that are already providing the ambient sounds for you. This way you can enjoy the best of both worlds in parallel.

P.S. Of course, a caveat is that this wouldn’t work if you wear a custom mold that sticks out, leaving no room to fit your AirPods in, or if you wear a closed dome without any vent to let the AirPods’ sound through.

Might this piece help?

By way of comparison, I have been using the Nuheara IQBuds Boost for a number of years. These use an EarID routine to create a NAL/NAL2 setting for each ear. AFAIK, once completed, the NAL/NAL2 ‘prescription’ is stored onboard the IQBuds, and incoming real-world sounds are processed against it before they hit the ear. The point I’m making here is that these devices amend incoming real-world sounds in accordance with that EarID prescription.

The IQBuds also allow BT streaming from multiple devices. EarID does NOT affect this BT stream at all; adjustment of the Equaliser on the streaming device is what affects it. While streaming, the world can be shut off completely, or various levels of world sound can be let in as required. So, while streaming with the world on, there are two separate equalisers processing the sound: one for the world sounds, processed on the IQBuds, and one for the streamed audio, processed on the phone, TV or whatever.
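To make that two-equaliser split concrete, here’s a minimal conceptual sketch (function names and gain values are my own illustration, not Nuheara’s actual code): the world path and the stream path are shaped independently and only meet at the ear.

```python
import math

def mix_db(a: float, b: float) -> float:
    """Combine two levels expressed in dB (incoherent power addition)."""
    return 10 * math.log10(10 ** (a / 10) + 10 ** (b / 10))

def onboard_world_eq(bands: dict) -> dict:
    """World path: shaped on the buds by the stored EarID 'prescription'."""
    prescription_gain = {250: 5, 1000: 10, 4000: 18}   # dB, hypothetical fit
    return {f: lvl + prescription_gain[f] for f, lvl in bands.items()}

def phone_stream_eq(bands: dict) -> dict:
    """Stream path: shaped by the equaliser on the source device instead."""
    device_eq_gain = {250: 0, 1000: 2, 4000: 4}        # dB, user's phone EQ
    return {f: lvl + device_eq_gain[f] for f, lvl in bands.items()}

world = onboard_world_eq({250: 60, 1000: 55, 4000: 40})   # ambient mic input
stream = phone_stream_eq({250: 65, 1000: 65, 4000: 60})   # BT audio input

# Neither EQ ever touches the other's signal; the two paths only meet here.
at_ear = {f: round(mix_db(world[f], stream[f]), 1) for f in world}
print(at_ear)
```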

I’ve laid this out as an explanation of how an alternative to the APP2 deals with this, which might be helpful in clarifying how the AirPods work.

Hope this helps…

3 Likes

Thanks for sharing about the Nuheara IQBuds Boost. Yeah, I do remember there were a number of threads on it on this forum when it first came out as an OTC alternative to hearing aids. Frankly, I don’t think the AirPods Pro were designed with being an OTC alternative to hearing aids in mind like the Nuheara buds were. I think the AirPods’ priority market is still normal-hearing people, but Apple was able to implement a few tweaks to throw in “some” level of audiogram accommodation as a by-product of functionalities they already happened to have available. So they did it, but not as a full-fledged OTC alternative to hearing aids, probably more as an afterthought. Nevertheless, they do have some other awesome features like ANC and spatial audio, which is what makes them wildly popular and wins them great reviews; just not as a hearing device substitute for environmental sound, but maybe only for streaming.

1 Like

Thanks, @Volusiano. I tried that, but my AirPods barely stay in place without my hearing aids while I’m working out. It’s even worse with my hearing aids in. On the other hand, I count myself as one of the lucky ones, since the Transparency mode allows me to understand people reasonably well. Maybe that’s due to the flat loss in my left ear and my 100% word comprehension score in both ears.

OK, yeah, I can see that if your AirPods can hardly stay in your ears in the first place, then this approach wouldn’t work out well for you, like you said. Good thing you have a flat loss in your left ear; I can see how that would make the AirPods’ Transparency mode work out better for you. If I were you, there would be no need to attempt wearing both at the same time anyway.

But hopefully this suggestion can help someone who has a much heavier loss like myself and who doesn’t have an issue keeping their AirPods in place. Thanks for pointing out that other caveat about the fit issue, which I hadn’t thought about.

1 Like