AirPod Pro 2 as hearing device

That wasn't my claim (I'm not saying you said Transparency Mode doesn't work). I'm saying that all of the Accommodations (including Audiogram) work, to a degree, in TM for a range of use cases, and that in my experience the Audiogram is being taken into account.

I’m more of the empirical sort and give more weight to the measurements and my own experience than customer service messaging.

I’ve left all of the Transparency Mode customizations at default and am only applying or not applying the Audiogram accommodation. It’s a stark difference to me (I can tell that TM isn’t just applying scalar gain), but I’m not in the severe/profound range in the mids/highs. Cliff Olson’s test subject has mild/moderate HL, as do I.

I’d agree that the APP2 is likely totally inadequate for the severe/profound use case, and behind proper HAs for mild/moderate, based on Dr. Cliff’s measurements. I don’t know how much of that difference is hardware vs. software, though there are likely tradeoffs in that area for an earphone with a mid-tier cost of goods (with the usual Apple margins) that isn’t targeted for the HA market.

On its face, the APPLY WITH: setting in Headphone Accommodations refers to one of four modes (Audiogram/Balanced Tone/Vocal Range/Brightness) being applied to Phone/Media/Transparency Mode, where Transparency Mode has additional customizations available on the next screen.

By the way, Ambient Noise Reduction and Conversation Boost are both worthwhile features to experiment with in Transparency Mode. It's like a lite preview of some of the once-unique features of digital hearing aids. They're only available in Transparency Mode, so if one wants something similar for phone calls on an iPhone, iOS 16.4 recently brought FaceTime's Voice Isolation to phone calls; you have to invoke it from Control Center after a call has started, though.

Yeah, I believe that TM can make a stark difference for you, and probably for many others with milder hearing losses. But as I said in my previous post, it's most likely not because TM matches your personal audiogram to a T, like you might think. It's simply that the amplification in TM, even if flat and not personalized to anybody's audiogram, is noticeable enough that you find it adequate, because your hearing loss is only mild or moderate.

A crude analogy is the old generic TV amplification devices that were sold years ago to help people hear the TV better. Those were simple amplifiers, not personalized to any audiogram. Their response probably wasn't completely flat either, but boosted a little in the mids and highs, since those are the areas where improvement is most commonly needed. So people with mild hearing losses found them useful enough to buy for watching TV, and even people with more severe losses might still have found them useful if they cranked the volume up a bit more, because it was still better than nothing.

The APP2 in TM is similar to this analogy. Its normal no-audiogram TM amplification is probably not 100% flat, but likely applies a little more (yet still mild) gain in the mids and highs (to account for the natural ear-canal resonance mentioned in one of those videos), which is what showed up in the HCP's REMs in the YouTube videos. But that doesn't mean it's personalized to an individual's audiogram, and hence it still wasn't enough to match the target gain curve. So for folks with a mild loss it's OK (though likely not great), but for folks with moderate to severe loss it's simply not good enough as a hearing device.
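To make the flat-vs-shaped distinction concrete, here's a toy calculation. Everything in it is invented for illustration: the audiogram numbers are made up, and the "half-gain rule" is only a crude classic rule of thumb, not NAL-NL2 and not whatever formula Apple actually applies.

```python
# Toy comparison: flat gain vs. audiogram-shaped gain.
# Hypothetical mild ski-slope audiogram (frequency in Hz -> loss in dB HL).
audiogram = {250: 10, 500: 15, 1000: 25, 2000: 35, 4000: 45, 8000: 50}

# Crude "half-gain rule": insertion gain ~ 0.5 x hearing loss per band.
# This is only a rough first-order prescription, not a real fitting formula.
shaped = {f: 0.5 * hl for f, hl in audiogram.items()}

flat = 15.0  # a single scalar boost in dB, identical at every frequency

for f in sorted(audiogram):
    shortfall = shaped[f] - flat
    print(f"{f:>5} Hz: shaped {shaped[f]:5.1f} dB, "
          f"flat {flat:.1f} dB, diff {shortfall:+5.1f} dB")

# The flat boost overshoots the lows and undershoots the highs -- a mismatch
# a mild-loss listener may not notice, but a steeper loss makes obvious.
```

The point of the sketch: the steeper the loss, the bigger the gap between a flat boost and what the audiogram actually calls for, which is consistent with mild-loss folks being satisfied and severe-loss folks noticing the shortfall immediately.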

At the risk of sounding like a broken record, the bottom line is that the APP2 may fool mild-loss folks into thinking it supports their customized audiogram when it actually does not. The telltale sign is that moderate-to-severe-loss folks (like me) can tell right away that APP2 TM is not good enough as a hearing device for them. And if the APP2 is good enough for those same folks for streaming but not for TM, the only conclusion is that TM does not support their audiogram the way streaming does.

I’m paying $5K for Widex Moment 440 sRIC HAs, with my first fitting next Monday. If I thought the APP2s were adequate, I wouldn’t be doing that. I’m definitely not getting HAs to improve on APP2 streaming.

I'm not claiming that the accommodation for streaming vs. transparency is equally effective; the measurements show an attempt at accommodation, but the transducer chain is more complicated for the latter. I wouldn't be surprised to find that the iPhone is doing the heavy lifting for the streaming accommodation, and that the H2 SoC has a different capability on its own.

Do we have some data for the streaming compensation that can be compared to the transparency compensation?

I agree with you that the H2 chip onboard the APP2 does the Transparency processing (along with the ANC, since both are mic-related), separately from the iPhone, which does the streaming processing (along with spatial audio and audiogram support, which are more content-related), because it makes obvious logical sense that that's how the flow goes.

The key question is whether the audiogram gets uploaded from the iPhone to the APP2 at all. The Apple Tier 2 technical support guy told me it does not, and that that's why there's no audiogram accommodation for Transparency mode (straight from the horse's mouth). Obviously some basic settings, like volume control and ANC or TM on/off, can be set from the iPhone and relayed over to the APP2, and vice versa. But the audiogram may not be simple binary data like that. And even if it were simple enough to upload to the APP2's H2 chip, the question remains whether the H2 chip has the iPhone's processing power to apply the audiogram adjustment. I'm in the camp that believes the audiogram does not get uploaded to the APP2, as the Apple Tier 2 tech said.

I'm not aware of any data for the streaming compensation that can be compared to the transparency compensation, short of anecdotal experience from people like you and me, which is subjective rather than objective. I would have loved it if one of those HCPs who did the YouTube videos had thought to collect data for the streaming compensation in addition to the transparency compensation, but none of them did. Then again, I'm not aware of any established verification method for streaming, only REM. So it's understandable that all they did was REM, with the assumption (correct or not is up for debate) that TM supports the audiogram.

My question for Apple, or for those doing REM with the APP2, would be the following: how does the default TM "Audiogram" compensation relate to the actual audiogram data? Dr. Cliff's TM measurement vs. his subject's uncompensated canal resonance (see below) isn't what I'd call flat amplification. It looks somewhat flat between 250 Hz and 1 kHz, and it gets over halfway to target just below 7 kHz, but between 2 and 5 kHz it isn't doing much, so I don't see any benefit for speech in the plot.

My personal anecdotal experience is that I’m hearing high-frequency content that I don’t normally pick up (wood floor and carpet sounds under my feet, birdsong, whistling of my heater vent, etc). I wouldn’t say speech quality is improved, and though I haven’t tried Conversation Boost with an actual person, it didn’t improve my comprehension of the announcers of a TV baseball broadcast last night. TM did help, as did some raising of overall amplification in the TM custom settings, though the latter was analogous to raising the TV volume.

I haven’t experimented with loading different audiogram data to see, as I was satisfied with the streaming audio quality and didn’t feel the need to tweak that. I don’t have the same comparative baseline for TM, as I don’t yet have HAs or anything that’s verified to my prescriptive target.

The H2 in the APP2 is doing a lot of work with regard to "adaptive transparency" in terms of amplitude compression, is obviously capable of dynamic EQ (though some of that could be hard-coded), and does store some measure of EQ data for the TM customization.

It should be able to get closer to target, in my opinion. I wonder why it doesn’t.

I found a post from Dr. Abram Bailey (owner of this forum) back in Sep '20 about the APP1, just when it came out. The presentation he did was actually excellent, I think. Based on the data he provided, I think he was able to prove that the APP1 definitely performs differently when the audiogram is loaded into the iPhone, so I guess I'm going to have to eat my words for thinking that Transparency mode doesn't load the audiogram.

He did a couple of test audiograms. The first is a typical ski slope, and the second is a reverse ski slope where the loss is mainly in the lows. With both test audiograms, he showed that the Transparency mode compensation using the audiogram is inadequate, even after the ad-hoc adjustments he had to make on the customization page. Without the ad-hoc adjustments, the Transparency compensation seems nowhere near adequate.

Glad you found that, @Volusiano. And I praise you for your curiosity and scientific spirit. It has been my impression that my AirPods Pro 1 boost sounds according to my audiogram. They definitely help me understand people talking around me. I would say they are decent; they help me get by if I'm not wearing my hearing aids. We talked about this in another thread; probably @mikehoopes and I have a more positive perception of the aid TM provides due to our mild-to-moderate hearing loss.

Thanks @e1405 for your comments. Surprisingly, my biggest disappointment isn't that the APP1 and APP2 don't compensate enough, even though they can read the audiogram and try (poorly) to compensate for it in TM. It's that even the Apple Tier 2 technical support people don't know what they're talking about and simply told me the wrong thing (maybe what they thought I wanted to hear) just to shut me up and be done with me.

I don't know if it was done deliberately or through ignorance; probably both. Through ignorance because the guy doesn't really know the real answer in the first place, and deliberately because if he had said yes, TM does support audiogram accommodation, then he knew he wouldn't be able to explain why I wasn't able to hear the compensation as well as I could with streaming. So the easiest answer for him was NO, so he wouldn't have to field any more follow-up questions.

I was replying to another thread about bicycling in the wind when it occurred to me that you can wear BOTH the hearing aids (as long as they're RIC style, with just a small wire going into the ear canal) and the AirPods at the same time. This doesn't settle what we're discussing here, whether the AirPods can be used as a hearing device via Transparency mode. But if the goal isn't replacing your hearing aids with the AirPods, but rather being able to hear the ambient environment around you WHILE using the AirPods to listen to streaming content, then it's apparently possible to wear both at once.

Of course you'd connect the AirPods to your iPhone, with the hearing aids not connected to the iPhone. And you'd probably want to put the AirPods in ANC mode, because you don't want the AirPods' Transparency mode to compete with the hearing aids that are already providing the ambient sounds for you. This way you can enjoy the best of both worlds in parallel.

P.S. One caveat: this wouldn't work if you wear a custom mold that sticks out, leaving no room to insert the AirPods, or if you wear a closed dome without any vent to let the AirPods' sound in.

Might this piece help?

By way of comparison, I have been using the Nuheara IQBuds Boost for a number of years. These use an EarID routine to create a NAL-NL2 setting for each ear. AFAIK, once completed, the NAL-NL2 'prescription' is stored onboard the IQBuds, and incoming real-world sounds are processed accordingly before hitting the ear. The point I'm making here is that these devices amend incoming real-world sounds in accordance with that EarID prescription.

The IQBuds also allow BT streaming from multiple devices. EarID does NOT affect this BT stream at all; adjusting the equaliser on the streaming device does. While streaming, the world can be shut off completely, or various levels of world sound can be let in as required. So, while streaming with world on, there are two separate equalisers processing the sound: one for the world sounds, processed on the IQBuds, and one for the streamed audio, processed on the phone, TV or whatever.
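The two-equaliser split can be sketched as two independent gain stages that only meet at the ear. Everything below is invented for illustration (band layout, gain values, names); it's the shape of the data flow, not Nuheara's or Apple's actual processing.

```python
# Illustrative two-path model: ambient sound gets the onboard "prescription"
# EQ; the Bluetooth stream gets only the source device's user EQ.
# All numbers and names are made up for illustration.

PRESCRIPTION_EQ = {250: 5.0, 1000: 12.0, 4000: 22.0}   # stored on the buds
USER_STREAM_EQ  = {250: 2.0, 1000: 0.0, 4000: 3.0}     # set on the phone/TV

def apply_eq(levels_db, eq):
    """Add per-band gain (dB) to per-band signal levels (dB)."""
    return {f: levels_db[f] + eq.get(f, 0.0) for f in levels_db}

world  = {250: 60.0, 1000: 55.0, 4000: 50.0}   # ambient sound at the mics
stream = {250: 65.0, 1000: 65.0, 4000: 60.0}   # decoded Bluetooth audio

ear_world  = apply_eq(world, PRESCRIPTION_EQ)   # processed on the buds
ear_stream = apply_eq(stream, USER_STREAM_EQ)   # processed on the source

for f in sorted(world):
    print(f"{f:>5} Hz: world {ear_world[f]:.1f} dB, "
          f"stream {ear_stream[f]:.1f} dB")
```

The design point: the world path must carry the hearing correction because only the buds ever see those sounds, while the stream path can be corrected anywhere upstream, which matches the onboard-vs-phone split being discussed for the APP2.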

I've laid this out as an explanation of how an alternative to the APP2 deals with this, which might help clarify how the AirPods work.

Hope this helps…

Thanks for sharing about the Nuheara IQBuds Boost. Yeah, I remember a number of threads about it on this forum when it first came out as an OTC alternative to hearing aids. Frankly, I don't think the AirPods Pro were designed with an OTC hearing aid alternative in mind the way the Nuheara buds were. I think the AirPods' priority market is still normal-hearing people, but Apple was able to implement a few tweaks to throw in "some" level of audiogram accommodation as a by-product of functionality they already had available. So they did it, but more as an afterthought than as a full-fledged OTC alternative to hearing aids. Nevertheless, they have some other awesome features like ANC and spatial audio, which is why they're wildly popular and have won great reviews; just not as a hearing device substitute for environmental sound, but maybe only for streaming.

Thanks, @Volusiano. I tried that but my AirPods barely stay in place without my hearing aids while I am working out. It is even worse with my hearing aids on. On the other hand I count myself as one of the lucky ones since the Transparency mode allows me to understand people reasonably well. Maybe that’s due to my flat loss in my left ear and 100% word comprehension score in both ears.

OK, yeah, I can see that if your AirPods can barely stay in your ears in the first place, then this approach wouldn't work out well for you, like you said. Good thing you have a flat loss in your left ear; I can see how that would make the AirPods' Transparency mode work out better for you. If I were you, there'd be no need to attempt wearing both at the same time either.

But hopefully this suggestion can help someone who has a much heavier loss like mine, and who doesn't have an issue fitting their AirPods into their ears. Thanks for pointing out that other caveat about the fitting issue, which I hadn't thought about.

I'm finding that approach works OK for me (with my RIC HAs with the Widex open round tips) if I don't need ANC. You don't get a completely full seal, according to the AirPods fit test, and bass leakage/attenuation is perceptible when comparing A-B. They're not quite as comfortable as without the RIC wires, or nearly as comfortable as HAs alone. Streaming sound and stability are much better, though.

I imagine Transparency and ANC aren’t intended to be combined with HAs, though I don’t see any big issues if one forgets to turn those features off. You could mute the HA mics, but at that point that would just be the alternative to putting them in a case or leaving them behind.

Good point that you may no longer get a completely full seal with the AirPods when wearing both. In my case, though, I still get a Good Seal result on the AirPods fit test, so I think this will vary with each person's fit in the first place. I think I'm lucky that my ear canals are big enough to seat the hearing aids' receivers deep enough that there's still room for the AirPods' tips to make a good seal at the opening of my ear canals.

And if you no longer get a good seal like you did without the hearing aids, then there's also a higher chance it won't be as comfortable with both on. It does still feel comfortable to me, though I haven't yet tried wearing both for as long as an hour, since I only just thought of this and haven't had a need to wear both for an extended time.

You're right that wearing the aids together with the pods removes the need to put the pods in Transparency mode, because the aids are providing the ambient sound now. You may also detect a short echo if their latencies differ enough. Even with the same latency, the double amplification may make the ambient sound louder than you want. And even if you don't mind that, you can probably save some AirPods battery time by turning Transparency mode off.
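To put rough numbers on that double-path effect: two copies of the same ambient sound with a small latency gap sum into a comb filter, while a much larger gap (tens of milliseconds) would be heard as a discrete echo. A toy calculation follows; the 5 ms gap and the 0.7 relative level are assumptions for illustration, not measured APP2 or Widex figures.

```python
import cmath
import math

delay_s = 0.005   # assumed 5 ms latency gap between the two paths
gain    = 0.7     # assumed relative level of the delayed copy

def comb_magnitude_db(freq_hz):
    """Level of the summed signal relative to the direct path alone, in dB."""
    h = 1 + gain * cmath.exp(-2j * math.pi * freq_hz * delay_s)
    return 20 * math.log10(abs(h))

# With a 5 ms gap, notches land every 1/delay = 200 Hz apart
# (100 Hz, 300 Hz, ...), heard as hollow coloration rather than an echo.
for f in (100, 200, 300, 400):
    print(f"{f:>4} Hz: {comb_magnitude_db(f):+6.1f} dB")
```

The closer the two paths are in loudness, the deeper the notches, which is one more reason to let only one device (aids or pods) carry the ambient sound.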

As for ANC, you're right that there's no logic in using ANC while wearing the pods with the aids, because the whole point is to hear ambient sounds clearly. But if you want the convenience of instant quiet, momentarily or for longer, without removing the aids, muting the aids' mics should do the trick, because there should be enough occlusion with both in already. And if it's still not quiet enough for your taste (say, in a very noisy airplane), turning on the pods' ANC to double up on the quietness may help, too.

The aids' wires causing a small leak in the pods' seal isn't necessarily bad, from the perspective of letting in some natural ambient sound where the loss isn't bad, and of reducing occlusion. The integrity and bass of the streamed sound may suffer a little from the leak, but that's still a good compromise, and you can always adjust the tonal balance of the streamed sound to partially compensate.

I’m not sure how streaming sound and stability is improved with this approach. Would you care to elaborate? I’m not sure what stability here refers to. Thanks!

In this instance, the APP2 ear tip is in contact with the RIC wire strain relief. If it was just the wire, it would be a bit more comfortable.

The APP2 latency for Adaptive Transparency seems pretty small, as I didn’t notice echoing in combination with PureSound. That said, I didn’t critically evaluate that aspect.

As for stability, I get some alternative left/right stuttering in MFi media streaming when I’m walking with my iPhone 12 Pro in my pocket.

I have the stock Widex carrying case, but I would like a smaller one that would better fit in my pocket.

It's SO frustrating that Apple doesn't let you use audiogram-based Headphone Accommodations with an iPad. I have trouble believing this is anything other than a choice not to allocate the resources to program it.

If one looks at this through the lens of their ecosystem and accommodations philosophy, it really doesn't make sense, especially considering that it shouldn't take many resources for them to bring iOS features to iPadOS.

It’s doubly frustrating that Audiogram for Transparency has to be turned back on in Settings/secondary menu of Control Center after one has connected the APP2 to a PC or iPad.

Perhaps there’s a dogma at play with respect to Health data privacy and the Audiogram data. The iPhone has a place for that (Health app, data stored locally only); the iPad does not.

I know that my version of the iPad doesn't support the Health app. I thought somebody on this forum might have said that newer iPads do support it, but I don't know if I'm remembering that correctly. I may be wrong, or the person who said it may be.

A quick Google seems to confirm that the Apple Health app is not supported on the iPad; it's only supported on the iPhone, iPod touch, and Apple Watch. I guess Apple deems those more portable devices appropriate for tracking personal health data, and the iPad not portable enough to serve as a health-tracking device.

I think Apple's take that hearing loss is a health issue is not entirely correct. Hearing loss is more like a vision issue: you use eyeglasses or contact lenses to correct your vision, just as you use hearing aids to correct hearing loss (to some degree). It's usually a static loss in the short term, even if it progresses over the long term, so there's really no need to track it in the Health app; it's not something that fluctuates day to day or week to week. Making the audiogram dependent on the Health app is a misplaced and unnecessary limitation.

Your comment about hearing loss got my attention.

For me the ability to hear is a life and death need at work and even walking my dog. I use my hearing aids with my iPhone to transmit and receive phone calls. I work in construction.

I believe hearing loss is a health issue.