Well, I got my Airpods Pro 2 today, and I had a chance to test them tonight. I’ll share my findings below, but if you don’t care to read the very long post, I’ll summarize it right here: I didn’t find the Airpods Pro 2 to be a “hearing-aided” device that amplifies environmental sounds. I only found them to be a “streaming-aided” device, and even that is limited to specific Apple devices running an iOS version that supports loading your audiogram (or customizing for your hearing loss), and only where you have done that customization on that particular device. The customization doesn’t carry over to other Apple devices automatically.
OK, so here goes the long post:
First off, I only have an iPhone 7 Plus, and it only goes up to iOS 15.7.1 and not iOS 16. Only iPhone 8 and up can get iOS 16 support from Apple. So right off the bat, when my iPhone 7+ detected the Airpods Pro 2, it complained that my iPhone does not have the latest iOS and therefore I’d only get limited functionality from the Airpods Pro 2.
I was bummed, but I have an iPad and found that I could update it to iOS 16.1.1. So I did, but when I tried to upload my audiogram via Headphone Accommodations → Custom Audio Setup on my iPad, I was not given a choice to load my audiogram as expected per the online instructions from various websites/YouTube videos. Out of desperation, I went back to my iPhone 7+ to connect to the Airpods Pro 2, and lo and behold, it connected without complaining that my iPhone 7+ doesn’t have the necessary iOS version (16) for full Airpods Pro 2 functionality like it did before. I don’t know why, but maybe the initial connection with iOS 16.1.1 on my iPad did some magic to keep the Airpods Pro 2 happy now, even though my iPhone 7+ is still only on iOS 15.7.1.
It still doesn’t make sense to me, because I don’t think an iOS device can “write” something into the Airpods Pro 2. It’d make more sense for the Airpods Pro 2 to query the iOS device during pairing to see which version it’s running and determine how well it’s supported. Or maybe the iOS device scans for the Airpods Pro 2 and, if it determines that it doesn’t have the iOS version needed to fully support them, sends out the warning. But I’ll take a small lucky victory on this and move on, even though it doesn’t make any sense.
Anyway, moving ahead, I was surprised to find the option to upload my audiogram on my iOS 15.7.1 iPhone 7+, even though my iOS 16.1.1 iPad doesn’t offer this option. How ironic is that??? I tried both loading options, either taking a photo of my professional audiogram or using a saved pic of it, and the loading software couldn’t read much of the info on it at all. So much for claiming that it can intelligently decipher any professionally made audiogram from just a picture or a file → BS. I was thoroughly impressed when I heard that it’d be smart enough to scan any audiogram pic → should have known better… But at least it allowed me to fill in the missing data, which was virtually all of the data points, by hand, then save it. By the way, the available data points on my hand-input audiogram are pretty sparse: 125, 250, 500, 1K, 2K, 4K, and 8K Hz. I’m not sure you can add a 3K data point even if you have the data, @a13z , because 3K is not an option in the data point set I saw.
So now with an audiogram loaded, I was able to verify that Audio & Visual → Headphone Accommodations → Tune Audio For was set to Audiogram. I tried playing a YouTube song and was able to hear the amplified highs. It was nowhere near as good as my OPN’s amplified highs, but I guess it was better than nothing. BUT, that’s ONLY when I stream from my iPhone 7+.
If I stream the same YouTube song from my iPad, the music is unaided (no amplified highs). If I go through the Custom Audio Setup on my iPad to confirm a few choices of which voice or music settings I hear well enough, then I can get Balanced Tone, Vocal Range, or Brightness selected in my Tune Audio For window. But there’s still NO Audiogram choice on my iOS 16.1.1 iPad, because that option is never offered there. So the moral of the story is that even if you’re able to get Tune Audio For to use your audiogram, it’s LIMITED to that one iOS device. If you have multiple iOS devices, the audiogram has to be loaded into each of them separately so that each particular device can use it to tune your audio. In my case, my iOS 16.1.1 iPad doesn’t have a load-audiogram option, so I’m SOL with my iPad, and the streamed audio from the iPad remains “unaided,” with no audiogram applied.
By now, it’s obvious that if you want to use your Airpods Pro 2 with any non-iOS device (an Android phone, a laptop, a BT-supported TV, etc.), you will NOT get the audiogram-aided sound, because those devices don’t have the Apple iOS capability to load your audiogram. For good measure, I also connected them to my Windows 10 laptop, listened to the same YouTube test song, and verified that I got the unaided sound.
Now moving on to the Transparency mode. Because this and the Noise Cancellation are built into the H2 chip onboard the Airpods Pro 2 ear pieces, you CAN use these features without needing one of your iOS devices to be operating anywhere nearby. HOWEVER, whatever you hear from these 2 features will be unaided sound as picked up by the Airpods Pro 2 mics. This really makes the Airpods Pro 2 NO GOOD as a hearing device.
In summary, the Airpods Pro 2 is NOT a hearing device in my opinion. It’s only an audiogram-aided BT device, and ONLY for streaming sound from an iOS device that supports the loading of an audiogram. My iOS 16.1.1 iPad does not support this. I don’t have a Mac to test on, so I’m not sure whether macOS supports loading an audiogram. I hear that it’s supposed to, but I’m not sure.
For the Airpods Pro 2 to really be a bona-fide hearing device, no matter how crude, the audiogram-aided information would have to be uploaded into the ear pieces themselves in order to make the aided amplification device-independent. Otherwise, the software approach that Apple takes, where Tune Audio For applies your audiogram only on the streaming device itself, makes the solution totally device-dependent.
And even if you ignore the device-dependent aided streaming and would accept it as a “hearing” device as long as the Transparency mode is aided, so that you get loss-amplified environmental sounds, it doesn’t even do that. So all those hearing care providers like Dr. Cliff, and all the other audis who made YouTube videos touting that the Airpods Pro can be used as a “hearing” device by mild-to-moderate-loss users, are misleading people. The Airpods Pro is NOT a “hearing” aid; it’s an iOS-device-by-iOS-device-dependent “streaming” aid. Only the streaming content from a qualified iOS device (and maybe also a macOS device) gets aided. The environmental sounds in the Transparency mode NEVER EVER get “aided.”
I’d love to be proven wrong about this conclusion. If anyone can show me that I’m wrong and show me how to get aided (loss-amplified) environmental sounds, at least via the Transparency mode, please share with us here. This is one of the very few times that I’d love to be wrong… Otherwise, I’ll probably return my Airpods Pro 2 in a few weeks.