How about ReSound? You have quite a few other brands that rank lower in sales.
Rather than having to dig through the white paper, it would be interesting to have a summary of how the various recordings are scored, since that seems subjective. Are there multiple scorers? It also seems that if multiple evaluations are done in a row, scoring could be affected by “scorer fatigue,” or even the converse. For example, if one listened to a string of particularly bad recordings from several different brands, suddenly hearing recordings from a better-than-average HA might lead the scorer(s) to give that HA higher marks than it would otherwise receive if it were reviewed amongst a string of other better-than-average HAs.
Edit_Update: I see in the YouTube video in the above post that scoring is done according to the following methodology (see transcript quote below), which does not say whether scoring is totally automated or whether any subjective human judgement comes into play - perhaps in the “leveraging” or the “averaging”?
(3:06) “… We use metrics from the hearing science literature that leverage models of sensory neural hearing loss to predict the speech intelligibility of each recording. We average those metrics separately across quiet and loud sound scenes and transform them into our speech perception benefit metrics. You can explore our content today on hearingtracker.com where you can audition hearing aids and compare how much they improve speech perception.”
Perhaps in critiquing the scoring methodology, one might go to the other extreme and say that no human judgement of actual speech quality was involved at all, only a machine-based set of algorithms.
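To make that “machine-based” point concrete, here is a rough Python sketch of what a fully automated pipeline along the lines of the transcript could look like. Everything in it is hypothetical - the field names, the 0-100 rescaling, and the example numbers are my own guesses - and it is not HearingTracker’s actual code. The only steps taken from the video are “average separately across quiet and loud sound scenes” and “transform into benefit metrics”; the per-recording intelligibility prediction itself is assumed to come from some published metric and is treated here as an already-computed number.

```python
# Illustrative sketch only -- NOT HearingTracker's implementation.
# Assumes each recording has already been scored by some published
# model-based intelligibility metric ("metrics from the hearing science
# literature"); this just shows the later averaging/transform steps.

from statistics import mean

def speech_perception_benefit(recordings):
    """recordings: list of dicts like
    {"scene": "quiet" | "loud", "predicted_intelligibility": 0.0-1.0}."""
    quiet = [r["predicted_intelligibility"] for r in recordings if r["scene"] == "quiet"]
    loud  = [r["predicted_intelligibility"] for r in recordings if r["scene"] == "loud"]
    # Average the metric separately across quiet and loud sound scenes,
    # then "transform" each average into a benefit metric
    # (a guessed 0-100 rescaling here, purely for illustration).
    return {
        "quiet_benefit": round(100 * mean(quiet), 1) if quiet else None,
        "loud_benefit":  round(100 * mean(loud), 1) if loud else None,
    }

# Example: four hypothetical recordings for one hearing aid
print(speech_perception_benefit([
    {"scene": "quiet", "predicted_intelligibility": 0.82},
    {"scene": "quiet", "predicted_intelligibility": 0.78},
    {"scene": "loud",  "predicted_intelligibility": 0.55},
    {"scene": "loud",  "predicted_intelligibility": 0.61},
]))
# -> {'quiet_benefit': 80.0, 'loud_benefit': 58.0}
```

If the real pipeline looks anything like this, then no listener ever rates the recordings at all; any subjectivity would live inside the choice of metric and the transform, not in a human scorer, which is exactly the trade-off being questioned above.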