I would suggest that it is not quite as fine-grained as that.
Some patients with hearing loss still have normal scores on the QuickSIN, which lets you say "you don't really need much help in noise, so feel free to get this cheaper hearing aid." Re-testing after the fitting doesn't add much value because of the floor effect: their scores are already at the bottom (low is good on the QuickSIN), so you won't see improvement from there. Others do so poorly on the QuickSIN under headphones that you know in advance that even the most expensive hearing aid is not going to be enough to support their hearing in noise, and you need to talk about remote mics, realistic expectations, etc. Both of those situations are useful information for fitting decisions.
Some people score somewhere in the middle, but Spud isn't wrong: the test is a little too artificial to map precisely onto real-world situations. Under headphones, the QuickSIN does NOT provide the cues we use for spatial hearing, so it doesn't replicate the soundscape Spud is imagining. It's also variable enough that it's hard to say "Okay, you need exactly a 6 dB boost in all situations, go with the More 2" (if we even trusted the manufacturers' assessments of the real-world SNR improvement of their devices, which we do not).

For aided testing, you can run it with spatial separation in the soundfield, but you're still typically looking at only one babble source and one target speech source (it's common for audiologists to have a two-speaker soundfield, but usually only researchers have a calibrated array $$$$$$). And that setup of two equidistant speech sources might not be a great test of a particular hearing aid's function. The Opn/More strategy, for example, could be expected to detect a near-field speech source from the rear (if you're running front-to-back) and turn off, which is not representative of its real-world function. So . . . I also agree with Spud's instinct that when a test being used to validate hearing aid function doesn't have much construct validity, it starts to feel more like a marketing gimmick. But as Louie said, that's not how they're using it.
So in sum, results at either end of the QuickSIN give us very clear information, and results in the middle are less clear. (Similar to the WRS, come to think of it, which is also less variable for edge results than for middle results.) But it's still a very useful test, and I'm surprised people are saying they've never had it done, because I thought it was pretty standard. Where I am, it's standard practice even at Costco, but I've also said before that I think Canadian Costco is better than American Costco when it comes to the standard of hearing care.