I don’t want to tell you that you are wrong here, but that statement above is clearly flawed, as you’d understand if you had any idea of the design ethos of the major hearing aid companies.
Dealing with your second point illustrates why you made the first statement. If I were to put a pair on a person with normal hearing, they’d actually say how great the sound was. Everything would be clearer and louder, and apart from the risk of fatiguing their auditory system, they would feel like they had ‘super hearing’. That person has a NORMAL dynamic range of around 120dB, so loudness growth would be normal.
If I were to put the same hearing aids on someone with a 50-60dB sensorineural loss, they would note an improvement, BUT I COULDN’T give them a 120dB dynamic range, as the aids would theoretically need to top out at 180dB - causing death, or at least the total destruction of everything in the vicinity of their ears. Practically, the aids top out at 143dB, which is still FAR too loud for such a moderate loss, so we set a ceiling of maybe 105/110dB - an effective dynamic range of 60dB at most, or HALF of what is available to your unimpaired listener.
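The arithmetic here is simple enough to sketch. This is purely illustrative, not a clinical fitting formula: the threshold and ceiling figures are the rough round numbers used above, and the function name is mine.

```python
# Illustrative sketch of the dynamic-range arithmetic above.
# All dB figures are the rough round numbers from the discussion,
# not clinical values for any real fitting.

def effective_dynamic_range(threshold_db: float, ceiling_db: float) -> float:
    """Usable dynamic range: loudest tolerable output minus hearing threshold."""
    return ceiling_db - threshold_db

# Normal hearing: threshold near 0 dB, discomfort around 120 dB.
normal = effective_dynamic_range(0, 120)     # 120 dB

# ~55 dB sensorineural loss, with aid output capped near 110 dB
# (well below the theoretical 180 dB a full 120 dB range would require).
impaired = effective_dynamic_range(55, 110)  # 55 dB, roughly half of normal

print(normal, impaired)
```

The point the numbers make: the aided listener’s entire world of sound has to be squeezed into roughly half the dynamic range, which is why compression, not raw gain, dominates hearing aid design.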
Music is a strawman - it’s not speech. The principal design consideration in hearing aids is amplifying speech better; more recently, speech in background noise has become the holy grail. If your hearing loss and hearing aid design allow for an automatic music program then great, but the devices aren’t bionic and can’t know what you want to hear: they are making a best guess at that every few microseconds. Sometimes they get it wrong.
Smartphone integration is a very small (though growing) part of the functionality of modern aids. It features in the sales pitches because it’s something people can relate to; it appears the American-based manufacturers have spent a lot of R&D time and effort on it, while the European ones seem more concerned with higher bit-rates and sound quality in their products.