In another thread, I mentioned that courses related to music and HA’s are available to audit for free at Audiology Online - you don’t have to be a hearing professional to sign up. The specific course that I referred to, supposedly comparing the ability of different HA brands to reproduce music, is very limited in scope and (naturally) touts the ability of Signia HA’s to handle music relative to other brands, since the authors represent that company. But the text summary of the course has some interesting quotes concerning HA’s and music, plus one graph, copied at the end, that “says it all.”
We fit hearing aids to improve audibility of sounds, such as speech and music. However, hearing aids are often programmed for listening to speech – and we should remind ourselves that speech and music can be quite different. Speech is produced by the human vocal tract and has well-defined acoustic properties that are fairly consistent across languages (Byrne et al., 1994) and that have predictable variations across genders, ages, and vocal effort levels (Byrne et al., 1994; Olsen, 1998). Average speech levels typically vary between 55 and 66 dBA (Olsen, 1998) with a dynamic range of 20 – 30 dB (Holube, Fredelake, Vlaming, & Kollmeier, 2010). Speech tends to have more low-frequency energy and a fundamental frequency as low as 100 Hz and 160 Hz for males and females, respectively (Cornelisse, Gagne, & Seewald, 1991).
Music, in contrast, can originate from a variety of sources, such as voices and instruments. Music has the potential for a much larger dynamic range, broader frequency spectrum, and higher overall level (Chasin & Hockley, 2014). We can illustrate these differences between speech and music using displays of the energy in each type of signal across frequency. The speech range is sometimes called a “speech banana”, as shown in Figure 1. This is compared to the range of energy in music, with both speech and music overlaid on the dynamic range of the human auditory system. The acoustic differences between speech and music are large, and may pose challenges for designing hearing aid programs that work as well for music as they do for speech.
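(An aside to put the quoted dynamic-range figures in perspective: dB differences map to linear power ratios via the standard 10·log10 definition. A quick sketch, assuming only the 20–30 dB speech figure quoted above - the exact span for music isn’t given in the course text, just described as “much larger”:)

```python
def db_to_power_ratio(db: float) -> float:
    """Convert a level difference in dB to a linear power ratio: 10^(dB/10)."""
    return 10 ** (db / 10)

# The quoted speech dynamic range of 20-30 dB means the loudest speech
# components carry roughly 100x to 1000x the power of the softest ones.
print(db_to_power_ratio(20))  # 100.0
print(db_to_power_ratio(30))  # 1000.0

# Every additional 10 dB of dynamic range is another factor of 10 in power,
# which is why music's "much larger" range is so hard to compress cleanly.
print(db_to_power_ratio(40) / db_to_power_ratio(30))  # 10.0
```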
and:
Many hearing aid manufacturers have incorporated music programs, designed to improve the sound quality of music, into their products. While the parameters of each music program differ between manufacturers, common features of a music program include slower compression, less noise reduction, reduced directionality, and reduced feedback cancellation, compared to programs intended for use with speech (Moore, 2016). At least one study has shown that different models provide different output levels for music, but also showed that individual preferred listening levels vary considerably across listeners (Croghan, Swanberg, Anderson, & Arehart, 2016; Galster, Rodemerk, & Fitz, 2014).
and finally:
Does genre matter? Our results show that sound quality differences between hearing aids may be more apparent for some genres of music compared to other genres. This finding is consistent with previous results from simulated hearing aids tested by Arehart et al. (2011). Clinically, this may mean that the most suitable hearing aid, and whether it needs or has a music program, may depend on the type of music that the patient would like to enjoy. In this study, the Primax was rated, in all genres, as having the highest sound quality more frequently than any other hearing aid. Some genres seemed to interact more often with hearing aid model and program. For example, there were only a few noticeable differences for classical music, while the jazz sample elicited many noticeable differences. The individual patient’s preference for music type is an important consideration.
and the graph comparing music and speech listening volumes and frequency ranges:
Figure 1. Frequency-intensity range of speech and music within the audibility of the human auditory system. Adapted from Limb (2011).