I really don’t think Oticon uses human labelers to decide which sounds are more important than others. That would be a futile exercise in subjectivity anyway. For somebody walking down the street, the sound of approaching cars may be very important for their safety. For another person sitting at a bus stop glued to their phone, those same approaching cars are probably just noise they’d rather not hear, at least while they’re waiting. Yet while that same person was walking to the bus stop just minutes earlier, the sound of approaching cars was probably very important for them to hear.
That is why Oticon pairs its BrainHearing concept with the open paradigm: it’s up to the HA wearer to use their own brain to decide which sound matters most at any given moment and to focus on it or tune it out accordingly. The only near-universal assumption is that speech is more important than other sounds, at least most of the time.
As far as I can tell, the Oticon DNN whitepaper never mentions training-data labelers or any bias toward particular non-speech sounds. So there’s no real worry about sounds being labeled with the wrong biases, except for a deliberate bias toward speech. And even then, the user is given full control over how much to bias toward speech via the controls available in Genie 2.
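Just to make the idea of a “speech bias” in training more concrete, here’s a minimal, purely hypothetical sketch in Python (PyTorch) of how a generic sound-scene classifier could be trained with a class-weighted loss that penalizes speech errors more than other classes. The class names, weights, and tiny model here are all my own made-up assumptions for illustration; none of this reflects Oticon’s actual DNN, training data, or the Genie 2 controls.

```python
# Hypothetical illustration only -- not Oticon's actual DNN or training pipeline.
# Shows the generic idea of biasing a sound-scene classifier toward speech by
# giving the "speech" class a larger weight in the training loss.

import torch
import torch.nn as nn

SOUND_CLASSES = ["speech", "traffic", "wind", "music"]  # assumed toy classes
SPEECH_WEIGHT = 3.0  # assumed: speech mistakes cost 3x more than other classes

# Per-class loss weights: index 0 ("speech") gets the larger weight.
class_weights = torch.tensor([SPEECH_WEIGHT, 1.0, 1.0, 1.0])

# A tiny stand-in classifier over 64-dimensional acoustic feature vectors.
model = nn.Sequential(
    nn.Linear(64, 32),
    nn.ReLU(),
    nn.Linear(32, len(SOUND_CLASSES)),
)

criterion = nn.CrossEntropyLoss(weight=class_weights)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Random stand-in data: 256 feature vectors with random class labels.
features = torch.randn(256, 64)
labels = torch.randint(0, len(SOUND_CLASSES), (256,))

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(features)
    # Misclassifying "speech" frames is penalized more heavily than the rest,
    # which is one simple way a training setup could encode a speech bias.
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.4f}")
```

In a toy setup like this, turning the SPEECH_WEIGHT knob up or down is loosely analogous to deciding how strongly the system favors speech over everything else, which is the kind of trade-off the fitting controls in Genie 2 let you tune, obviously in a far more sophisticated way.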