The number of test subjects by itself doesn’t mean anything. What determines how many subjects you need to draw a valid conclusion is the amount of scatter (variation, error) within the test group and the control group, and how far apart the two groups’ mean outcomes are relative to that scatter.
In the extreme, you could have five subjects in each group (let’s say mice to be nice). You give the drug to five mice. They all die. You give a placebo to five other mice. They all live. There’s a high probability that the drug is highly toxic to mice. You probably don’t need 150 mice in each group to prove that. If, instead, two die in the drug group and one dies in the control group, now you’re in a situation where you need more numbers.
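You can check that intuition with Fisher’s exact test on the mouse example. This is just an illustrative sketch (the hypothetical mouse numbers are from the example above, not from any real study), computing the two-sided p-value by hand from the hypergeometric distribution:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table
    [[a, b], [c, d]]: deaths/survivors in drug vs. control."""
    n = a + b + c + d          # total mice
    k_total = a + c            # total deaths across both groups
    row1 = a + b               # drug-group size
    denom = comb(n, row1)
    # probability of every possible table with the same margins
    k_lo = max(0, row1 - (n - k_total))
    k_hi = min(row1, k_total)
    probs = [comb(k_total, k) * comb(n - k_total, row1 - k) / denom
             for k in range(k_lo, k_hi + 1)]
    p_obs = comb(k_total, a) * comb(n - k_total, row1 - a) / denom
    # two-sided: sum probabilities of tables at least as extreme as observed
    return sum(p for p in probs if p <= p_obs + 1e-12)

# 5 of 5 die on the drug, 0 of 5 on placebo: clear separation
print(round(fisher_exact_p(5, 0, 0, 5), 4))   # 0.0079 -- significant with only 10 mice
# 2 of 5 die on the drug, 1 of 5 on placebo: outcomes overlap
print(round(fisher_exact_p(2, 3, 1, 4), 4))   # 1.0 -- no evidence at all; need more mice
```

With total separation, ten mice are enough for p < 0.01; with overlapping outcomes, the same ten mice tell you nothing.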
If ReSound has good statisticians, they made the group large enough to get statistically valid results, and there should be a p-value somewhere in their whitepaper. A p-value of 0.05 means that even if there were no real effect, results at least this extreme would show up about one time in 20 from random fluctuation alone. A p-value of 0.01 means only about one time in 100.
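That “one time in 20” reading can be demonstrated by simulation. This sketch (my own illustration, not anything from the whitepaper) runs many experiments where there is genuinely no effect, both groups drawn from the same distribution, and counts how often a standard two-sample test still reports p < 0.05:

```python
import random
from math import sqrt, erf

def two_sample_p(x, y):
    """Two-sided p-value from a z-test on the difference in means
    (normal approximation; reasonable for groups of ~50)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    z = (mx - my) / sqrt(vx / nx + vy / ny)
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

rng = random.Random(42)
trials = 2000
# Both groups come from the identical distribution: any "significant"
# result is pure random fluctuation.
false_positives = sum(
    two_sample_p([rng.gauss(0, 1) for _ in range(50)],
                 [rng.gauss(0, 1) for _ in range(50)]) < 0.05
    for _ in range(trials)
)
print(false_positives / trials)  # close to 0.05, by construction
```

Roughly 5% of the no-effect experiments come out “significant” at the 0.05 level, which is exactly what the p-value threshold promises.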
But you don’t just pick a number like 150 and say, “I don’t think that was large enough.” That’s not statistics. It’s just a gut feeling.