At the end of the day, I was never trying to “prove” that they have different hardware, so I never needed to show any evidence for it in the first place. Nor do I try to prove it either way, because in my opinion, sharing the same hardware alone doesn’t prove that two devices are identical in performance, so it’s a moot point. But I do hold a personal opinion (which needs no proof, just my own common-sense deduction) that they use different “core” hardware.
Like you said, none of these companies share much real technical data, so nobody has any “evidence” that the digital processing algorithms are different or the same between brands, either. What little information exists is revealed only through their whitepapers. And the Philips and Oticon whitepapers describe the development of their AI implementations differently, and that’s all we have to share and deduce from.
There are many technologies used across the Demant brands that obviously seem to be shared, like frequency lowering, feedback prevention, sudden sound handling, wind handling, etc. Nobody is arguing that these are not shared, because, like you said, it’s pretty obvious: their descriptions, despite using different names, are quite clearly the same. Another way to confirm this is through the programming software. I can go into the Oticon Genie 2 software and the Philips HearSuite software, look at these peripheral technologies, and verify quite easily that they use the exact same settings.
The only argument I make is that sharing these peripheral technologies doesn’t automatically imply that their core technology (the AI engine) is the same. At least from the only descriptions of the core AI technologies available in the whitepapers, they don’t look obviously the same. Another telltale sign is that inside the Oticon Genie 2 and Philips HearSuite software, the programming options for the core engines (Oticon RealSound Intelligence and Philips SoundMap Noise Control) don’t look remotely alike.
Whether or not they share the same hardware doesn’t change the fact that THEIR descriptions of the core AI technology in their whitepapers don’t sound the same. One discusses training its AI on hundreds of thousands of noisy speech samples. The other discusses training its AI on 2 million sound scenes.