KeithEmo
Member of the Trade: Emotiva
- Joined: Aug 13, 2014
- Posts: 1,698
- Likes: 868
All this discussion of ABX, Meyer and Moran, later experiments, etc. is all well and good, but given the first-order facts I can't see why the burden of proof should not fall squarely on those claiming that high-res audio sounds better.
If 24-bit, compared to 16-bit, offers only an increase in dynamic range that is of no practical use, lowering a noise floor that is already very low and masked by any musical content, why should there be any theoretical advantage in sound reproduction?
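To put rough numbers on that point, here is a minimal sketch (my illustration, using the standard textbook SNR approximation for ideal PCM, not anything from this thread):

```python
# Theoretical dynamic range of ideal PCM quantization:
# SNR ~= 6.02 * bits + 1.76 dB for a full-scale sine wave.
def pcm_dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(f"{bits}-bit: ~{pcm_dynamic_range_db(bits):.0f} dB")
# 16-bit: ~98 dB, 24-bit: ~146 dB. The extra ~48 dB sits far below the
# acoustic noise floor of any real listening room at sane playback levels.
```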
Secondly, 100 years of testing have shown that the human hearing range lies within a 20 Hz to 20 kHz band. So why should there be any theoretical advantage in extending the range for music playback? I understand the arguments - none of them convincing or proven since the advent of digital oversampling - that a steep cut-off may result in errors below the threshold. Surely, if that ever was an issue, it was resolved in the early 80s? I know of no peer-reviewed papers proving otherwise.
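On the steep cut-off point, a back-of-the-envelope sketch (my own numbers, assuming the commonly cited 20 kHz audible band edge) of why oversampling is generally taken to have retired that problem:

```python
# How oversampling relaxes the analog reconstruction filter that the
# early-80s "steep cut-off" concern was about (illustrative numbers).
AUDIBLE_EDGE_HZ = 20_000  # top of the commonly cited 20 Hz - 20 kHz range

def transition_band_hz(sample_rate_hz: float, oversampling: int = 1) -> float:
    """Width available for the analog filter to roll off: from the edge of
    the audible band up to the Nyquist frequency of the DAC's output rate."""
    nyquist_hz = sample_rate_hz * oversampling / 2
    return nyquist_hz - AUDIBLE_EDGE_HZ

print(transition_band_hz(44_100))     # 2050.0   -> brutal analog filter (early-80s players)
print(transition_band_hz(44_100, 8))  # 156400.0 -> gentle analog filter (oversampling DAC)
```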
Lastly, if we look at progress in hi-res video technology, e.g. OLED displays, the extra resolution there makes a visible difference: smaller pixels mean a sharper picture. This is not the same as audio, where a higher bit depth does not increase the accuracy of the sound wave within the band-limited frequency range - it only lowers the quantization noise floor.
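As a concrete illustration of that difference (again my sketch, not from the post): quantizing the same band-limited signal at 16 and 24 bits changes only the level of the error, not the shape of the waveform:

```python
# Quantize a full-scale 1 kHz sine at 16 and 24 bits and measure the error.
import numpy as np

fs = 44_100
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 1_000 * t)  # 1 kHz tone at 0 dBFS

def quantize(x: np.ndarray, bits: int) -> np.ndarray:
    """Round to the nearest step of a signed fixed-point grid (-1.0 .. +1.0)."""
    step = 2.0 ** (1 - bits)
    return np.round(x / step) * step

for bits in (16, 24):
    err = quantize(signal, bits) - signal
    ratio_db = 20 * np.log10(np.sqrt(np.mean(err**2)) / np.sqrt(np.mean(signal**2)))
    print(f"{bits}-bit error: {ratio_db:.0f} dB relative to the signal")
# Roughly -98 dB and -146 dB: a lower noise floor, not a more accurate
# waveform within the audible band.
```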
As an analogy with the 44.1 vs 96 or 192 debate, what would everyone think if TV manufacturers started marketing sets claiming higher-quality pictures because they extended the frequency response deep into the infrared or ultraviolet? It seems that only in audiophile land are such claims considered credible.
But your analogies aren't actually correct.
When HD TVs (1920 x 1080) first appeared, it was NOT universally agreed that the extra resolution actually made a significant difference for most customers. Many people in fact argued that there was very little HD content available, and that using "full 1080p HD resolution" on a screen smaller than 30" was a total waste anyway - because nobody could see the difference between 720p and 1080p on a screen that small. However, today, almost every TV of any size is full 1080p HD, and we're having the same argument about 4k.
The problem with your argument is that the basic premise is limited. Yes, if there were absolutely reliable proof that frequency response above 20 kHz absolutely, positively produces no audible difference, then it would be unnecessary (although I'm still not convinced that having a "safety margin" above the bare minimum isn't a good idea). However, the proof you're offering isn't at all "absolute" or "conclusive". In fact, most of those tests were conducted with inadequately sized sample groups, using obsolete equipment, and frequently with dubious test methodology. The fact that twenty or thirty people, using 1980s-vintage technology and 1980s-vintage recordings, failed to hear a difference is NOT compelling proof that the difference doesn't exist - at least not to me.

And, even if we were to prove, with properly sized and properly run tests, that the difference isn't audible with the best equipment available today, that wouldn't constitute evidence about whether a difference might be audible with the equipment available in twenty years. I simply don't believe that we understand 100.0% of how human hearing works - especially since human hearing takes place partly in the brain, and we certainly don't understand anywhere near 100% of how THAT works.
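For what it's worth, the sample-size complaint is easy to quantify. Here's a sketch (my own illustrative numbers, not from any of the cited tests) of how often a 30-trial ABX test would actually catch a listener who is genuinely right 60% of the time:

```python
# Exact binomial power of a small ABX test against a weak but real effect.
from math import comb

def tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

n, alpha = 30, 0.05
# Fewest correct answers that reaches significance under pure guessing (p = 0.5):
k_crit = next(k for k in range(n + 1) if tail(n, k, 0.5) <= alpha)
power = tail(n, k_crit, 0.6)  # a listener who really does hear it 60% of the time
print(k_crit, round(power, 2))  # 20, ~0.29: the test misses this effect ~71% of the time
```

In other words, a null result from a test that small is weak evidence against a small real effect, which is exactly the objection being raised here.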
(The reality is that several tests run in recent times tend to suggest that frequency response above 20 kHz can in fact produce audible effects - in different ways and with different implications. The recent AES paper seems to show that a small sample of individuals was able to "beat the odds" in telling whether a given sample was high resolution or not. Another test I recall reading about found that, while the participants didn't report an audible difference with band-limited content, the perceived location of instruments in the soundstage shifted with the band-limited version - which is in fact "an audible effect". Note that I don't consider either of those results "compelling" either; but, weighed against tests run decades ago with the audio equipment then current, I think they raise enough questions to make it unreasonable to fall back on those outdated results as "absolute facts" without confirmation.)