A Meta-Analysis of High Resolution Audio Perceptual Evaluation (or How We Learned to Stop Worrying and Love Hi-Res)
Nov 20, 2016 at 12:14 AM Post #18 of 22
http://www.prosoundnetwork.com/business/aes-hi-res-audio-is-perceptible/46281?sf30822636=1

Let's see if we can keep this polite, even in this end of the forum.

 
 
Several already-discounted papers appear in the meta-analysis, including papers from Meridian, who are hardly agenda-free, and the first four citations are papers written before hi-res even existed. Pras is dubious at best, since some of its "successes" were actually failures (more wrong than right by chance, yet counted as significant because two-tailed tests were used). Yoshikawa does not even test music. Kanetada is a preference test, not a discrimination test, does not mention level matching, and uses a supertweeter (known to be problematic due to IMD). Reiss has been peddling the same theme (too many low ABX scores to be plausible, so the tests must be flawed) on Stereophile for several years. And the Hawksford paper's results with musical samples (16/17) are so atypical as to be highly dubious.
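To illustrate the two-tailed point with made-up numbers (not the actual Pras data): a score well below chance produces a small two-tailed p-value and so gets counted as "significant", even though a one-tailed test of the only hypothesis that matters here (better than chance) says the opposite. A minimal sketch in Python, assuming scipy is available:

```python
# Hypothetical numbers, purely for illustration (not the Pras results):
# an ABX-style score *below* chance still comes out "significant" two-tailed.
from scipy.stats import binomtest  # requires scipy >= 1.7

correct, trials = 28, 80  # 35% correct, i.e. worse than guessing

two_sided = binomtest(correct, trials, p=0.5, alternative='two-sided')
one_sided = binomtest(correct, trials, p=0.5, alternative='greater')

print(f"two-sided p = {two_sided.pvalue:.3f}")         # ~0.01, nominally "significant"
print(f"better-than-chance p = {one_sided.pvalue:.3f}")  # close to 1, clearly not
```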
 
Nov 20, 2016 at 12:58 AM Post #19 of 22
   
 

 
I'd suggest that if a meta-analysis, dubious or not, is the best available source attempting to verify that an audible difference exists, save your money, folks. There, my conscience is clear.
 
Nov 20, 2016 at 8:36 AM Post #20 of 22
AES Paper: Hires is Audible
 
Unfortunately, there is not currently, nor is there likely to be in the foreseeable future, definitive proof that hi-res is audible. Scientists are subject to corruption just as much as any other human beings. Over the years, many scientists have published papers concluding, for example, that smoking is not injurious to health, that lead in gasoline is not harmful, that man-made climate change does not exist, and so on. The consumer audio equipment and content industry has an existential interest in hi-res and will commission papers to refute contrary evidence, and even accepted fact, in order to maintain its existence, just as the tobacco, fossil fuel and other industries have.
 
Unfortunately, there is an added difficulty with hi-res: it's not as obviously black-and-white an issue as, say, smoking tobacco, which has always been injurious to health regardless of any marketing or even evidence to the contrary. In some cases the difference hi-res makes is, or can be, patently obvious. An example: the sound of an electric guitar is partially dependent on IMD, i.e. tones generated by a guitar amplifier in the audible band from ultrasonic frequencies. We don't need to actually record those ultrasonic frequencies, just the audible-band IMD products (tones), which of course can be captured with a mic at 44.1kHz. However, there are a number of software guitar amps in common use which model the response of actual guitar amps, and to do so accurately requires those ultrasonic frequencies. In this case a sample rate of, say, 96kHz is obviously distinguishable from 44.1kHz, because a 44.1kHz session cannot represent ultrasonic frequencies and will therefore not contain those essential resultant tones in the audible band.

Other examples include modelled compressors and limiters and some soft-synths, although in the latter case the differences are often down to programming priorities rather than ultrasonic IMD components, and programming priorities are also responsible for differences in other areas of DSP, such as filters. Generally, all this adds up to relatively little (though potentially audible) difference as far as consumers are concerned, but for those actually creating content the differences are more obvious, because we can easily compare the performance of various digital processors at different sample rates, and these days that means not just an elite few but a significant number of the general population.
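To make the guitar-amp example concrete, here's a minimal sketch (not any particular plugin's code): a crude asymmetric tanh waveshaper stands in for a modelled amp, it is fed two ultrasonic tones at 25kHz and 31kHz, and we look for the 6kHz difference tone that intermodulation drops into the audible band. At 96kHz the product is clearly there; at 44.1kHz the ultrasonic tones can't exist in the signal in the first place, so neither can the product. The tone frequencies, the drive amount and the waveshaper are all arbitrary choices for the illustration.

```python
import numpy as np

def difference_tone_level(fs, f1=25_000.0, f2=31_000.0, dur=1.0):
    """Drive a crude asymmetric waveshaper (standing in for a modelled amp)
    with two tones and report the level of the f2 - f1 difference tone
    (6 kHz here) that intermodulation puts into the audible band."""
    t = np.arange(int(fs * dur)) / fs
    # A bandlimited source can only contain partials below Nyquist, so at
    # 44.1 kHz these two ultrasonic tones never make it into the signal at all.
    x = np.zeros_like(t)
    for f in (f1, f2):
        if f < fs / 2:
            x += 0.5 * np.sin(2 * np.pi * f * t)
    y = np.tanh(3.0 * x + 0.5)  # the bias term gives even-order (difference-tone) products
    win = np.hanning(len(y))
    spec = np.abs(np.fft.rfft(y * win)) / np.sum(win)
    freqs = np.fft.rfftfreq(len(y), 1.0 / fs)
    idx = np.argmin(np.abs(freqs - (f2 - f1)))
    return 20 * np.log10(spec[idx] + 1e-12)

for fs in (96_000, 44_100):
    print(f"{fs} Hz session: 6 kHz IMD product = {difference_tone_level(fs):.1f} dB")
```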
 
Having said all this, it would appear that there could indeed have been a discernible difference; however, the above describes the state of play a decade or so ago. Since then, computing power has increased and DSP programming has advanced. It's now practical, and indeed common, for DSP plugins to up-sample locally where there is a need (in modelling plugins, for example), so the sound-quality reason for running sessions at 96kHz rather than 44.1kHz no longer exists, and even the content creators can no longer tell a difference.
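For anyone curious what "up-sample locally" looks like in practice, here's a minimal sketch, again using a simple tanh shaper as a stand-in for a real modelled processor and scipy's resample_poly for the rate conversion (real plugins use their own filters). The nonlinearity runs at 4x the session rate and the result is filtered back down, so the distortion products it generates above Nyquist don't alias back into the audible band. That is the job a plugin's internal oversampling does, without the whole session having to run at 96kHz.

```python
import numpy as np
from scipy.signal import resample_poly

FS = 44_100
t = np.arange(FS) / FS
x = 0.8 * np.sin(2 * np.pi * 10_000.0 * t)  # a 10 kHz tone: its 3rd harmonic sits at 30 kHz

def shaper(sig):
    return np.tanh(3.0 * sig)  # stand-in nonlinearity, as before

# Naive: nonlinearity at the session rate; the 30 kHz harmonic can't be
# represented and aliases down to 44.1 kHz - 30 kHz = 14.1 kHz (audible junk).
naive = shaper(x)

# "Local" oversampling: up-sample 4x, apply the nonlinearity, filter back down.
oversampled = resample_poly(shaper(resample_poly(x, 4, 1)), 1, 4)

def level_at(sig, freq):
    win = np.hanning(len(sig))
    spec = np.abs(np.fft.rfft(sig * win)) / np.sum(win)
    bins = np.fft.rfftfreq(len(sig), 1.0 / FS)
    return 20 * np.log10(spec[np.argmin(np.abs(bins - freq))] + 1e-12)

print("14.1 kHz alias, naive       :", round(level_at(naive, 14_100.0), 1), "dB")
print("14.1 kHz alias, oversampled :", round(level_at(oversampled, 14_100.0), 1), "dB")
```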
 
G
 
Nov 20, 2016 at 12:45 PM Post #22 of 22
I moved the new topic here as it's the same subject. (Is the paper still freely available somewhere? That would help those who haven't seen it.)
As in any meta-analysis, the choice of which studies to include is of major significance and is hardly controlled in a purely objective manner; Reiss himself warned about this. Between the studies that have been questioned, the different test methods and test objectives, the varying numbers of participants, and so on, it's a giant mess. Can we give the same weight to each individual participant when even the sound being tested was different?
And given how close to guessing all the results are anyway, even when some of the included studies had an obvious agenda IMO, I have a hard time finding much significance in it. Just because it's a well-done statistical job doesn't remove the uncertainty that comes from picking and mixing studies that are too different in nature and not unanimously accepted.
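The flip side of the pooling is worth spelling out with made-up numbers (not the actual figures from the paper): once enough trials from enough different tests are lumped together, a hit rate barely above coin-flipping becomes statistically significant even though, per listener, it amounts to practically nothing. A quick sketch, assuming scipy:

```python
# Hypothetical pooled totals, purely for illustration (not the paper's data).
from scipy.stats import binomtest

correct, trials = 6_500, 12_500  # 52% correct across everything pooled together
result = binomtest(correct, trials, p=0.5, alternative='two-sided')
print(f"hit rate = {correct / trials:.1%}, two-sided p = {result.pvalue:.2g}")
# p lands far below 0.05, yet 52% vs 50% is a barely-there effect in practice.
```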
 
