pinnahertz
Headphoneus Supremus
- Joined: Mar 11, 2016
- Posts: 2,072
- Likes: 739
Okay I'll write in everything needed to straighten you out just below this sentence.
......
Thanks. I've found enlightenment.
Don't you keep a list of the 80s-vintage ADCs you thought were horrible and that messed up so many early digital albums?
... So then, jumping to MQA: why the idea that its time-smearing reduction results in a whole new big step forward in sound quality? Chances are very few people have the gear, ears, training, or listening area to show it at all. And when it does show, it's scarcely better than a coin toss.
Hi all, I guess you may be familiar with this AES paper? It's freely available to download
http://www.aes.org/tmpFiles/elib/20170128/18296.pdf
Here is the conclusion:
In summary, these results imply that, though the effect is perhaps small and difficult to detect, the perceived fidelity of an audio recording and playback chain is affected by operating beyond conventional consumer oriented levels. Furthermore, though the causes are still unknown, this perceived effect can be confirmed with a variety of statistical approaches and it can be greatly improved through training.
Oh brother. THAT unfortunate paper again. Do you understand what a "meta-analysis" is? The conclusions might reflect a massive amount of statistics, but there are several huge problems, not the least of which is that the very first study cited in the paper was done two years before the introduction of the CD. Where was the Hi-Res system then? But most significantly, no study revealed one single individual who could reliably and repeatably identify Hi-res in a comparison. The results showed that over the massed amount of data, Hi-res was picked at a rate just 3% better than random guessing. Significant, you say? Or was the entire paper biased? 18 papers were selected out of the 80 available to compile it.
Was the author biased? He held these positions:
• Co-Chair of the Audio Engineering Society (AES) Technical Committee on High-Resolution Audio
• General Chair of the 31st AES Conference; New Directions in High Resolution Audio, 2007
How do you think he selected the tests to compile? And even if he cherry-picked the ones that kinda-sorta supported the audibility of Hi-res, the best...the absolute BEST he could come up with is 3% better than random guessing, with over 12K tests.
The paper has been largely discredited elsewhere, no need for me to go on.
So I'll assume that it is possible to improve on redbook for some people some of the time.
And that's the assumption the author, and those supporting and advancing MQA, would want you to arrive at. But look carefully... very, very carefully. Nobody got it right reliably or repeatedly. Nobody. That means everybody got it wrong nearly half of the time, and 47% were as good as guessing or worse. If incorrectly selecting Hi-res over Redbook meant your immediate death, you'd stand a much, much better chance of survival playing Russian Roulette (5:1 odds of surviving a single pull) than of correctly picking HRA (53:47).
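To put numbers on the 53:47 split above: a quick sketch (my own arithmetic, not from the paper) of how a 3-point edge over guessing can be statistically significant across roughly 12K pooled trials, yet still leave an individual listener unlikely to pass an ordinary ABX test. The 16-trial, 12-correct ABX criterion is an assumed, conventional example, not something the meta-analysis used.

```python
from math import comb, sqrt
from statistics import NormalDist

n = 12_000      # approximate pooled trial count cited in the thread
p_obs = 0.53    # hi-res picked ~3% above the 50% chance rate
p_null = 0.50

# One-sided z-test against pure guessing (normal approximation to binomial):
# a huge sample makes even a tiny edge "significant".
z = (p_obs - p_null) / sqrt(p_null * (1 - p_null) / n)
p_value = 1 - NormalDist().cdf(z)
print(f"z = {z:.2f}, one-sided p = {p_value:.1e}")

# The per-listener picture: probability that someone with a true 53% hit
# rate passes an assumed 16-trial ABX requiring 12 or more correct.
pass_prob = sum(comb(16, k) * 0.53**k * 0.47**(16 - k) for k in range(12, 17))
print(f"P(pass a 16-trial ABX at a true 53% hit rate) = {pass_prob:.3f}")
```

The pooled test is wildly significant, but the same 53% listener fails a standard individual ABX the vast majority of the time, which is exactly the "nobody reliably repeatable" point.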
And from that we're all supposed to jump on board and assume any version of Hi-res audio is clearly, reliably audibly better to everyone.
No, I don't think so.
For me, the most telling piece of evidence is that when demoing, the MQA people never compare the original track against the MQA encoded track. They only ever play the MQA track.
It seems that you require absolute scientific proof for anything to be true (or not).
Now that a published meta-analysis is held up as such scientific proof, the paper is immediately derided and the author's integrity called into question. If this paper was subject to peer review, and given the author's position, it seems highly unlikely it would be published if it were biased or lacking integrity, no? Or maybe you have some data on the peer reviewers?
Even if it is a meta-analysis, that doesn't make its conclusions unsupported. The author is allowed to choose the data for the meta-analysis according to those studies that fit his protocol, is he not? That is the purpose of a meta-analysis: so that you can use a series of data from different sources.
The data from the stats (that p-value stuff) confirm that some people can tell a difference between "red book" and "hi-res" most of the time (when trained).
So at the very worst you have to conclude more work needs to be done to confirm it one way or the other, and at best the jury is still out...
It seems that you require absolute scientific proof for anything to be true (or not).
If a technology is being advanced by one manufacturer to the point of widespread adoption, yes, scientific proof would be required to justify its use and associated expense. There are far too many non-technical factors in its advancement, and no manufacturer is totally magnanimous. There should be a clear and definite advantage to those paying for the technology (that would be the buyer). If that advantage is minimal, not universally detectable, and at best vague, what we have is a concept heavily weighted toward the manufacturer. The only way to detect either condition is with scientific testing. And we have none of that now.
Now that a published meta-analysis is held up as such scientific proof, the paper is immediately derided and the author's integrity called into question. If this paper was subject to peer review, and given the author's position, it seems highly unlikely it would be published if it were biased or lacking integrity, no? Or maybe you have some data on the peer reviewers?
If you google around a bit you'll find that this discussion is over six months late, and that the paper has already been severely criticized in multiple forums. My objections are hardly new or original, but they may at least balance the view.
Even if it is a meta-analysis, that doesn't make its conclusions unsupported. The author is allowed to choose the data for the meta-analysis according to those studies that fit his protocol, is he not? That is the purpose of a meta-analysis: so that you can use a series of data from different sources.
When you hand-pick the data you include, you bias the result. Every meta-analysis does this, but when the author's highly public position on the subject is well known, we have to regard the entire project as biased.
The data from the stats (that p-value stuff) confirm that some people can tell a difference between "red book" and "hi-res" most of the time (when trained).
Hardly "most of the time". It's interpretations like that which cause the issues.
So at the very worst you have to conclude more work needs to be done to confirm it one way or the other, and at best the jury is still out...
No, the jury hasn't heard any actual evidence yet. They're not out, they're still waiting to hear it. And given the difficulty and expense, we may never get that evidence. The test is hard to do. You need controls everywhere. Test material that has true provenance, both in the original and encoded versions. You need precisely matched playback devices that can be synchronized. You need a massive number of testers, many trials, and careful categorizing of data. You'll even need several different playback systems in several different rooms. This is not a small project, and no individual or even small informal group can pull it off. It would take a large organization, at the university level, or a non-partisan industry association, to fund it and do it. We may never get it done. That's why a meta-analysis is attractive: it gets a lot of existing data into one analysis. But it's hardly definitive. Yet many are hanging their hat on that study as "proof-positive". It's not. In the absence of good scientific data, there's no need to cling to a possibly biased meta-analysis with results hovering around random guessing as proof-positive.
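For a sense of the scale such a test would need: a rough power calculation (my own sketch, assuming the ~53% hit rate discussed in this thread, a one-sided test at the usual 5% level, and 80% power) of how many forced-choice trials it takes to reliably detect that small an effect.

```python
from statistics import NormalDist

nd = NormalDist()
alpha, power = 0.05, 0.80   # conventional significance level and power
p0, p1 = 0.50, 0.53         # chance rate vs the assumed true hit rate

# Standard two-proportion sample-size formula for a one-sided binomial test.
z_a = nd.inv_cdf(1 - alpha)
z_b = nd.inv_cdf(power)
n = ((z_a * (p0 * (1 - p0)) ** 0.5
      + z_b * (p1 * (1 - p1)) ** 0.5) / (p1 - p0)) ** 2
print(f"roughly {n:.0f} trials needed")
```

On the order of 1,700 trials just for the statistics, before you even get to provenance-controlled material, matched hardware, and multiple rooms, which is why no informal group can pull this off.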
[1] It seems that you require absolute scientific proof for anything to be true (or not).
[2] Now that a published meta-analysis is held up as such scientific proof, the paper is immediately derided and the author's integrity called into question. [2a] If this paper was subject to peer review, and given the author's position, it seems highly unlikely it would be published if it were biased or lacking integrity, no?
[3] The author is allowed to choose the data for the meta-analysis according to those studies that fit his protocol, is he not? ... [3a] The data from the stats (that p-value stuff) confirm that some people can tell a difference between "red book" and "hi-res" most of the time (when trained).
[4] So at the very worst you have to conclude more work needs to be done to confirm it one way or the other, and [4b] at best the jury is still out...
78 vs 33. Mono vs Stereo. LP to CD. These were all so transformative we didn't need to conduct blind tests; the differences were obvious to everyone.
1. I can't speak specifically for Pinnahertz, but generally, "no". Very little in science is supported by absolute proof; science is mostly based on the preponderance of quality evidence: the theory of evolution, climate change, relativity, quantum mechanics, and countless others besides.
Those particular subjects are more obscure than wave physics. Wave theory is thoroughly defined by math and physics, and as such, finding absolute proof is not out of the question.
So to the OP, yes... absolute proof for most audio related things is expected.
Audio involves human perception. Wave theory does not. Human perception is still under study, and while some characteristics can be described by math, a lot of how sound is perceived has to be defined by complex sets of conditions and variables, range of normal, etc. Have a look at "Psychoacoustics", perhaps the example of "masking".
There are problems getting anything mathematically absolute in psychoacoustics. Curves abound, and the general understanding is hardly complete.
I'm on board with the scientific view of the "best possible quality of evidence". However, if the current "best quality" hovers around the statistical noise floor, that's a bit too far from "absolute" to even be acceptable.
Is MQA about perceived quality? Or about reconstructing the original recording in the least amount of bandwidth?
One is about psychoacoustics, so I'd agree with you.
The other is an optimization problem, which is just math and engineering.
If it were only about reconstruction of an original in less bandwidth, we wouldn't be having this discussion.
I really hate to put this in a post here, for so many reasons...but...visit their site, read their stuff, check the press releases. It's difficult to navigate, and the Blue Smoke is thick. At the bottom of the How It Works page is a link for "music professionals". Keep drilling.... eventually you find this:
"Unlike analogue transmission, digital is non-degrading. So we don’t have pops and crackles, but we do have another problem – pre- and post-ringing. When a sound is processed back and forth through a digital converter the time resolution is impaired – causing ‘ringing’ before and after the event. This blurs the sound so we can’t tell exactly where it is in 3D space. MQA reduces this ringing by over 10 times compared to a 24/192 recording."
See any problems in that? To get more you have to sit through a video, but...well....you decide if the Blue Smoke is thicker or has cleared. They claim to un-do "time-smear" in existing recordings too.
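For readers unfamiliar with the "ringing" the quote invokes: a linear-phase (symmetric) FIR reconstruction filter really does put energy before its main impulse, and that pre-ringing is what MQA's marketing targets. Below is a minimal, generic illustration of the symmetry, not MQA's actual filter; the tap count, cutoff, and window are arbitrary choices of mine.

```python
from math import sin, cos, pi

def linear_phase_lowpass(num_taps=31, cutoff=0.5):
    """Windowed-sinc lowpass; symmetric taps give linear phase."""
    mid = (num_taps - 1) / 2
    taps = []
    for n in range(num_taps):
        x = n - mid
        # Ideal sinc response, with the removable singularity at x = 0.
        h = cutoff if x == 0 else sin(pi * cutoff * x) / (pi * x)
        w = 0.54 - 0.46 * cos(2 * pi * n / (num_taps - 1))  # Hamming window
        taps.append(h * w)
    return taps

taps = linear_phase_lowpass()
peak = taps.index(max(taps))
pre = sum(abs(t) for t in taps[:peak])      # energy arriving before the peak
post = sum(abs(t) for t in taps[peak + 1:])  # energy arriving after it
print(f"peak at tap {peak}; pre-ring = {pre:.3f}, post-ring = {post:.3f}")
```

By symmetry the pre- and post-ring energies are identical, which is the textbook behavior the marketing copy describes; whether reducing it is audible, and whether "over 10 times" is meaningful, is exactly what the thread is disputing.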