MQA: Revolutionary British streaming technology
Jan 28, 2017 at 9:18 PM Post #872 of 1,869
 
Don't you keep a list of the 80s-vintage ADCs you thought were horrible and that messed up so many early digital albums?

 
I just avoid that whole era.
 
Jan 29, 2017 at 5:44 AM Post #874 of 1,869
... So then, jumping to MQA: why the idea that its time-smearing reduction results in a whole new big step forward in sound quality? Chances are very few people have the gear, ears, training or listening area to show it at all. And when it shows, it is scarcely better than a coin toss. 

 
For me, the most telling piece of evidence is that when demoing, the MQA people never compare the original track against the MQA encoded track. They only ever play the MQA track.
 
Jan 29, 2017 at 10:51 AM Post #875 of 1,869
 
Hi all, I guess you may be familiar with this AES paper? It's freely available to download

http://www.aes.org/tmpFiles/elib/20170128/18296.pdf

Here is the conclusion:

In summary, these results imply that, though the effect is perhaps small and difficult to detect, the perceived fidelity of an audio recording and playback chain is affected by operating beyond conventional consumer oriented levels. Furthermore, though the causes are still unknown, this perceived effect can be confirmed with a variety of statistical approaches and it can be greatly improved through training.

Oh brother. THAT unfortunate paper again. Do you understand what a "meta-analysis" is? The conclusions might reflect a massive amount of statistics, but there are several huge problems. Not the least of which is that the very first study cited in the paper was done two years before the introduction of the CD. Where was the Hi-Res system then? But, most significantly, no study revealed a single individual who could reliably and repeatably identify Hi-Res in a comparison. The results showed that, over the massed amount of data, Hi-Res was picked at a rate 3% better than random guessing. Significant, you say? Or was the entire paper biased? 18 papers were selected out of 80 available to compile.
 
Was the author biased? He held these positions:
• Co-Chair of the Audio Engineering Society (AES) Technical Committee on High-Resolution Audio
• General Chair of the 31st AES Conference; New Directions in High Resolution Audio, 2007
 
How do you think he selected the tests to compile? And even if he cherry-picked the ones that kinda-sorta supported the audibility of Hi-res, the best...the absolute BEST he could come up with is 3% better than random guessing, with over 12K tests.  
 
The paper has been largely discredited elsewhere, no need for me to go on. 
So I'll assume that it is possible to improve on redbook for some people some of the time.

And that's the assumption the author, and those supporting and advancing MQA, would want you to arrive at. But look carefully...very, very carefully. Nobody got it right reliably or repeatedly. Nobody. That means everybody got it wrong nearly half of the time, and 47% were as good as guessing or worse. If the act of incorrectly selecting Hi-Res over Redbook would result in your immediate death, you'd stand a much, much better chance of survival playing Russian Roulette (5:1) than of correctly picking HRA (53:47).
 
 And from that we're all supposed to jump on board and assume any version of Hi-res audio is clearly, reliably audibly better to everyone.
 
No, I don't think so.
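To make the "3% better than guessing" point concrete, here is an illustrative sketch (the 12,000-trial figure comes from the post above; the exact 53/47 split is an assumption for illustration) showing how a hit rate barely above chance can still be "statistically significant" over a huge number of trials:

```python
import math

def binom_significance(successes, trials, p0=0.5):
    """One-sided z-test (normal approximation) for a hit rate vs. chance p0."""
    phat = successes / trials
    se = math.sqrt(p0 * (1 - p0) / trials)
    z = (phat - p0) / se
    p_value = 0.5 * math.erfc(z / math.sqrt(2))  # P(Z >= z)
    return phat, z, p_value

# Illustrative numbers only: a 53% hit rate over 12,000 pooled trials
rate, z, p = binom_significance(6360, 12000)
```

With these numbers the z-score is around 6.6 and the p-value is vanishingly small, so the pooled result is "significant" in the statistical sense, while each individual listener is still only three points better than a coin toss. Statistical significance and practical audibility are different questions.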

 
It seems that you require scientific proof absolute for anything to be true (or not).
 
Now that a published meta-analysis is held up as such scientific proof, the paper is immediately derided and the author's integrity called into question. If this paper was subject to peer review, then given the author's position it seems highly unlikely it would be published if it were biased or lacking integrity, no? Or maybe you have some data on the peer reviewers?
 
Even if it is a meta-analysis, that doesn't make its conclusions unsupported. The author is allowed to choose the data for the meta-analysis according to which studies fit his protocol, is he not? That is the purpose of a meta-analysis: so that you can use a series of data from different sources.
The data from the stats (that p-value stuff) confirm that some people can tell a difference between "red book" and "hi-res" most of the time (when trained).
 
So at the very worst you have to conclude more work needs to be done to confirm it one way or the other, and so at best the jury is still out...
 
 
Ok, I'm going this time and will leave you all to it and get back to actual listening to music in a range of formats and resolutions.
 
Kind regards
 
Jan 29, 2017 at 1:10 PM Post #876 of 1,869
For me, the most telling piece of evidence is that when demoing, the MQA people never compare the original track against the MQA encoded track. They only ever play the MQA track.


Yes, that too. If the superior sound were so evident, why did they very carefully make the direct comparison unavailable at every public demo? The reverse would have been very convincing if MQA were great.
 
Jan 29, 2017 at 1:47 PM Post #877 of 1,869
   
It seems that you require scientific proof absolute for anything to be true (or not).

If a technology is being advanced by one manufacturer to the point of widespread adoption, yes, scientific proof would be required to justify its use and associated expense. There are far too many non-technical factors in its advancement, and no manufacturer is totally magnanimous. There should be a clear and definite advantage to those paying for the technology (that would be the buyer). If that advantage is minimal, not universally detectable and at best vague, what we have is a concept heavily weighted toward the manufacturer. The only way to detect either condition is with scientific testing. And we have none of that now.
Now that a published meta-analysis is held up as such scientific proof, the paper is immediately derided and the author's integrity called into question. If this paper was subject to peer review, then given the author's position it seems highly unlikely it would be published if it were biased or lacking integrity, no? Or maybe you have some data on the peer reviewers?

If you google around a bit you'll find that this discussion is over 6 months late, and that the paper has been severely criticized already in multiple forums. My objections are hardly new or original, but at least may balance the view.
Even if it is a meta-analysis, that doesn't make its conclusions unsupported. The author is allowed to choose the data for the meta-analysis according to which studies fit his protocol, is he not? That is the purpose of a meta-analysis: so that you can use a series of data from different sources.

When you hand-pick the data you include, you bias the result. No meta-analysis could do otherwise, but when the author's highly public position on the subject is well known, we have to declare the entire project biased.
The data from the stats (that p-value stuff) confirm that some people can tell a difference between "red book" and "hi-res" most of the time (when trained).

Hardly "most of the time". It's interpretations like that that cause the issues.
So at the very worst you have to conclude more work needs to be done to confirm it one way or the other, and so at best the jury is still out...

No, the jury hasn't heard any actual evidence yet. They're not out; they're still waiting to hear it. And given the difficulty and expense, we may never get that evidence. The test is hard to do. You need controls everywhere. Test material that has true provenance, in both the original and encoded versions. You need precisely matched playback devices that can be synchronised. You need a massive number of testers, many trials, and careful categorizing of data. You'll even need several different playback systems in several different rooms. This is not a small project, and no individual or even small informal group can pull it off. It would take a large organization, university-level or a non-partisan industry association, to fund it and do it. We may never get it done. That's why a meta-analysis is attractive: it gets a lot of existing data into one analysis. But it's hardly definitive. Yet many are hanging their hat on that study as "proof-positive". It's not. In the absence of good scientific data, there's no need to cling to a possibly biased meta-analysis, with results hovering around random guessing, as proof-positive.
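The "massive number of testers, many trials" point can be put into numbers with a standard sample-size estimate. This is a hedged sketch, not anything from the post: it approximates how many forced-choice trials would be needed to reliably detect a true hit rate of 53% against the 50% chance level (the alpha and power constants are conventional assumptions):

```python
import math

def trials_needed(p_true, alpha_z=1.645, power_z=0.842, p0=0.5):
    """Rough forced-choice trial count needed to detect a true hit rate p_true
    against chance p0 (one-sided test at ~5% alpha, ~80% power, normal approx.)."""
    delta = p_true - p0
    sigma = math.sqrt(p_true * (1 - p_true))
    return math.ceil(((alpha_z + power_z) * sigma / delta) ** 2)

n_small_edge = trials_needed(0.53)  # a 3-point edge over guessing
n_big_edge = trials_needed(0.60)    # a clearly audible difference, for contrast
```

With these assumptions a 3-point edge needs on the order of 1,700 well-controlled trials, while a genuinely obvious difference (60% hit rate) needs only around 150, which is exactly why tiny claimed improvements demand such an enormous test regime.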
 
Jan 29, 2017 at 2:12 PM Post #878 of 1,869
  And given the difficulty and expense, we may never get that evidence. The test is hard to do. You need controls everywhere. Test material that has true provenance, in both the original and encoded versions. You need precisely matched playback devices that can be synchronised. You need a massive number of testers, many trials, and careful categorizing of data. You'll even need several different playback systems in several different rooms. This is not a small project, and no individual or even small informal group can pull it off. It would take a large organization, university-level or a non-partisan industry association, to fund it and do it. We may never get it done.

 
This brings up my benchmark rule for what qualifies as "transformative" in audio:
 
The more difficult the test regime needed to detect an advancement in quality, the less of an advancement it is.
 
Or conversely:
 
The less difficult a test regime needed to detect an advancement in quality, the more of an advancement it is.
 
78 vs 33. Mono vs Stereo. LP to CD.  These were all so transformative we didn't need to conduct blind tests; the differences were obvious to everyone.
 
High resolution audio, MQA or otherwise, is a joke compared to the giants that came before.
 
Jan 30, 2017 at 4:31 AM Post #879 of 1,869
 
   
It seems that you require scientific proof absolute for anything to be true (or not).

If a technology is being advanced by one manufacturer to the point of widespread adoption, yes, scientific proof would be required to justify its use and associated expense. There are far too many non-technical factors in its advancement, and no manufacturer is totally magnanimous. There should be a clear and definite advantage to those paying for the technology (that would be the buyer). If that advantage is minimal, not universally detectable and at best vague, what we have is a concept heavily weighted toward the manufacturer. The only way to detect either condition is with scientific testing. And we have none of that now.
Now that a published meta-analysis is held up as such scientific proof, the paper is immediately derided and the author's integrity called into question. If this paper was subject to peer review, then given the author's position it seems highly unlikely it would be published if it were biased or lacking integrity, no? Or maybe you have some data on the peer reviewers?

If you google around a bit you'll find that this discussion is over 6 months late, and that the paper has been severely criticized already in multiple forums. My objections are hardly new or original, but at least may balance the view.
Even if it is a meta-analysis, that doesn't make its conclusions unsupported. The author is allowed to choose the data for the meta-analysis according to which studies fit his protocol, is he not? That is the purpose of a meta-analysis: so that you can use a series of data from different sources.

When you hand-pick the data you include, you bias the result. No meta-analysis could do otherwise, but when the author's highly public position on the subject is well known, we have to declare the entire project biased.
The data from the stats (that p-value stuff) confirm that some people can tell a difference between "red book" and "hi-res" most of the time (when trained).

Hardly "most of the time". It's interpretations like that that cause the issues.
So at the very worst you have to conclude more work needs to be done to confirm it one way or the other, and so at best the jury is still out...

No, the jury hasn't heard any actual evidence yet. They're not out; they're still waiting to hear it. And given the difficulty and expense, we may never get that evidence. The test is hard to do. You need controls everywhere. Test material that has true provenance, in both the original and encoded versions. You need precisely matched playback devices that can be synchronised. You need a massive number of testers, many trials, and careful categorizing of data. You'll even need several different playback systems in several different rooms. This is not a small project, and no individual or even small informal group can pull it off. It would take a large organization, university-level or a non-partisan industry association, to fund it and do it. We may never get it done. That's why a meta-analysis is attractive: it gets a lot of existing data into one analysis. But it's hardly definitive. Yet many are hanging their hat on that study as "proof-positive". It's not. In the absence of good scientific data, there's no need to cling to a possibly biased meta-analysis, with results hovering around random guessing, as proof-positive.

The author clearly did not "cherry-pick" his data but applied a scientific protocol to determine which studies fell within that protocol and which did not. If you understood a meta-analysis then you would know this. It is also noted in the reference section.
So the author is biased because he has a high-profile position within the professional audio community? That's like calling Einstein biased in his papers because he was a famous public figure in the field of physics. Do you have data to support your allegations of bias?
It's not me saying most of the time; that's what the statistics say. 
 
You can have your own conclusions, of course, but they don't agree with the data, which is where I came in and now am definitely leaving. Please don't reply. Goodbye forever. 
 
Kind regards 
 
Jan 30, 2017 at 6:25 AM Post #880 of 1,869
  [1] It seems that you require scientific proof absolute for anything to be true (or not).
 
[2] Now that a published meta analysis is held up as such scientific proof the paper is immediately derided and the authors integrity called into question. [2a] If this paper was subject to peer review and given the authors position it seems highly unlikely it would be published if it is biased or lacking integrity no?
 
[3] The author is allowed to choose the data for the meta analysis according to those studies which fit into his protocol is he not? ... [3a] The data from the stats ( that p value stuff) confirm some of the people can tell a difference between "red book" and "hi res" most of the time (when trained)  
 
[4] So at the very worst you have to conclude more work needs to be done to confirm it one way or the other  and [4b] so at best the jury is still out.......

 
1. I can't speak specifically for Pinnahertz but generally, "no". Very little in science is supported with absolute proof, science is mostly based on the preponderance of quality evidence. The theory of Evolution, climate change, relativity, quantum mechanics and countless others besides. In fact, we'd have to throw out much/most of science if we required "proof absolute". In practice with audio, it's a case of taking the known evidence (such as the physiology of the human ear for example) and correlating that with practical studies to hopefully result in the best possible quality evidence.
 
2. Yes, it has been held up as "scientific proof" but only by those who don't know what science (or scientific proof) actually is! At best this meta-analysis contributes to the scientific evidence; it's not proof of anything and no real scientist, or anyone who understands the process of science, would dare claim it as scientific proof, including the author! This statement is true not just of this meta-analysis but of all the published studies it includes (and those it doesn't). For example, the Meyer & Moran study, which is so commonly quoted, likewise does not prove that SACD cannot be distinguished from CD; it's just contributory scientific evidence. What separates the various studies is the quality of the scientific evidence they provide.
2a. No!! Firstly, you appear to misunderstand what "peer review" is, or rather, what it is not. It is not a guarantee of accuracy or lack of bias! Nature puts it well, stating: "Whether there is any such thing as a paper so bad that it cannot be published in any peer reviewed journal is debatable. Nevertheless, scientists understand that peer review per se provides only a minimal assurance of quality, and that the public conception of peer review as a stamp of authentication is far from the truth." Again, this applies equally to studies which evidence that humans can differentiate hi-res from CD as it does to those which don't. Secondly, the "author's position" has no bearing on the matter; even if he were the most respected scientist on the planet he would still have to go through peer review and then scrutiny by the wider expert/scientific readership. This is in fact a fundamental tenet of science: no one person's reputation or position puts their work beyond scrutiny, so that these and any other potential biases are reduced as much as possible. And lastly, when a study/paper is commissioned/funded by a company with a vested financial interest in a particular conclusion, that automatically calls into question the "integrity" of a conclusion which is favourable to that company and impacts the judgement of that evidence's quality.
 
3. Yes, s/he is, but the danger of course is in the reverse process occurring: that the protocols/parameters are initially set so as to include certain specific data and exclude certain other data, to the benefit of a desired conclusion!
3a. It does NOT "confirm" that at all; all it confirms is that the data examined indicate a statistically significant probability, with a low level of confidence.
 
4. No, that cannot work. No amount of work could confirm (prove) that hi-res cannot be differentiated; even if everyone on the planet were tested, that still wouldn't be "proof absolute" because we can't test all those who have lived or will live. In general, science cannot "prove absolute" a negative, that something does not exist. Also relevant here is the burden of proof: it's not up to science to prove that hi-res cannot be differentiated; those claiming it can must prove it with science. Which, unlike the impossibility of proving a negative, can be done. We just need two things: firstly, a correlation with physiology, a means by which human physiology would allow the sensing of the differences; and secondly, enough evidence of sufficient quality that there are those who can employ their physiology to accurately differentiate.
4b. No, it's not! The first condition has never been met. Even those who claim that differentiation is possible have never demonstrated how it is possible; the differences between hi-res and 44/16 are outside both the demonstrated limits of human hearing and any human physiological mechanism which could allow for such differentiation. Additionally, the second condition has not been adequately met, which brings us back to the quality of evidence and two considerations:
 
A. Without the first condition being met, we're going to require exceptionally high-quality evidence, because to accept that evidence we would also have to accept that the science of physiology is wrong or, at least, far more incomplete than science currently accepts.
 
B. Compared to the usual quality of evidence (anecdotal, sighted tests), this meta-analysis is far higher quality BUT still quite questionable evidence. I have several problems with it but the most severe is that, in effect, it's not a meta-analysis! If we remove just one of the studies (Theiss 1997), we lose our statistically significant probability. Without that one result, the probability falls back to within the statistical limits of random guessing. In other words, the paper's conclusion is effectively based solely on the Theiss study! A cynical person could therefore see this recent paper as effectively nothing more than a re-hash of the 20-year-old Theiss paper, although I don't believe there is any evidence that this was the deliberate intention of the author. The Theiss paper (and several other similar papers) has a critical flaw. I myself (along with others) have passed hi-res (96/24 vs 44/16) blind and double-blind tests on numerous occasions, more than a dozen different tests/studies. Good, level-balanced, same-recording, high-end equipment (studio environment) tests/studies! Every single time, though, it has been proven that it was not actually a comparison between hi-res and 16/44 but some other factor: a programming issue at 44.1kHz by a DSP processor, or IMD. The latter is a particularly common issue, so much so that Sony requires SACD players to implement a LPF at 30kHz (or at most, 50kHz) to help combat the issue downstream. The Theiss study did not consider, test for, or eliminate IMD. Even were this meta-analysis far less questionable, it still would not be sufficient to meet the exceptionally high quality of evidence required; it would, though, suggest the need for further investigation. As it stands, it doesn't!
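The "remove one study and the significance disappears" argument is a standard leave-one-out sensitivity check. Here is a toy sketch with entirely made-up study results (NOT the data from the paper or from Theiss 1997) showing how a single outlier study can carry a pooled result on its own:

```python
import math

def pooled_z(studies):
    """Pooled z-score for a list of (successes, trials) pairs against 50% chance."""
    s = sum(k for k, _ in studies)
    n = sum(n for _, n in studies)
    return (s / n - 0.5) / math.sqrt(0.25 / n)

# Hypothetical study results; the third study plays the role of the outlier.
studies = [(505, 1000), (252, 500), (650, 1000), (404, 800)]

baseline = pooled_z(studies)  # pooled across all studies
# Leave-one-out: recompute the pooled z with each study removed in turn
loo = [pooled_z(studies[:i] + studies[i + 1:]) for i in range(len(studies))]
```

With these numbers the pooled z clears the usual 1.645 one-sided threshold, but dropping the one outlier study pushes the remainder back into coin-toss territory, which is exactly the structural weakness being described: the "significant" pooled result is really one study's result in disguise.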
 
 
78 vs 33. Mono vs Stereo. LP to CD.  These were all so transformative we didn't need to conduct blind tests; the differences were obvious to everyone.

 
We have to be a little careful here, careful we don't fall into the same logical trap as many audiophiles. Although the differences between the formats you listed should be somewhat/very obvious to everyone, we can't rely purely on this latter fact. Even if some difference is within the limits of audibility and apparently obvious to everyone, blind testing can't hurt and provides an additional level of surety/quality to the evidence. After all, the difference between Baa and Faa in the McGurk Effect is also obvious to everyone!
 
G
 
Jan 30, 2017 at 9:57 AM Post #881 of 1,869
   
1. I can't speak specifically for Pinnahertz but generally, "no". Very little in science is supported with absolute proof, science is mostly based on the preponderance of quality evidence. The theory of Evolution, climate change, relativity, quantum mechanics and countless others besides. 

 
Those particular subjects are more obscure than wave physics. Wave theory is completely defined by math and physics, and as such, finding absolute proof is not out of the question.
 
So to the OP, yes... absolute proof for most audio related things is expected. 
 
Jan 30, 2017 at 3:22 PM Post #882 of 1,869
   
Those particular subjects are more obscure than wave physics. Wave theory is completely defined by math and physics, and as such, finding absolute proof is not out of the question.
 
So to the OP, yes... absolute proof for most audio related things is expected. 

Audio involves human perception.  Wave theory does not.  Human perception is still under study, and while some characteristics can be described by math, a lot of how sound is perceived has to be defined by complex sets of conditions and variables, range of normal, etc.  Have a look at "Psychoacoustics", perhaps the example of "masking".  
 
There are problems getting anything mathematically absolute in psychoacoustics.  Curves abound, and the general understanding is hardly complete. 
 
I'm on board with the scientific view of the "best possible quality of evidence".  However, if the current "best quality" hovers around the statistical noise floor, that's a bit too far from "absolute" to be even acceptable. 
 
Jan 30, 2017 at 4:51 PM Post #883 of 1,869
Audio involves human perception.  Wave theory does not.  Human perception is still under study, and while some characteristics can be described by math, a lot of how sound is perceived has to be defined by complex sets of conditions and variables, range of normal, etc.  Have a look at "Psychoacoustics", perhaps the example of "masking".  

There are problems getting anything mathematically absolute in psychoacoustics.  Curves abound, and the general understanding is hardly complete. 

I'm on board with the scientific view of the "best possible quality of evidence".  However, if the current "best quality" hovers around the statistical noise floor, that's a bit too far from "absolute" to be even acceptable. 


Is MQA about perceived quality? Or about reconstructing the original recording in the least amount of bandwidth?

One is about psychoacoustics, so I'd agree with you.

The other is an optimization problem, which is just math and engineering.
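To put a rough number on the bandwidth side, the math really is trivial. A hedged sketch (raw PCM arithmetic only; the 24/48 container is an assumption based on common MQA distribution formats, and real FLAC-compressed bitrates are lower on both sides):

```python
def raw_pcm_bitrate(rate_hz, bits, channels=2):
    """Raw, uncompressed stereo PCM bitrate in bits per second."""
    return rate_hz * bits * channels

hires = raw_pcm_bitrate(192_000, 24)   # a 24/192 studio master
carrier = raw_pcm_bitrate(48_000, 24)  # an assumed 24/48 distribution container
ratio = hires / carrier
```

Raw 24/192 is about 9.2 Mb/s against roughly 2.3 Mb/s for a 24/48 container, a 4:1 reduction before any lossless compression, which is the optimization-problem framing in a nutshell.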
 
Jan 30, 2017 at 5:23 PM Post #884 of 1,869
Is MQA about perceived quality? Or about reconstructing the original recording in the least amount of bandwidth?

One is about psychoacoustics, so I'd agree with you.

The other is an optimization problem, which is just math and engineering.

If it were only about reconstruction of an original in less bandwidth, we wouldn't be having this discussion.  
 
I really hate to put this in a post here, for so many reasons...but...visit their site, read their stuff, check the press releases.  It's difficult to navigate, and the Blue Smoke is thick.  At the bottom of the How It Works page is a link for "music professionals".   Keep drilling.... eventually you find this:
 
"Unlike analogue transmission, digital is non-degrading. So we don’t have pops and crackles, but we do have another problem – pre- and post-ringing. When a sound is processed back and forth through a digital converter the time resolution is impaired – causing ‘ringing’ before and after the event. This blurs the sound so we can’t tell exactly where it is in 3D space. MQA reduces this ringing by over 10 times compared to a 24/192 recording."
 
See any problems in that? To get more you have to sit through a video, but...well...you decide if the Blue Smoke is thicker or has cleared. They claim to undo "time-smear" in existing recordings too. 
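For what it's worth, the "pre- and post-ringing" in that quote refers to a real property of linear-phase (symmetric) FIR filters; whether it is audible at these levels is the contested part. A minimal sketch, assuming a textbook Hamming-windowed-sinc lowpass (not MQA's actual filter), showing that such a filter necessarily rings both before and after its main tap:

```python
import math

def windowed_sinc_lowpass(cutoff, taps):
    """Linear-phase FIR lowpass (Hamming-windowed sinc).
    cutoff is a fraction of Nyquist (0.5 = half-band); taps should be odd.
    The impulse response is symmetric in time about its centre tap."""
    mid = (taps - 1) / 2
    h = []
    for n in range(taps):
        x = n - mid
        s = cutoff if x == 0 else math.sin(math.pi * cutoff * x) / (math.pi * x)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / (taps - 1))  # Hamming window
        h.append(s * w)
    return h

h = windowed_sinc_lowpass(0.5, 63)
peak = h.index(max(h))
pre = sum(abs(v) for v in h[:peak])       # ripple BEFORE the main tap: pre-ringing
post = sum(abs(v) for v in h[peak + 1:])  # ripple after it: post-ringing
```

Because the response is symmetric, the pre- and post-ringing energies come out equal; a minimum-phase design would instead push all the ringing after the impulse, which is roughly the trade the MQA marketing text describes. None of this tells you whether the ringing is audible.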
 
Jan 30, 2017 at 5:30 PM Post #885 of 1,869
If it were only about reconstruction of an original in less bandwidth, we wouldn't be having this discussion.  

I really hate to put this in a post here, for so many reasons...but...visit their site, read their stuff, check the press releases.  It's difficult to navigate, and the Blue Smoke is thick.  At the bottom of the How It Works page is a link for "music professionals".   Keep drilling.... eventually you find this:

"Unlike analogue transmission, digital is non-degrading. So we don’t have pops and crackles, but we do have another problem – pre- and post-ringing. When a sound is processed back and forth through a digital converter the time resolution is impaired – causing ‘ringing’ before and after the event. This blurs the sound so we can’t tell exactly where it is in 3D space. MQA reduces this ringing by over 10 times compared to a 24/192 recording."

See any problems in that?  To get more you have to sit through a video, but...well....you decide if the Blue Smoke is thicker or has cleared.  They claim to un-do "time-smear" in existing recordings too. 


They're trying to snag licensing deals... I miss the days when engineering was about solving problems, not just making money.
 
