MQA vs Hi Res
May 31, 2018 at 4:58 AM Post #32 of 51
But from what I see, saying "lossless" is just marketing.

In many respects, MQA has employed some of the most sophisticated marketing I've ever seen for an audiophile product/format. So it's hardly surprising some have been caught out by it.

[1] Actually, I'm a fan of the DSD format, which to me sounds really analog and natural (I like it better than hi-res; for me it's just a feeling of naturalness and fullness). ...
[2] That's for sure, but I bet that when you listen to DSD, you can feel something more, like a bit more richness. Or it's just my brain.

1. Again, that's just marketing, but alongside the marketing there has been what one could call "cheating". There are quite a few examples of two masters being made, one of which sounds better on a higher-end system than the other. The better version is then put on the SACD and the worse version on the CD, which effectively "demonstrates" that SACD is better. Or it would, if it weren't for the fact that you could do it the other way around: put the better version on the CD and demonstrate that CD is "better" than SACD.

2. It could be either: just your brain, or, in some cases, an actually different "better" master on the DSD version.

As we've discussed before: Expectation bias can work both ways. You won't hear a difference if you don't want to.

I do have an expectation bias of not hearing a difference. That expectation bias was caused by not hearing a difference in double blind testing! :)

G
 
May 31, 2018 at 5:11 AM Post #33 of 51
I do have an expectation bias of not hearing a difference. That expectation bias was caused by not hearing a difference in double blind testing! :)

G

But not hearing a difference in double blind listening tests can be caused by an expectation bias that no difference will be heard.

Just as someone who is determined to hear a difference will hear one, or convince themselves they do, so the opposite is true. While ABX double blind tests will remove the expectation bias of those who want to hear a difference, they cannot remove the expectation bias of those who are determined not to.
 
May 31, 2018 at 6:39 AM Post #34 of 51
But not hearing a difference in double blind listening tests can be caused by an expectation bias that no difference will be heard.

That's what I'm saying. I've failed to hear a difference in a double blind test, which could be due to an expectation of not hearing a difference, an expectation caused by some previous double blind test which demonstrated no difference.

In practice of course, with a double blind test we are actively trying to hear a difference; if we weren't trying to hear a difference, what would be the point of running a differential test in the first place?

G
 
May 31, 2018 at 7:02 AM Post #35 of 51
That's what I'm saying. I've failed to hear a difference in a double blind test, which could be due to an expectation of not hearing a difference, an expectation caused by some previous double blind test which demonstrated no difference.

In practice of course, with a double blind test we are actively trying to hear a difference; if we weren't trying to hear a difference, what would be the point of running a differential test in the first place?

G

But that is not my point, and I think I made it clear. I think double blind tests are susceptible to bias, which is skewed in one direction: towards allowing subjects to only get caught out when they think there is a difference and there isn't, not the other way around.
 
May 31, 2018 at 9:03 AM Post #36 of 51
But that is not my point, and I think I made it clear. I think double blind tests are susceptible to bias, which is skewed in one direction: towards allowing subjects to only get caught out when they think there is a difference and there isn't, not the other way around.
Because we're testing the null hypothesis: "there is no audible difference". The whole test is about finding people who can disprove it by showing they perceive a difference, and, to talk like a pro in statistical technical jargon, F everybody who can't ^_^. If I pay 200 guys to pick at random, and you get your one guy to pass, the conclusion will still be that there was an audible difference.
IMO we can worry about negative bias for side projects and interpretations beyond the original null hypothesis, but the test was never guaranteed to answer those extra questions.
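For anyone curious, the arithmetic behind "passing" an ABX is just the one-sided binomial tail: the probability of scoring at least that well by pure guessing. A minimal sketch in Python (the 12-of-16 criterion below is just a commonly used example, not an official threshold):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of getting at least `correct` right out of `trials`
    by guessing (p = 0.5 per trial), i.e. the one-sided binomial tail."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# A common ABX criterion: 12 or more correct out of 16 trials
print(round(abx_p_value(12, 16), 4))  # ≈ 0.0384, i.e. under 5%
```

This is also why one listener clearing the bar once means less than the same listener doing it repeatedly: among 200 random guessers you'd expect roughly 200 × 0.038 ≈ 8 to "pass" by luck alone.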

Anyway, for MQA, even setting up a blind test would be a massive undertaking. We can't really separate the DAC's special sauce from the format itself (same problem as DSD). We have no traceability for the original master, and as even the same master will show up differently, it's harder to be sure we're testing the same reference. It's really not one of those things we can test in foobar's ABX module. And that's where I come in saying that, because it's harder for consumers to properly test for themselves, they need to be even more skeptical than when wondering if 16/44 sounds like 24/192, something they can try to answer with ABX on their own.
 
May 31, 2018 at 11:44 AM Post #37 of 51
But not hearing a difference in double blind listening tests can be caused by an expectation bias that no difference will be heard.

The whole point of a comparison test is trying your best to consistently discern a difference. If I do a careful test and something appears to be audibly transparent, then that is good enough for me. There may be a slight difference if I crank the volume or strain to hear some very specific sort of sound, but I really don't care. That's a molehill compared to the mountain of bias in the completely subjective impressions we see in audio forums.

If I can't hear it in careful testing, it's close enough for government work for me.
 
May 31, 2018 at 11:59 AM Post #38 of 51
But that is not my point, and I think I made it clear. I think double blind tests are susceptible to bias, which is skewed in one direction: towards allowing subjects to only get caught out when they think there is a difference and there isn't, not the other way around.
The purpose of an ABX listening test is NOT to show that 2 separate files sound identical. It is used to show that a difference can be identified. If someone claims to be able to hear a difference, those are the people who should be attempting an ABX, to verify whether their claims are valid by removing the most obvious of known biases and better isolating the listener's hearing.

My assumptions about audio come from measurements and what I have read about human perception and hearing. I don't base any claims solely on the results of failed ABX tests. ABX tests are used in situations where the question being asked might be, "Can anyone hear a difference?" Nobody would seriously consider using an ABX test to answer the question, "Are these the same?" That is where analysis and measurements are used.
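To illustrate the "analysis and measurements" side: answering "are these the same?" is usually done with a null test, subtracting one decoded signal from the other and measuring whatever is left. A rough sketch, assuming both files have already been decoded to aligned sample lists (the function name is mine, not from any particular tool):

```python
import math

def residual_dbfs(a, b):
    """RMS level, in dBFS, of the difference between two equal-length
    lists of samples (1.0 = full scale). -inf means bit-identical."""
    assert len(a) == len(b) and a, "need two equal-length, non-empty signals"
    rms = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

samples = [0.1, -0.2, 0.3]
print(residual_dbfs(samples, samples))  # -inf: the signals null perfectly
```

If the residual sits far below the noise floor of any playback chain, measurement has answered the "same?" question more conclusively than any number of ABX trials could.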
 
May 31, 2018 at 6:57 PM Post #39 of 51
The purpose of an ABX listening test is NOT to show that 2 separate files sound identical. It is used to show that a difference can be identified. If someone claims to be able to hear a difference, those are the people who should be attempting an ABX, to verify whether their claims are valid by removing the most obvious of known biases and better isolating the listener's hearing.

My assumptions about audio come from measurements and what I have read about human perception and hearing. I don't base any claims solely on the results of failed ABX tests. ABX tests are used in situations where the question being asked might be, "Can anyone hear a difference?" Nobody would seriously consider using an ABX test to answer the question, "Are these the same?" That is where analysis and measurements are used.

Then you should be replying to the people who said they did double blind tests to prove to themselves there is no difference, not me.
 
May 31, 2018 at 6:58 PM Post #40 of 51
The whole point of a comparison test is trying your best to consistently discern a difference. If I do a careful test and something appears to be audibly transparent, then that is good enough to me. There may be a slight difference if I crank the volume or strain to hear some very specific sort of sound, but I really don't care. That's a molehill compared to the mountain of bias in the completely subjective impressions we see in audio forums.

If I can't hear it in carefully testing, it's close enough for government work for me.

So you don't do double blind tests?
 
May 31, 2018 at 7:21 PM Post #41 of 51
When I'm evaluating equipment for my rig, single blind is good enough for my purposes. Double blind would be better if I was publishing the results. So would increasing the sample size and test subjects. But I don't need to split fractions that far. All I care about is if the new piece of equipment sounds different (and I haven't found one yet that does).

I think everyone who buys home audio equipment and is interested in improving fidelity should do some sort of controlled testing. Most audiophiles do none of that though. The amazing thing is that many audio equipment reviewers don't either.
 
May 31, 2018 at 8:17 PM Post #42 of 51
Then you should be replying to the people who said they did double blind tests to prove to themselves there is no difference, not me.
That still doesn't prove the files are the same, only that the person taking the ABX was unable to identify a difference. That doesn't help anyone except the person taking the ABX.
 
May 31, 2018 at 8:50 PM Post #43 of 51
If everyone did tests, everyone would know for themselves. I find that the people who argue the most about things are the ones who never went to the effort of finding out for themselves. Tests aren't that difficult. Everyone should figure out how to do them for themselves. If they did, they wouldn't wonder whether 24/96 is better than AAC 256 VBR or if all their DACs sound the same. They would know for themselves.

It's great to have scientists figure out stuff for us, but it's even better to take those principles and find out how they apply to the real world by doing our own tests. It also helps a LOT to take the time to play with a sound editing program and figure out what frequencies and decibels actually sound like. Then when someone starts talking about the importance of 30kHz at -70dB, you know what they're talking about!
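On playing with frequencies and decibels: the core relationships are simple enough to sketch in a few lines of Python. The 30kHz/-70dB numbers are just the ones from the post above; nothing here is specific to any particular sound editor:

```python
import math

def dbfs_to_amplitude(dbfs: float) -> float:
    """Convert a level in dBFS to linear amplitude (1.0 = full scale)."""
    return 10 ** (dbfs / 20)

def sine_tone(freq_hz: float, level_dbfs: float, seconds: float, rate: int = 44100):
    """Generate raw sample values for a sine tone at the given level.
    Note: a 44.1kHz file can't even contain 30kHz; Nyquist tops out at 22.05kHz."""
    amp = dbfs_to_amplitude(level_dbfs)
    n = int(seconds * rate)
    return [amp * math.sin(2 * math.pi * freq_hz * i / rate) for i in range(n)]

# -70 dBFS is a very small amplitude indeed:
print(dbfs_to_amplitude(-70.0))  # about 0.000316 of full scale
```

Generating a few tones like this and listening to them is a quick way to attach real sensations to numbers like "-70dB".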
 
Jun 1, 2018 at 1:57 AM Post #44 of 51
Again, it's a matter of what we're trying to achieve. When I wonder if a lower-rate encoding of my lossy files would annoy me (sound different, with the assumption that different would be crap), I ABX a bunch of tracks I think might be tricky, plus a few of my favorites. If I fail on those, actual audible difference or not, I will conclude that it's good enough for me.
What I wouldn't do is claim that there is no audible difference on any track for any listener, because I don't have any means to demonstrate that with a listening test.
 
Jun 1, 2018 at 12:39 PM Post #45 of 51
When I was testing that, I went through about a hundred tracks of all kinds over a period of two weeks. It was important to me because I was planning to encode my entire music library, which adds up to over a year and a half of music. I didn't want to get finished ripping tens of thousands of CDs and then discover that some of them might not have ripped as well as others. When I determined the point of transparency, I added VBR to allow a little extra data rate where necessary, which also makes for more efficient files. I can safely say that AAC 256 VBR is completely transparent for normal purposes. I don't expect everyone to take my word for it though. I expect them to do their own tests, and I'm confident they'll find out the same thing I did.

The thing I won't pay attention to is someone claiming to hear "night and day" differences when they don't even know the difference between codecs and refuse to do controlled listening tests because "tests lie".
 
