KeithEmo
Member of the Trade: Emotiva
- Joined
- Aug 13, 2014
- Posts
- 1,698
- Likes
- 868
You've got several good points there.....
To many of us, based on the technical details, the "obvious default assumption" is that files processed with lossy compression will sound very different.
After all:
- "by the numbers" a significant portion of the data is in fact being discarded
- "by the pictures" the differences between a lossy file and the original are quite obvious on an oscilloscope or audio editor
- there is a long history of claims of things being "audibly identical" which have turned out to be untrue
- as a broad generalization, most audiophiles believe that most differences, especially those that are easily measurable, are likely to be audible
- some of us are made uncomfortable by not having what I would term "wide safety margins" on various things including audio files
(even if we believe that something is functionally just as good we are still "more comfortable" with something that is "twice as good" than with something that is "just good enough")
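The "by the pictures" point can be sketched without any real codec: subtract a decoded signal from the original and measure what's left. This is a toy illustration only — the "codec loss" here is stand-in white noise at a made-up level, whereas a real AAC/MP3 encoder shapes its error with a psychoacoustic model:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
t = np.arange(fs) / fs
original = np.sin(2 * np.pi * 1000 * t)  # 1 kHz test tone, 1 second

# Stand-in for codec loss: small wideband error (hypothetical level;
# a real codec's error is spectrally shaped, not white like this).
decoded = original + 1e-3 * rng.standard_normal(fs)

# The difference signal is measurably nonzero -- this is what shows up
# "by the pictures" in an audio editor, audible or not.
residual = original - decoded
rms = np.sqrt(np.mean(residual**2))
print(f"residual RMS: {rms:.2e}")
```

The point the sketch makes is narrow: a nonzero, easily plotted residual tells you the data changed, not whether anyone can hear it.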
And, yes, lack of conclusive proof is going to run afoul of "ego".
Tell a dedicated audiophile that "only 150 people on the entire planet can hear a difference" and, odds are, he will be convinced that he's one of the 150 who can....
And you are not going to convince him that this is unlikely using statistics.
(Face it, if most people found statistics compelling, then very few people would gamble.)
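The statistics point can be made concrete: in a forced-choice blind test, pure guessing passes surprisingly often when there are only a few trials, which is part of why the numbers alone rarely persuade anyone. A minimal sketch, with hypothetical trial counts:

```python
from math import comb

def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """Probability of k or more correct answers out of n trials by pure guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# With only 8 trials, 6 correct happens by chance about 14% of the time...
print(round(p_at_least(6, 8), 3))    # 0.145
# ...while 14 correct out of 16 by chance is well under 1%.
print(round(p_at_least(14, 16), 4))  # 0.0021
```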
I should also point out that, from a PR point of view, his test has serious issues..........
Many people are interested in learning something new - as long as it is both certain and useful.
However, far fewer people are interested in what they see as a chance to be proven wrong, with no significant upside.
And, no, most people do not see the size difference between lossless and lossy files to be a significant benefit.
I should also point out a "business case situation".....
Assume that a current album can be purchased on CD for $15 or as 320k AAC for $10.
Now, for the sake of argument, assume someone were to prove, beyond any doubt, that nobody on Earth could hear the difference.
Do you honestly believe that everyone will continue to purchase AAC files for $10 an album?
If that were to happen then sales of CDs might be discontinued... or they might not.
However, the price of AAC files, now known to be just as good as CDs, would be raised to the same $15 price.
(The price of a CD, or a file download, is set by what the license owner decides they need to charge - and has little to do with production cost.
If a 320k AAC file is perceived as being "just as good as a CD", they're going to expect to be paid the same amount for it.)
How and why we trust others varies from person to person. I'm going to guess that the friends (true friends) you trust completely by default are small in number, and that they have consistently given you reasons to trust them over the years. They have demonstrated that you can trust them - if not on that specific topic, then on many others - making you think they will tell it like it is this time too. My analogy is bad (as usual ^_^) in that respect, because friends are people we already know a lot about.
bigshot, or me, or whoever else is going to be biased - that's not a possibility, it's a certainty. Just like any testing method worth anything is going to introduce some variable that has nothing to do with sitting in a chair, relaxed, enjoying music. We agree on that much. A test is supposed to answer a specific question. That's where many conversations are lost here, because the question seems to change as the discussion goes on, and clearly that shouldn't happen. Instead, a test should have some notion of dependent and independent variables, and we should expect results about those specific variables. The results are statistical anyway, and we can easily lower the degree of confidence we put in the data based on the test itself and what we know or don't know about it.
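That "degree of confidence" can be quantified: for a given number of forced-choice trials, there is a minimum score below which a result is indistinguishable from guessing at a chosen significance level. A small sketch, with the conventional (and here arbitrary) 5% threshold:

```python
from math import comb

def min_correct(n: int, alpha: float = 0.05) -> int:
    """Smallest score k such that getting k or more of n trials right
    by pure guessing has probability below alpha."""
    for k in range(n + 1):
        tail = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
        if tail < alpha:
            return k
    return n + 1  # unreachable for any alpha > 0

for n in (10, 16, 25):
    print(f"{n} trials: need at least {min_correct(n)} correct")
```

More trials shift the required hit rate closer to 50%, which is why a longer test supports a stronger conclusion than a short one with the same percentage score.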
bigshot's test does not try to test audibility under the best possible conditions, so you're more likely to fail his test than one specifically designed to be easy to pass. I don't think he tried to say otherwise or misled people about that. Now, his interpretation of the results is that if you don't notice something with the different files playing one after the other, you will have even less chance of noticing something wrong while casually listening to an album. And I tend to agree with that. I also tend to agree with him asking those who claim to be able to hear the difference to demonstrate that they can, because, as always, a legion of people failing isn't going to be as conclusive as a few guys showing that they can indeed pass a blind test. If bigshot's test is too hard, other methods are available. And if no controlled test allows anyone to pass, then I think it becomes important to ask: why would anybody believe that he can hear the difference? Because a lot of the assumptions here are not based on confirmed audibility, but instead on the assumption that if something is objectively changed, then the possibility that it will have an audible impact remains. And it's not an irrational thought, but it's also not a fact-based conclusion, and it shouldn't try to pass for one, IMO.
TBH, what annoys me is how readily everybody will accept the validity of a test when it's something super obvious that agrees with what they feel (I've never seen anybody contest the results of a blind test where one file was 15 dB louder than the other), and how everything must somehow automatically be full of unconfirmed flaws and stuff we forgot to integrate into the test as soon as the results don't agree with what our guts tell us. I'm not going to argue that our guts are always wrong, or that any blind test is perfectly set up without flaws, but statistically, how often will the blind test give the less accurate result? Even considering only the most amateurish blind tests, I'm convinced we wouldn't come close to 50/50. To me it's obvious that what really motivates us to look for more flaws in blind tests that don't agree with us is our ego, not the desire for truth.