bfreedma
Perhaps you have a short memory......
There was a time when "nobody could tell the difference between a cylinder recording and a live performer".
Then people insisted that vinyl "was so close to perfect that there was no point in looking for improvement".
Then we were told that most people couldn't tell "is it live or is it Memorex" (referring to cassettes).
Then there was a time when "most people were sure that 128k MP3 files were audibly perfect".
(Note that the developers of the MP3 compression process never claimed anything more than that "most listeners" wouldn't notice a difference with "most music".)
I agree.... there's nothing to suggest that the methodology itself is flawed.
However, there really have NOT been "comprehensive widespread tests".
(It seems reasonable to suggest that, at least for now, no single group has both the resources and the inclination to perform those tests.)
I should also note something about human nature - which is that we learn and evolve in our ability to recognize things.
In one very early test, an audience was unable to tell the difference between a live performer and a cylinder recording.
HOWEVER, it is important to note that the audience who participated in that test had no experience whatsoever with recorded music... having only ever experienced live performances.
To them, that poor quality recording was "the closest thing they'd ever heard to a live performance - other than a live performance".
A modern audience would have been quick to notice the surface noise, ticks and pops, and distortion of the cylinder recording as "obvious hints that it was a record".
In short, we have LEARNED that ticks, pops, and hiss are artifacts often associated with mechanical recordings like vinyl records.
This strongly suggests an interesting avenue of research.
After doing careful tests to determine whether listeners can detect differences between lossless and lossy compressed files (using a particular level and sort of compression), we should take one group of listeners and "teach them the differences".
This would be accomplished by allowing them to listen to both versions of several different files - while pointing out the differences that exist "so they know what to listen for".
("Here's what those two files look like on an oscilloscope. Do you see the differences? Do you hear a difference that seems to correlate with the difference you see?")
We should then re-run the test, this time double blind, to find out whether our "taught" group has in fact LEARNED to better notice and recognize the differences - that is, whether they have actually become more accurate at distinguishing lossy files from lossless ones, or not. (A rough sketch of how such a comparison could be scored follows below.)
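Just to make that proposal concrete, here's a rough sketch (in Python) of how each listener's ABX run could be scored, and how a "taught" group could be compared to an "untaught" one using a simple binomial test against chance. The detect probabilities, trial counts, and function names are purely hypothetical assumptions for illustration - they are not data or results from any actual test.

```python
# Hypothetical sketch of the "train, then re-test" ABX idea described above.
# All numbers are made-up assumptions, not measurements from any real study.
import math
import random

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the chance of scoring at least `correct`
    out of `trials` by pure guessing (p = 0.5 per trial)."""
    return sum(math.comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

def run_abx_session(detect_probability: float, trials: int = 20) -> int:
    """Simulate one listener's ABX session.  `detect_probability` is the
    assumed chance they genuinely hear the lossy artifact on a given trial;
    when they don't hear it, they guess."""
    correct = 0
    for _ in range(trials):
        if random.random() < detect_probability:
            correct += 1      # artifact heard: trial answered correctly
        elif random.random() < 0.5:
            correct += 1      # artifact missed: coin-flip guess happens to be right
    return correct

# Compare an untrained listener against one who was "taught what to listen for".
random.seed(1)
trials = 20
untrained = run_abx_session(detect_probability=0.10, trials=trials)
trained = run_abx_session(detect_probability=0.40, trials=trials)

for label, correct in (("untrained", untrained), ("trained", trained)):
    print(f"{label}: {correct}/{trials} correct, p = {abx_p_value(correct, trials):.3f}")
```

If training genuinely helps, the "taught" group's scores should clear the chance threshold (small p-value) far more often than the untrained group's across many listeners - which is exactly what the re-run double-blind test would be checking for.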
We aren't born knowing how to tell a counterfeit painting from an original - doing so is a skill that we learn - and that some people have a particular aptitude for while others don't.
And, for those of us who lack that skill, the differences noticed by skilled experts are often invisible or very difficult to detect until they are pointed out to us.
Why would we assume that the ability to recognize the small differences caused by lossy compression shouldn't have a similar characteristic?
So many words and so little actual refutation. Wax cylinders, alleged testing from the 1920s with no references, and 70s advertising slogans - seriously?
Still waiting for you to present actual evidence rather than blindly lobbing grenades in the hopes of actually hitting something. It’s almost as if you have a financial stake in avoiding the data available from existing testing...