Originally Posted by nick_charles
A while back I did a series of crude tests on analog audio cables. I am considering a set of tests on Toslink optical cables and am soliciting opinions on my protocol.
Western Digital HDTV optical output or
Marantz CC4300 CD player optical output
Edirol UA-1EX USB card with optical (Mini-Toslink) digital in
- Freebies and current cheapies
- Monoprice basic
- Monoprice premium
- Two $50 - $60 Toslink cables (I'll buy these)
Toslink to Mini-Toslink adaptor
Lenovo Y710 Ideapad 4GB RAM, Windows Vista 32 bit
Audacity 1.2.4 recording software
CD or WAV files
With each cable: same process.
Record a 1-minute segment from the same track(s) as a digital recording at 16/44.1 at digital max recording level. Record a second sample of the same minute, then trim and align so that samples 1 and 2 are the same length and time-aligned. Repeat for samples 3 through 10.
Analyze the frequency response of all 10 samples with a 1024- or 2048-point FFT. Import the data into Excel and average the results.
Plot the differences between each cable's data points.
Choose the sample from each set with the *least* deviation from the mean, use these as DBT samples, and post them here for members to DBT. Suggestions?
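The averaging/comparison step above can also be done in a few lines of numpy instead of importing FFT exports into Excel by hand. A rough sketch only, assuming the ten takes are already trimmed, time-aligned, and equal length; the synthetic test signals are just stand-ins for real captures, and the function name is illustrative:

```python
import numpy as np

def avg_spectrum(samples, n_fft=2048):
    """Average the magnitude spectra (in dB) of several aligned takes.

    samples: list of equal-length 1-D float arrays (the 10 recordings).
    Returns the mean magnitude spectrum, length n_fft // 2 + 1.
    """
    specs = []
    win = np.hanning(n_fft)
    for x in samples:
        # Window the first n_fft points, roughly what a spectrum analyzer does
        mag = np.abs(np.fft.rfft(x[:n_fft] * win))
        specs.append(20 * np.log10(mag + 1e-12))  # epsilon avoids log(0)
    return np.mean(specs, axis=0)

# Synthetic stand-ins for two cables' captures: the same 1 kHz tone
# plus a little uncorrelated noise per take (hypothetical data)
rng = np.random.default_rng(0)
fs = 44100
t = np.arange(fs) / fs
takes_a = [np.sin(2 * np.pi * 1000 * t) + 1e-4 * rng.standard_normal(fs)
           for _ in range(10)]
takes_b = [np.sin(2 * np.pi * 1000 * t) + 1e-4 * rng.standard_normal(fs)
           for _ in range(10)]

# "Plot the differences" reduces to subtracting the two averaged spectra
diff = avg_spectrum(takes_a) - avg_spectrum(takes_b)
print(float(np.max(np.abs(diff))))
```

With identical signals the difference curve sits at the noise floor; any cable-induced level difference would show up as a consistent offset instead.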
Since you asked for suggestions, here is my opinion on the proposed test.
I noticed some flaws in your test protocol that will almost certainly guarantee it reveals no difference whatsoever between the tested cables.
First, you are assuming that your test equipment can reveal a difference between those optical cables if such a difference existed.
You make the assumption that jitter is not audible below 1 ns, so you don’t use any of the ultra-low-jitter “nonsense” equipment used by many “audiophools”.
However, if you want to scientifically prove that there is no difference between optical cables, you would have to use test equipment beyond reproach (even if it doesn’t make sense to you).
In my opinion, a sound card such as the Edirol UA-1EX USB (as well as the other two devices you mentioned) does not have low enough jitter to show a significant difference between digital cables.
To make a meaningful test, you would have to use something like the Audio Precision System 2 used by Stereophile, or at the very least a professional sound card known to have a stable clock (Lynx Aurora or Prism).
Stereophile already measured differences between digital cables years ago.
Second, analyzing the frequency response won’t prove or disprove anything in my opinion. It would be much more interesting if you could measure other parameters that can have a big effect on the sound.
Since we don’t listen to the bits on the digital cable but to the analog output of the DAC (where the jitter shows up), here is what I suggest:
a) Choose a clean-measuring, non-upsampling DAC;
b) Select a few budget optical cables as well as reference-class glass optical cables;
c) Measure the analog output of that DAC fed from a low-jitter source (Audio Precision System 2, dCS, Lynx Aurora, Prism ...) through a short reference-class coaxial cable (such as the Stereovox XV2);
d) Measure the analog output of that DAC with each of the optical cables.
And for the tests, it would be nice to measure not only the frequency response but also the impulse response, IMD, jitter...
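To make the jitter measurement concrete: sinusoidal jitter on a DAC clock phase-modulates a pure tone, producing sidebands at the modulation frequency that can be read straight off an FFT of the captured analog output. A toy numpy sketch only, not a measurement of any real DAC; every number here (capture rate, probe tone, jitter amount, modulation frequency) is made up, and the jitter is set deliberately large so the sideband is easy to see:

```python
import numpy as np

fs = 96000          # capture rate of the measurement ADC (assumed)
f0 = 11025.0        # J-test-style probe tone
fj = 229.0          # jitter (modulation) frequency, illustrative
jitter_s = 2e-9     # 2 ns peak sinusoidal jitter, deliberately huge

n = 1 << 16
t = np.arange(n) / fs
# A timing error dt(t) on the clock becomes a phase error 2*pi*f0*dt(t)
dt = jitter_s * np.sin(2 * np.pi * fj * t)
x = np.sin(2 * np.pi * f0 * (t + dt))

# Blackman window keeps carrier leakage well below the tiny sidebands
win = np.blackman(n)
spec = 20 * np.log10(np.abs(np.fft.rfft(x * win)) + 1e-15)
spec -= spec.max()  # normalize to the carrier: levels are now in dBc

freqs = np.fft.rfftfreq(n, 1.0 / fs)
side = int(np.argmin(np.abs(freqs - (f0 + fj))))
print(float(spec[side]))  # upper sideband level, dB relative to carrier
```

For small phase deviation the sideband sits near 20*log10(pi*f0*jitter) dBc, so even 2 ns of jitter on an 11 kHz tone lands around -83 dBc, which is why a high-resolution capture chain matters for this kind of test.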
If you do this test and find nothing significantly different, then I will believe that all optical cables should sound the same (be it a $1 plastic cable or a $1000 super-high-end glass cable).
However, since I know you pretty well, you are probably going to argue that even if such a measurable difference existed, it would be below the threshold of human hearing, and you are probably going to link to a few research papers (as you have done countless times).
The sad thing is that those research papers were not conducted in a proper environment to support such definitive claims. Thankfully, some scientists are starting to apply a real scientific, investigative method to this audibility-threshold problem. If you read the link below, you will see that when listening tests are conducted with proper equipment, human sensitivity to temporal resolution is far greater than scientists previously assumed.
http://www.physics.sc.edu/kunchur/pa...rge-Foster.pdf
I understand that it is sometimes necessary to make assumptions in scientific research; however, when you start making too many assumptions, there is a risk of getting skewed results.
So of course, if the only difference between two cables is 0.1 dB in the frequency domain, it would be inaudible to humans. However, by measuring only the frequency response, you are not measuring the temporal response, and you are assuming that it doesn’t matter.
It is really saddening that people who call themselves objectivists seem in fact to be the most subjectivist of all. They live by a certain set of defined parameters and don’t question themselves. Throughout history, scientists have been wrong on many subjects, and newer theories have replaced older ones. I just hope that one day more “objectivists” will try to be more open-minded and will try to understand why so many people find differences between cables, instead of just repeating the same things over and over. In my personal opinion, the “objectivists” are the ones living in a placebo world, because they try to believe what a very limited set of measurements tells them despite what their senses tell them.