Head-Fi.org › Forums › Equipment Forums › Sound Science › A proposed optical digital cable test

A proposed optical digital cable test - Page 3

post #31 of 138
Quote:
Originally Posted by Shark_Jump View Post
How about splitting the light signal and testing two identical lengths of different toslink cables simultaneously.
One would need to be careful about how one does this. Distributing a signal down two conductors of different length could, depending on how/whether they're terminated, result in much decreased signal integrity. At least, this is the case with electrical conductors. It might be the same for optical conductors.
post #32 of 138
Quote:
Originally Posted by slim.a View Post
The sad thing is that those research papers were not conducted in a proper environment to make such definitive claims. Gladly, there are some scientists who are starting to apply a real scientific and investigative method to this audibility threshold problem. If you read the links below, you will see that when listening tests are conducted with the proper equipment, the human sensitivity to temporal resolution is far greater than scientists had previously assumed.
http://www.physics.sc.edu/kunchur/pa...rge-Foster.pdf
These papers are really interesting but are not really conclusive, because music is VERY different from a 7 kHz square wave from a signal generator reproduced by a high-end tweeter connected to a scientific-grade amplifier in a highly damped room. Change the setup to a musical signal and we're in a totally different ballpark. While our brain can be impressively accurate with a simple non-musical signal, a complex piece of music is a much bigger burden to compute. All sorts of psychological factors might come into play, even in a DBT. Still, these papers are a great find.

Quote:
Originally Posted by slim.a View Post
I just hope that one day more “objectivists” will try to be more open minded and will try to understand why so many people find differences between cables instead of just repeating the same things over and over.
No DBT=no evidence. Those differences might or might not be placebo. A real objectivist doesn't discard differences as impossible to hear. A real objectivist just wants to know for sure. Subjective vague impressions based on a sighted listening test just isn't up to the task.


nick_charles, I suggest skipping everything else and going straight to the DBT.
post #33 of 138
Quote:
Originally Posted by cer View Post
nick_charles, I suggest skipping everything else and going straight to the DBT.
using a top-range professional soundcard such as an RME/Lynx, and an external DAC running the DIR9001, in an acoustically controlled room... or better, high-end headphones?

running the test on a soundblaster in a living room would be utterly pointless.
post #34 of 138
Skip the measurements? But that's what the experiment is for, not just the DBT. It would be nice if he could be supplied with more sensitive testing equipment, but what if someone sends nick a dud and then claims he damaged it, just to make a mess of everything? Better to make do with what one has than to trust strangers with expensive equipment.
post #35 of 138
Quote:
Originally Posted by nick_charles View Post
I have not mentioned jitter at all in this thread, until now.
Well, since you are going to compare properly functioning digital cables, wouldn't jitter be the main differentiating factor? (even if you didn't mention the word jitter)

Quote:
Originally Posted by nick_charles View Post
The Edirol merely acts as a pass-through taking a digital signal and passing it through as a USB stream. Unless you happen to know the jitter measurements for the Edirol you are speculating. Likewise my other components you do not know the jitter measurements.
Saying that the Edirol merely passes data from S/PDIF to USB is not a good justification for me. We all know that different USB-to-S/PDIF converters have different jitter measurements. Why wouldn't that be the case for S/PDIF to USB? Is it another assumption?

As you might know, people who claim to hear differences between digital cables also claim to hear differences between different digital converters.
The Edirol is an entry-level USB sound card. As far as I can see, it doesn't take any measures to minimize or lower jitter: I don't see any fancy power filtering, and it doesn't have a rock-stable low-jitter clock.

So, could you honestly say that it is a false assumption that the Edirol probably has much higher jitter than an Audio Precision 2 system (or Lynx, dCS, ...)?
In my opinion, it would be a plain miracle if the Edirol had low jitter.

Also, since you seem to like the scientific approach: wouldn't it be wiser to use a soundcard that will avoid arousing suspicion and controversy?

Quote:
Originally Posted by nick_charles View Post
Any deviation in the frequency response between cables will show that there is a difference between cables, jitter shows up in sidebands or in random noise, both will affect the FR as will other added distortions.
Where did you learn that jitter has an effect on the frequency response? Is it a 100% correlation?

For the sake of good measurement, wouldn't it be a better approach to measure more relevant parameters, perhaps related to the time-domain performance?

Quote:
Originally Posted by nick_charles View Post
I have a very good measuring Entech 203.2, but this involves an extra D/A and then A/D set of steps which means that any variability in the A/D step could mask differences. I do not have the kit to measure other parameters.
If you don't have the tools to measure some relevant parameters, how useful is the measurement?
Is it just another measurement to say that all the cables should perform/sound similarly? How objective is that?

Quote:
Originally Posted by nick_charles View Post
If you read my first post again I was going to make samples from each cable available for others to test thru listening not assume that differences were inaudible, do not put words in my mouth.
You have stated many times that jitter is not audible below 1ns (Is that false?), so I assume that you have based many purchase decisions on that assumption.
So by recording different cables with your equipment, you are assuming it is transparent enough to show a difference if such a difference exists.
My point is that those recordings will only mislead people. They will predictably contain mostly jitter/distortion/or whatever from your recording chain, and not from the cable you are trying to test.

As I suggested earlier, you should use equipment a few orders of magnitude better than what is necessary to measure the difference between those cables.

Quote:
Originally Posted by nick_charles View Post
I will read those papers and get back to you...
I hope they will bring you useful information. Those papers show that in order to test the human hearing threshold, scientists should use equipment beyond reproach.
If they had used a poor CD player and poor speakers to generate the 7 kHz square wave, they would have ended up with very different results.

The same applies to jitter measurements. Those 1 ns threshold figures were not obtained in a best-case scenario. I am pretty convinced that if those jitter audibility tests were conducted in a better environment, we would have very different results.
post #36 of 138
Quote:
Originally Posted by cer View Post
These papers are really interesting but are not really conclusive, because music is VERY different from a 7 kHz square wave from a signal generator reproduced by a high-end tweeter connected to a scientific-grade amplifier in a highly damped room. Change the setup to a musical signal and we're in a totally different ballpark. While our brain can be impressively accurate with a simple non-musical signal, a complex piece of music is a much bigger burden to compute. All sorts of psychological factors might come into play, even in a DBT. Still, these papers are a great find.
Cer,

The goal was not to listen to the 7 kHz tone for the sake of listening to a square wave. The goal was to test the human sensitivity to temporal resolution.

For example, when people say they hear a difference between 44.1 kHz and 96 kHz sampling, it is not because they can hear anything above 20 kHz. If you measure the frequency response over 20 Hz-20 kHz, you could get the same result. However, if you measure other parameters such as transient/impulse response (or phase...), you will find differences.

So while you seem to think that those tests are inconclusive because music is more complex than a square wave, I say that it is the opposite.
Up until now, many measurements/tests were done with single-tone sine waves, and people generally look at the 1 kHz THD spectrum or the FR graph. However, most of those were largely uncorrelated with the listening experience of many people.
The reason is pretty simple: those graphs/measurements only captured the frequency-domain performance and didn't account for time-domain performance.
By using a square wave, we are adding a little bit of complexity. And we realize that the temporal resolution of human hearing is far more important than we suspected.

Music, as you said, is much more complex than a square wave. If anything, those tests prove that a lot more care has to be taken when doing A/B tests with music. We cannot just conduct A/B tests between two components (DACs, digital cables, ...) on poor equipment and say that there are no differences.


Quote:
Originally Posted by cer View Post
No DBT=no evidence. Those differences might or might not be placebo. A real objectivist doesn't discard differences as impossible to hear. A real objectivist just wants to know for sure. Subjective vague impressions based on a sighted listening test just isn't up to the task.
In my opinion, a proper and serious DBT is hard to achieve. You will need to assemble a listening system beyond reproach in order to have significant results. When I am talking about a nice system, I am referring to something like this, even if it is overkill.

If done on a less-than-perfect system, with complex music, the differences might not be apparent immediately.
Whether you play a Beethoven symphony on iPod earbuds or on a $500,000 speaker system, you will still recognize it. It is just much, much harder to reconstruct the event in your head with the iPod than with a high-end speaker system.
Sometimes, if you have listened to a song hundreds of times on resolving gear and then switch to a lesser one, it takes a long time to start noticing what is missing. Our brain has a good ability to reconstruct less-than-perfect audio. That is how we can recognize a voice over a crappy telephone, for example.

That is to say that, while I understand the scientific appeal of a DBT, I don't take for granted any DBT I come across. Perfection is a rare thing, and I highly doubt that all of those DBT tests are perfect.

I am not saying that all subjective testing has to be taken seriously. There are many mistakes made. However, I believe that on a resolving system, and under certain conditions, people can most certainly hear differences between transports, DACs, jitter, ...
post #37 of 138
Thread Starter 
Quote:
Originally Posted by slim.a View Post
Well, since you are going to compare properly functioning digital cables, wouldn't jitter be the main differentiating factor? (even if you didn't mention the word jitter)
I make no such assumption.


Quote:
Saying that the Edirol merely passes data from S/PDIF to USB is not a good justification for me. We all know that different USB-to-S/PDIF converters have different jitter measurements. Why wouldn't that be the case for S/PDIF to USB? Is it another assumption?
The Edirol takes S/PDIF and converts it to USB; it packages up the information into frames, not the other way around.


Quote:
As you might know, people who claim to hear differences between digital cables also claim to hear differences between different digital converters.
"Claim" being the operative word; proven in serious tests is different.


Quote:
The Edirol is an entry-level USB sound card. As far as I can see, it doesn't take any measures to minimize or lower jitter: I don't see any fancy power filtering, and it doesn't have a rock-stable low-jitter clock.
See above



Quote:
So, could you honestly say that it is a false assumption that the Edirol probably has much higher jitter than an Audio Precision 2 system (or Lynx, dCS, ...)?
In my opinion, it would be a plain miracle if the Edirol had low jitter.
It is an assumption, the kind you wanted me to avoid making?


Quote:
Where did you learn that jitter has an effect on the frequency response? Is it a 100% correlation?
Benjamin and Gannon (1998); it shows graphically the distortion sidebands caused by sinusoidal jitter. Jitter amplitude is positively correlated with the amplitude of the distortion sidebands. It is not a 1:1 linear relationship, but more jitter = more distortion: a 10x increase in RMS jitter = a 20 dB increase in distortion.
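That sideband relationship is easy to sanity-check numerically. The sketch below is my own illustration, not taken from the paper; the tone frequency, jitter frequency, and sample rate are arbitrary choices. It samples a sine at sinusoidally jittered instants and reads off the sideband that appears at f0 + fj:

```python
import numpy as np

fs = 48000          # sample rate (arbitrary choice)
f0 = 10000.0        # test tone
fj = 1000.0         # jitter modulation frequency
N = 4800            # 100 ms -> f0, fj and f0+fj all land on exact FFT bins

def sideband_level(J):
    """Sample a sine at sinusoidally jittered instants and return the
    level (dB relative to the carrier) of the sideband at f0 + fj."""
    n = np.arange(N)
    t = n / fs + J * np.sin(2 * np.pi * fj * n / fs)   # jittered clock
    x = np.sin(2 * np.pi * f0 * t)
    X = np.abs(np.fft.rfft(x))
    bin_of = lambda f: int(round(f * N / fs))
    return 20 * np.log10(X[bin_of(f0 + fj)] / X[bin_of(f0)])

a = sideband_level(1e-9)     # 1 ns peak jitter
b = sideband_level(10e-9)    # 10 ns peak jitter
print(round(b - a, 1))       # 20.0 -> 10x the jitter raises sidebands by 20 dB
```

Note the sideband itself is tiny at these jitter levels (around -90 dB relative to the carrier for 1 ns here), which is consistent with why the audibility debate centers on the threshold rather than the existence of the effect.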


Quote:
You have stated many times that jitter is not audible below 1ns (Is that false?), so I assume that you have based many purchase decisions on that assumption.
What I have done is cite work that indicates as much, along with other blind tests elsewhere. When I have said jitter below n is inaudible, please take it as shorthand for such; if I have been sloppy and not cited studies, I apologise. The only item I own which has measured jitter figures (afaik) is my Entech, which I got for $50 (shipped) from eBay; none of my other digital devices (streamers/DAC/CD players) have published jitter measurements.


Quote:
I hope they will bring you useful information. Those papers show that in order to test the human hearing treshold, scientists should use equipment beyond reproach.
The first one is interesting, but what it is really showing is twofold: the effect of filters removing content, and incidentally the change between a near-perfect square wave and something that is halfway between a sine wave and a square wave, i.e. removing harmonics. I've DBT'd low-pass filters myself; this is trivial and easily testable within the bounds of 16/44.1 systems.

But even if what they say is correct, I want to reread it carefully first, I am not sure how that helps, since I use a CD player which cannot produce a perfect 7K square wave and in fact *nobody* has a CD player or SACD player or DVD-A player that can produce a perfect 7K square wave so any ability under these extreme conditions is moot.
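For what it's worth, the bandlimiting argument can be made concrete: a square wave contains only odd harmonics, so below the 22.05 kHz Nyquist limit of a 44.1 kHz system a 7 kHz "square" wave can carry just two partials (7 and 21 kHz). A small sketch, my own and not from the papers:

```python
import numpy as np

f0 = 7000.0   # fundamental of the square wave used in the papers

def partials(f0, fs):
    """Odd harmonics of a square wave that fit below Nyquist (fs/2)."""
    return [k * f0 for k in range(1, 100, 2) if k * f0 < fs / 2]

def bandlimited_square(f0, fs, t):
    """Fourier series of a square wave, truncated at Nyquist."""
    return (4 / np.pi) * sum(np.sin(2 * np.pi * f * t) / (f / f0)
                             for f in partials(f0, fs))

print(partials(f0, 44100))   # [7000.0, 21000.0] -- only two partials at 44.1 kHz
print(partials(f0, 96000))   # [7000.0, 21000.0, 35000.0] -- one more at 96 kHz
t = np.linspace(0, 2 / f0, 512, endpoint=False)
cd = bandlimited_square(f0, 44100, t)   # much closer to a sine than a square
```

So a 16/44.1 player reproduces a 7 kHz "square" as fundamental plus a single 21 kHz partial, which supports nick's point that no CD player outputs a perfect square wave at that frequency.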

Quote:
The same applies to jitter measurements. Those 1 ns threshold figures were not obtained in a best-case scenario. I am pretty convinced that if those jitter audibility tests were conducted in a better environment, we would have very different results.
Benjamin and Gannon used a low jitter source and added controlled amounts of jitter, the incipient jitter in the source was several orders of magnitude below difference thresholds.
post #38 of 138
Quote:
Originally Posted by nick_charles View Post
The Edirol takes S/PDIF and converts it to USB; it packages up the information into frames, not the other way around.
So if I understand correctly what you said (I am no expert in the spdif to usb conversion), the Edirol is going to record the spdif data regardless of the timing (jitter)?
Do you mean that it will act as a "slave" to the cd player you are going to use? Do you imply that there is nothing in between the spdif connector and the usb input of the computer that will affect the signal? Not a single PLL, clock, ...?

Quote:
Originally Posted by nick_charles View Post
"Claim" being the operative word; proven in serious tests is different.
I am not saying that claims are truth. However, if people claim things, we should at least take that into account. If those claims are not addressed, the test will be inconclusive if we just assume that they are not important.

Quote:
Originally Posted by nick_charles View Post
It is an assumption, the kind you wanted me to avoid making?
Well, as I said, sometimes it is necessary to make assumptions in scientific tests.

You made the assumption that the Edirol was good enough.
I made the assumption that the Edirol was not good enough.

If you are wrong (I am not saying you are), the test will be skewed and we will most certainly have bad results.
If I am wrong, we would have used overkill equipment, but the result would still be true.

Don't you honestly see the difference between the two assumptions?
One of them (assuming the Edirol is good enough) could jeopardize the integrity of the test, while the second (using better equipment with a known stable clock) would, in the worst case, merely be overkill.

Since we don't know, I prefer to make safe assumptions and use equipment which is known to be orders of magnitude better than what it is supposed to measure. I just don't see the validity of using unknown equipment (which at first sight appears to be of poor quality) and just hoping for the best.
If my approach is not scientific, please prove it to me.

Quote:
Originally Posted by nick_charles View Post
Benjamin and Gannon (1998); it shows graphically the distortion sidebands caused by sinusoidal jitter. Jitter amplitude is positively correlated with the amplitude of the distortion sidebands. It is not a 1:1 linear relationship, but more jitter = more distortion: a 10x increase in RMS jitter = a 20 dB increase in distortion.
If I understand your first post correctly, you were suggesting measuring the FR (frequency response) and not the different types of distortion induced by jitter.

Did Benjamin and Gannon use randomly generated jitter embedded in the data, or did they use different clocks with different jitter values?
As far as I know, random jitter is not as harmful as the real jitter generated by poor equipment. (There is a device called JISCO that introduces random jitter to improve the listening experience. I am not saying it is the way to go, but at the very least it seems that not everybody thinks random jitter is as bad as the other kind.)
Also, low-frequency jitter is harder for most digital receivers and PLLs to reject than high-frequency jitter.

Did they make sure that the jitter they were using replicates real world jitter (clock phase noise of a poor clock, ...)?

Did they pay much attention to the quality of the test system? Did they just assume that all cables sound the same, all amplifiers sound the same, ...? Or did they take the same rigorous approach as the one in the article I gave the link to?

Quote:
Originally Posted by nick_charles View Post
But even if what they say is correct, I want to reread it carefully first, I am not sure how that helps, since I use a CD player which cannot produce a perfect 7K square wave and in fact *nobody* has a CD player or SACD player or DVD-A player that can produce a perfect 7K square wave so any ability under these extreme conditions is moot.
Well that is the whole point. Most CD players have a hard time producing anything complex in the high frequencies.
A good 24/96 DAC, preferably using an R2R/multibit DAC chip with a good digital filter, will go a long way toward making that high-frequency content more listenable.
I don't think it is pointless to talk about it simply because there are many poor CD players and DACs.
Here you can see how a good digital filter + R2R DAC chip can replicate an 8 kHz square wave and sine wave. You will see that the slow roll-off filter allows a much better step response than the fast roll-off, although with the slow roll-off you lose 1 dB at 20 kHz compared with the fast roll-off.
Of course, if you use the most common (and cheaper) sigma-delta DAC chips for the same measurement, you will get very poor results.

So no, I don't believe that the 7 kHz test is just an isolated scientific exercise. With high-resolution content (24/96) and high-quality DACs, the test totally makes sense to me.

Quote:
Originally Posted by nick_charles View Post
Benjamin and Gannon used a low jitter source and added controlled amounts of jitter, the incipient jitter in the source was several orders of magnitude below difference thresholds.
How low-jitter was their source? Did they use an Esoteric/dCS/Accuphase transport? Or perhaps an Audio Precision 2? Or did they just assume that it was "low enough"?
What was the rest of the equipment? Did they use all the audiophool nonsense (power filtration, vibration control, high-quality cables) that is known to have no effect on the listening experience? Did they use a zero-negative-feedback Vitus Audio amplifier, or did they assume it wasn't necessary since the class-A/B amp they had on hand had the same (or perhaps better, to them) measurements?
post #39 of 138
Thread Starter 
Quote:
Originally Posted by slim.a View Post
So if I understand correctly what you said (I am no expert in the spdif to usb conversion), the Edirol is going to record the spdif data regardless of the timing (jitter)?
Do you mean that it will act as a "slave" to the cd player you are going to use? Do you imply that there is nothing in between the spdif connector and the usb input of the computer that will affect the signal? Not a single PLL, clock, ...?
The signal goes from the transport via S/PDIF to the digital input as a 16/44.1 stream - no upsampling is enabled on the Edirol. The Edirol then just packages the data, i.e. it takes the data and puts it into frames with USB headers and then pumps it out to the PC, which disassembles the packets and extracts the digital audio data. Not being a USB expert, I defer to others to comment on any timing implications of the bundling/extraction procedure. My aim was to have as transparent a pass-through as possible.


Quote:
If I understand your first post correctly, you were suggesting measuring the FR (frequency response) and not the different types of distortion induced by jitter.
Yes, I aim to look at the FRs, but we know that jitter alters the FR by adding sidebands, so the once-pristine response will have spikes if jitter is high enough. Trying another way, you could take a "perfect" digital rip to WAV and use that as a reference, then compare it against the transmitted signal WAV - any grotesque or even subtle differences caused by the transmission will be obvious.
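The rip-vs-capture comparison nick describes is essentially a null test. A minimal sketch of the idea, assuming the two streams are already sample-aligned and using a synthetic tone in place of real rips (`residual_db` is a name of my own invention):

```python
import numpy as np

def residual_db(reference, captured):
    """Null test: subtract the captured stream from the reference and
    report the residual level in dB relative to the reference.
    Returns -inf when the two streams are bit-identical."""
    ref = np.asarray(reference, dtype=np.float64)
    cap = np.asarray(captured, dtype=np.float64)
    n = min(len(ref), len(cap))          # compare the overlapping part
    diff = ref[:n] - cap[:n]
    rms_ref = np.sqrt(np.mean(ref[:n] ** 2))
    rms_diff = np.sqrt(np.mean(diff ** 2))
    if rms_diff == 0:
        return float("-inf")
    return 20 * np.log10(rms_diff / rms_ref)

# 1 s of a 1 kHz tone standing in for the "perfect" rip
tone = np.sin(2 * np.pi * 1000 * np.arange(44100) / 44100)
print(residual_db(tone, tone.copy()))    # -inf: bit-perfect, nothing left
rng = np.random.default_rng(0)
noisy = tone + 1e-5 * rng.standard_normal(len(tone))
print(residual_db(tone, noisy))          # about -97 dB: residual far below signal
```

In practice the captured file would first need trimming so the first samples line up; any clock drift between the two streams would otherwise dominate the residual.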


Quote:
Did Benjamin and Gannon use randomly generated jitter embedded in the data, or did they use different clocks with different jitter values?
Really, you need to get a copy of this paper - $20 from the AES. They used sinusoidal jitter, i.e. signal-correlated jitter, not random jitter; random jitter is much, much less damaging, as you say (Ashihara et al. used random jitter, and so their figures are much higher... 100s of ns).


Quote:
How low-jitter was their source? Did they use an Esoteric/dCS/Accuphase transport? Or perhaps an Audio Precision 2? Or did they just assume that it was "low enough"?
Their source was measured at 80ps rms - they had good measurement resources and jitter creation resources, this was two chaps from Dolby labs with all their resources, not me and my $70 ADC
post #40 of 138
Quote:
Originally Posted by nick_charles View Post
The signal goes from the transport via S/PDIF to the digital input as a 16/44.1 stream - no upsampling is enabled on the Edirol. The Edirol then just packages the data, i.e. it takes the data and puts it into frames with USB headers and then pumps it out to the PC, which disassembles the packets and extracts the digital audio data. Not being a USB expert, I defer to others to comment on any timing implications of the bundling/extraction procedure. My aim was to have as transparent a pass-through as possible.
As I said earlier, I am no expert in S/PDIF-to-USB conversion, but I suspect that there will be timing issues.
In fact, if there were no timing issues, wouldn't all digital cables/sources measure the same as long as they are bit-perfect?
What would be the relevance of such a test?
But as you said, the opinion of (real) USB experts would be interesting to know.

Quote:
Originally Posted by nick_charles View Post
Yes, I aim to look at the FRs, but we know that jitter alters the FR by adding sidebands, so the once-pristine response will have spikes if jitter is high enough. Trying another way, you could take a "perfect" digital rip to WAV and use that as a reference, then compare it against the transmitted signal WAV - any grotesque or even subtle differences caused by the transmission will be obvious.
Personally, all the jitter measurements I have seen that use the FR were done at the analog output of the DAC. And jitter measurement at the digital level was usually done by looking at the eye patterns of the digital streams (like the USB converters here: Stereophile: Bel Canto USB Link 24/96 USB-S/PDIF converter).

Personally, I don't see the relevance of measuring the FR at the digital outputs.
If the test shows that a difference exists, it doesn't necessarily mean there is an audible difference. The DAC might actually reject that specific jitter (through its digital section).
If there is no difference, then we could also wonder whether the FR test was sensitive enough to show the existence of a difference.
Either way, the test would be inconclusive (in my opinion) from a measurement point of view.


Quote:
Originally Posted by nick_charles View Post
Really, you need to get a copy of this paper - $20 from the AES. They used sinusoidal jitter, i.e. signal-correlated jitter, not random jitter; random jitter is much, much less damaging, as you say (Ashihara et al. used random jitter, and so their figures are much higher... 100s of ns).
I will probably have to buy it. I have only read excerpts of it.
However, I can't help but question the fact that they used sinusoidal jitter. Is it the same kind of jitter as the one generated by jittery clocks and poorly designed circuits (on the transport side)? Does it include a pattern similar to the clock phase noise found in poor transports and line transmitters?


Quote:
Originally Posted by nick_charles View Post
Their source was measured at 80ps rms - they had good measurement resources and jitter creation resources, this was two chaps from Dolby labs with all their resources, not me and my $70 ADC
Maybe they should have used a 1 ps source? ... I am just kidding. I should buy the paper and get more info on the equipment they used.
post #41 of 138
Quote:
Originally Posted by Uncle Erik View Post
Unlike the credulous believers, fanatics and paid shills, I welcome testing and will accept the results, whatever they may be. Shame that can't be said for those whose income depends on cables or those who blindly accept marketing claims as true.

If test results show a positive difference, I'll buy cables that make an improvement. I fully support Nick's tests, as should anyone else.
I guess if the shoe fits, wear it.
post #42 of 138
Quote:
Originally Posted by Guidostrunk View Post
The anti-cable trolls will be arriving shortly to tell you it's all a waste of time, and to tell you that regardless of your findings you will hear no differences. Good luck.
but it's us anti-cable trolls who always insist on scientific, provable measurements!

so please, cut the lying and the straw-man crap, and let nick do his (very welcome to us anti-cable trolls) experiments, k?
post #43 of 138
Quote:
Originally Posted by b0dhi View Post
One would need to be careful about how one does this. Distributing a signal down two conductors of different length could, depending on how/whether they're terminated, result in much decreased signal integrity. At least, this is the case with electrical conductors. It might be the same for optical conductors.
Sure, but you can check whether this is an issue as part of setting up the experiment, just like you should for the other components of the analysis setup.
post #44 of 138
I applaud your efforts and I'll certainly be watching this space.
post #45 of 138
Quote:
Originally Posted by Shark_Jump View Post
Sure, but you can check whether this is an issue as part of setting up the experiment, just like you should for the other components of the analysis setup.
Yes, this is what I was advocating.