Don't get why "Audiophile" USB Cable would improve sound quality
Aug 1, 2011 at 8:11 PM Post #616 of 835

The big wild guess is that jitter will cause audible distortion

 
The jitter matter has been beaten to death.
 
I haven't claimed anything; I'm mostly asking what experiments you've conducted in order to reach those very interesting conclusions of yours...and it would appear that you're mostly guessing at how a USB audio controller and a DAC chip work together, and haven't done any homework whatsoever. We all know that USB is just a bunch of 0's and 1's, don't we?
 
Aug 1, 2011 at 8:15 PM Post #617 of 835


Quote:
 
The jitter matter has been beaten to death.
 
I haven't claimed anything; I'm mostly asking what experiments you've conducted in order to reach those very interesting conclusions of yours...and it would appear that you're mostly guessing at how a USB audio controller and a DAC chip work together, and haven't done any homework whatsoever. We all know that USB is just a bunch of 0's and 1's, don't we?


...USB is just a bunch of 1s and 0s?  There's no in-between here...
 
 
Aug 1, 2011 at 8:18 PM Post #618 of 835
> The jitter matter has been beaten to death.
 
Exactly. Which is why I'm trying to talk about the discretely provable matter: whether a digital signal can go in one end of a cable as waveform A and emerge from the other as a completely different waveform B with boosted bass. Sorry, this doesn't happen, and it's not hard to prove formally, even taking signal errors and jitter into account.
 
> I haven't claimed anything, I'm mostly asking what experiments you've made in order to make those statements...
 
You don't understand. A digital waveform is numeric data, which either arrives intact or arrives with errors or missing data. In both cases (errors and missing data), it's easy to prove mathematically that this will NOT produce, for example, a bass boost, under any circumstances (barring a probability that likely rivals the chances of spontaneous formation of living cells from raw materials).
 
Again, it's not particularly difficult (maybe tedious) to prove mathematically that errors introduced into a digital stream won't magically cause the bass to be boosted. It just doesn't work that way, period! To say otherwise is like saying two times two is not four.
 
If you really want, I could write out a formal proof that random errors and skipped samples and even jitter, no matter how severely introduced into a digital stream, will not produce a bass boost effect. But then again, it will make no sense unless you've taken at least undergraduate (college) level discrete mathematics, probability theory, and calculus. Those of you who fully understand what I'm talking about here will know how to prove it yourselves anyway, and most likely are on the non-believer side to begin with.
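If a full proof is overkill, a quick simulation makes the same point. Below is a rough sketch (assuming Python with NumPy; the tone frequency, error count, and frequency cutoffs are arbitrary choices of mine). Bit errors are impulses, and impulse energy lands flat across the whole band instead of piling up in the bass:

import numpy as np

fs = 44100
t = np.arange(fs) / fs                               # one second of audio
clean = np.round(16384 * np.sin(2 * np.pi * 1000 * t)).astype(np.int16)

rng = np.random.default_rng(0)
corrupt = clean.copy()
hit = rng.choice(fs, size=100, replace=False)        # corrupt 100 samples
bit = rng.integers(0, 15, size=100)                  # a random (non-sign) bit in each
corrupt[hit] ^= (1 << bit).astype(np.int16)          # flip that bit

err = (corrupt.astype(np.float64) - clean) / 32768.0 # the damage by itself
mag = np.abs(np.fft.rfft(err))
freq = np.fft.rfftfreq(fs, 1.0 / fs)

print("mean error magnitude below 200 Hz:", mag[freq < 200].mean())
print("mean error magnitude above 5 kHz: ", mag[freq > 5000].mean())
# The two numbers come out about the same: broadband noise, not a bass boost.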
 
Aug 1, 2011 at 9:09 PM Post #620 of 835
 
Quote:
> The jitter matter has been beaten to death.
 
Exactly. Which is why I'm trying to talk about the discretely provable matter: whether a digital signal can go in one end of a cable as waveform A and emerge from the other as a completely different waveform B with boosted bass. Sorry, this doesn't happen, and it's not hard to prove formally, even taking signal errors and jitter into account.
 
> I haven't claimed anything, I'm mostly asking what experiments you've made in order to make those statements...
 
You don't understand. A digital waveform is numeric data, which either arrives intact or arrives with errors or missing data. In both cases (errors and missing data), it's easy to prove mathematically that this will NOT produce, for example, a bass boost, under any circumstances (barring a probability that likely rivals the chances of spontaneous formation of living cells from raw materials).
 
Again, it's not particularly difficult (maybe tedious) to prove mathematically that errors introduced into a digital stream won't magically cause the bass to be boosted. It just doesn't work that way, period! To say otherwise is like saying two times two is not four.
 
If you really want, I could write out a formal proof that random errors and skipped samples and even jitter, no matter how severely introduced into a digital stream, will not produce a bass boost effect. But then again, it will make no sense unless you've taken at least undergraduate (college) level discrete mathematics, probability theory, and calculus. Those of you who fully understand what I'm talking about here will know how to prove it yourselves anyway, and most likely are on the non-believer side to begin with.

 
Isn't jitter an error in the timing of the information in the signal?  =\
 
Aug 1, 2011 at 9:20 PM Post #621 of 835
As I understand it, yes. If you're lazy and don't implement a data buffer + local clock for your DAC, then the jitter would be applied directly to the DAC output, and therefore the analog waveform output could contain this jitter. If the clock is considerably higher than the range of human hearing though, jitter should not be an issue, but again I'm not an expert here.
 
Anyway, at most, jitter might cause high-frequency (sub-clock timing) distortion to the output waveform. Never anything that would even come close to affecting the bass, for example.
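To put a rough number on that (a toy model, not real hardware - assuming Python with NumPy, and a made-up 1 ns RMS of clock jitter): the error a timing wobble produces is proportional to the signal's slew rate, so it scales with signal frequency. The same jitter that measurably smears a 10 kHz tone barely grazes a 50 Hz one:

import numpy as np

fs = 44100
n = np.arange(fs)                                # one second of sample indices
rng = np.random.default_rng(1)
wobble = rng.normal(0.0, 1e-9, size=fs)          # 1 ns RMS sampling-clock jitter

def tone(f_hz, timing_err):
    # a unit sine evaluated at (possibly jittered) conversion instants
    return np.sin(2 * np.pi * f_hz * (n / fs + timing_err))

for f_hz in (50.0, 10000.0):
    err = tone(f_hz, wobble) - tone(f_hz, 0.0)
    print(f"{f_hz:7.0f} Hz tone: jitter error RMS = {err.std():.2e}")
# The error RMS is roughly 2*pi*f*jitter: it grows with frequency, so jitter
# degrades the treble first; it has no mechanism for boosting the bass.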
 
Aug 1, 2011 at 9:25 PM Post #622 of 835
 
Quote:
Any jitter that there may be won't be introduced by the cables but by the USB controllers

Actually, I am trying to get you to justify that statement, because it simply isn't true. Let me refer you to the Wikipedia page (because it is convenient, not because it is an absolute reference) here. In the paragraph "Sampling jitter" it states that "less than a nanosecond of jitter can reduce the effective bit resolution of a converter with a Nyquist frequency of 22 kHz to 14 bits". I think it means 16 bits, not 14, as it is referring to CD playback. Where does <1 ns come from? Well, when you convert a digital signal to analog, the thing to bear in mind is this: the right signal at the wrong time is the wrong signal. So, if a digital data stream contains 16-bit data, the data is specified with a resolution of one part in 65,536. It is straightforward to appreciate that the sample then has to be converted from the digital domain to the analog domain at a point in time which is accurate to within 1/65,536th of the sampling interval. The sampling frequency is 44.1 kHz, so the sampling interval is 22.7 microseconds. Therefore, if the DAC timing signals are off by more than 1/65,536th of 22.7 microseconds (346 picoseconds), the analog signal will be wrong. This is what we mean by jitter. If the digital data stream is to be systematically accurate to within 346 ps, and we assume that the USB controller at the computer is perfect and presents a source signal with no jitter, this implies that the bandwidth of the signal delivery system - the USB cable - needs to be close to 1 GHz. That is non-trivial. I think it is not unreasonable to postulate that different USB cables can have different transmission characteristics in the GHz frequency range, and thereby can have - in principle - an audible effect on Red Book music.
 
Now let's move on to something like 24/192. If we apply the same rationale, the jitter requirement becomes 310 femtoseconds. A femtosecond is a millionth of a billionth of a second. And the bandwidth required to systematically guarantee jitter-free transmission is a thousand GHz. USB cables do NOT transmit those frequencies. Do you still insist that a USB cable CANNOT POSSIBLY impact the sound of the resultant analog signal? Yes or no?
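For anyone who wants to check my arithmetic, both numbers above fall out of a few lines of Python (shown purely for illustration; whether "timing accurate to one LSB of the sample interval" is the right criterion for audibility is, of course, the thing being argued about here):

# Jitter budget under the "accurate to one LSB of the sample interval" criterion.
for bits, fs in ((16, 44100), (24, 192000)):
    interval_s = 1.0 / fs                  # sampling interval in seconds
    budget_s = interval_s / 2**bits        # 1/2^bits of that interval
    print(f"{bits}-bit / {fs} Hz: interval = {interval_s * 1e6:.2f} us, "
          f"jitter budget = {budget_s * 1e12:.2f} ps")
# 16-bit / 44100 Hz:  interval = 22.68 us, jitter budget = 346.01 ps
# 24-bit / 192000 Hz: interval = 5.21 us,  jitter budget = 0.31 ps (310 fs)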
 
Aug 1, 2011 at 9:27 PM Post #623 of 835
 
Quote:
 
Anyway, at most, jitter might cause high-frequency (sub-clock timing) distortion to the output waveform. Never anything that would even come close to affecting the bass, for example.


... and yet when most folks try out an async transport or DAC, you almost always see that the first remark is something like "better defined bass." This would include myself. In addition to turning the volume down.
Why is that?
 
 
Aug 1, 2011 at 9:44 PM Post #624 of 835


Quote:
As I understand it, yes. If you're lazy and don't implement a data buffer + local clock for your DAC, then the jitter would be applied directly to the DAC output, and therefore the analog waveform output could contain this jitter. If the clock is considerably higher than the range of human hearing though, jitter should not be an issue, but again I'm not an expert here.
 
Anyway, at most, jitter might cause high-frequency (sub-clock timing) distortion to the output waveform. Never anything that would even come close to affecting the bass, for example.


Probably why the TAS1020B has its own data buffer built-in.  The human-hearing range has absolutely nothing to do with the clock.
 
 
Aug 1, 2011 at 9:51 PM Post #625 of 835
Quote:
... and yet when most folks try out an async transport or DAC, you almost always see that the first remark is something like "better defined bass." This would include myself. In addition to turning the volume down.
Why is that?


Take your pick.
 
A few of my favorites for this situation are bandwagon effect, expectation bias, hindsight bias, and suggestibility.
 
There is absolutely nothing wrong with being fooled by any of these.
 
Aug 1, 2011 at 9:54 PM Post #626 of 835


Quote:
 
Actually, I am trying to get you to justify that statement, because it simply isn't true. Let me refer you to the Wikipedia page (because it is convenient, not because it is an absolute reference) here. In the paragraph "Sampling jitter" it states that "less than a nanosecond of jitter can reduce the effective bit resolution of a converter with a Nyquist frequency of 22 kHz to 14 bits". I think it means 16 bits, not 14, as it is referring to CD playback. Where does <1 ns come from? Well, when you convert a digital signal to analog, the thing to bear in mind is this: the right signal at the wrong time is the wrong signal. So, if a digital data stream contains 16-bit data, the data is specified with a resolution of one part in 65,536. It is straightforward to appreciate that the sample then has to be converted from the digital domain to the analog domain at a point in time which is accurate to within 1/65,536th of the sampling interval. The sampling frequency is 44.1 kHz, so the sampling interval is 22.7 microseconds. Therefore, if the DAC timing signals are off by more than 1/65,536th of 22.7 microseconds (346 picoseconds), the analog signal will be wrong. This is what we mean by jitter. If the digital data stream is to be systematically accurate to within 346 ps, and we assume that the USB controller at the computer is perfect and presents a source signal with no jitter, this implies that the bandwidth of the signal delivery system - the USB cable - needs to be close to 1 GHz. That is non-trivial. I think it is not unreasonable to postulate that different USB cables can have different transmission characteristics in the GHz frequency range, and thereby can have - in principle - an audible effect on Red Book music.
 
Now let's move on to something like 24/192. If we apply the same rationale, the jitter requirement becomes 310 femtoseconds. A femtosecond is a millionth of a billionth of a second. And the bandwidth required to systematically guarantee jitter-free transmission is a thousand GHz. USB cables do NOT transmit those frequencies. Do you still insist that a USB cable CANNOT POSSIBLY impact the sound of the resultant analog signal? Yes or no?



I will not insist that a USB cable cannot possibly impact the analog signal - however, I will insist that, until I see actual measurements, the differences will not be audible, and maybe not even (easily) measurable. Not talking about jitter alone here - just the jitter added by a crappy (i.e. a normal, non-audiophile, apparently) USB cable.
 
Anyway, you can give all the theoretical math you want, but things like that rarely translate into real life so smoothly when we're dealing with the physical world...and you do have to realize that if you're using a 24/192 signal, even if you cut half of that signal out, most people aren't gonna be able to tell the difference, since that's nearing the edge of inaudibility. And if the odd bit here and there is corrupted, sure, the analog signal will be changed...but it will be practically impossible to measure the change.


Quote:
 
... and yet when most folks try out an async transport or DAC, you almost always see that the first remark is something like "better defined bass." This would include myself. In addition to turning the volume down.
Why is that?
 
...and we're talking about different people using different hardware here? If they have to turn the volume down, it means the devices aren't volume-matched, and therefore the listening test is irrelevant. If you're using two different DACs, one asynchronous and one standard, run-of-the-mill one...well, they're different pieces of hardware.
 
I cannot, however, really comment on this unless I know exactly what you're talking about.  Context has meaning, you know.
 
Aug 1, 2011 at 10:26 PM Post #627 of 835


Quote:
Take your pick.
 
A few of my favorites for this situation are bandwagon effect, expectation bias, hindsight bias, and suggestibility.
 
There is absolutely nothing wrong with being fooled by any of these.



Heh. I don't want to believe. I want things to be very simple and all digital stuff to be the same, no matter what. Yet there are noticeable differences. I'm talking about listening to things for weeks or months, not just A/B A/B testing. It takes my ears and brain weeks sometimes to acclimate to anything different.
 
I believe no one. I listen without thinking or analyzing.
 
 
Aug 1, 2011 at 10:28 PM Post #628 of 835


Quote:
 
...and we're talking about different people using different hardware here? If they have to turn the volume down, it means the devices aren't volume-matched, and therefore the listening test is irrelevant. If you're using two different DACs, one asynchronous and one standard, run-of-the-mill one...well, they're different pieces of hardware.
 
I cannot, however, really comment on this unless I know exactly what you're talking about.  Context has meaning, you know.


Every single part of the system is the same, right down to the song. The only change is the USB-to-S/PDIF converter. This past June made it one year of comparing.
 
 
 
Aug 1, 2011 at 10:31 PM Post #629 of 835
From a certain blog people don't like:
______________________________________________________________________________________________________________________
 
USB POWER IS NOISY: Many of the reasonably priced USB DACs are USB powered. The USB power bus suffers from lots of noise. The very wires that deliver the DC power run right next to high-speed, noisy data signals bundled into the same cable. It's also power shared with other USB devices on the system, which may even include things like RF Bluetooth or WiFi "dongles" that add RF noise.

You may find reviews that talk about a USB DAC being noisy on one PC and quiet on another, or noisy only at certain times when the PC is doing certain things. This isn't uncommon, as the amount of noise a PC generates can vary a lot. And any RF devices--like WiFi or Bluetooth--send their signals intermittently, so they may generate what seems like random noise.

A good USB DAC may have its own power supply that attempts to filter out the noise or isolate the DAC from the noisy USB power, but this is hard to do cheaply. And it's hard to do well in a physically small product, where the noisy power circuitry is only a few millimeters away from the sensitive audio circuitry. But it can be done.
________________________________________________________________________________________________________________________
 
i.e., via the USB power and controller, not the cable.
 
Y'know, all electrical cables follow set standards for a reason.
 
Aug 1, 2011 at 10:37 PM Post #630 of 835
Quote:
Heh. I don't want to believe. I want things to be very simple and all digital stuff to be the same, no matter what. Yet there are noticeable differences. I'm talking about listening to things for weeks or months, not just A/B A/B testing. It takes my ears and brain weeks sometimes to acclimate to anything different.
 
I believe no one. I listen without thinking or analyzing.


See, now I'm picking up on some "bias blind spot".
 
Just because you don't want to believe someone, or think you don't believe someone, doesn't mean you don't believe someone.
 
That sentence is a beautiful mess of negatives.
 
