We are not talking about corrupted data or better error correction. If this were happening, the music would be unlistenable. In most decent systems, no data is lost in transmission; it gets through to the DAC intact.
Correct, although generally “no data is lost in transmission” even in cheap systems.
However, there is, superimposed on the data signal itself, noise consisting of jitter and phase noise.
There is NOT phase noise and jitter noise superimposed on the data signal itself! Firstly, you cannot have both phase noise and jitter noise, because they are just different terms for the same thing: phase noise is the term used for jitter in the radio engineering world, so it’s jitter OR phase noise, not both. Secondly:
While this does not affect the integrity of the data, it most certainly does affect the conversion accuracy of the DAC by subtly altering the triggering threshold of the D to A process.
Jitter can indeed affect the conversion accuracy, and this is where the noise (and distortion) is created; it is the result of that conversion inaccuracy. I.e. jitter noise is the result of the timing inaccuracy in the sample rate clock signal at the point of conversion.
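As a rough illustration of that mechanism, here’s a minimal sketch (mine, not from the original discussion) using the standard worst-case approximation for a full-scale sine: a clock error of tj seconds produces an amplitude error of roughly A × 2πf × tj, giving a jitter-limited SNR of −20·log10(2πf·tj).

```python
import math

# Worst-case SNR limit from sampling-clock jitter on a full-scale sine:
# sampling A*sin(2*pi*f*t) at t + dt instead of t gives an error of
# roughly A*2*pi*f*dt, so the SNR limit is -20*log10(2*pi*f*tj).
def jitter_snr_db(f_hz, tj_secs):
    return -20 * math.log10(2 * math.pi * f_hz * tj_secs)

# 145 picoseconds of jitter on a 20 kHz full-scale tone:
print(f"{jitter_snr_db(20_000, 145e-12):.1f} dB")  # ~94.8 dB
```

On those assumed figures, even the averaged 145 picoseconds measured in the mid-1990s survey discussed below would leave the jitter noise roughly 95 dB below a full-scale 20 kHz tone.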
This is where noise reduction along the chain pays audible dividends.
This is where your assertions really start to diverge completely from the actual facts, and for more than just one reason!
Firstly, as mentioned, jitter noise is the result of jitter during the conversion process. So obviously, there is no jitter noise before the point of conversion, and therefore there CANNOT be any benefit to “noise reduction along the chain” before the DAC, because there isn’t any jitter noise to remove! There is, however, a benefit to reducing jitter before the data hits the DAC chip, because jitter in the bit rate (NOT sample rate) can affect data integrity. All DACs have always done this, which is why, as you state, jitter “does not affect the integrity of the data”!
Secondly, the audibility of jitter noise/distortion has been known about since before digital audio was first implemented commercially 70 years ago; the human thresholds were studied and even published publicly half a century ago, then studied further up to about 25 years ago, by which point there was nothing left to research. With music, the human threshold is around 200-500 nanoseconds. Using a test signal specifically designed to maximise audibility, the threshold is 3 nanoseconds. A handful of music recordings were found that contained properties similar to that test signal, and the lowest threshold anyone managed to attain with them was 27 nanoseconds.

Compare that with a survey of 50 cheap DACs in the mid-1990s (cheap OEM CD and DVD drives and players, early HD-ready TVs, digital TV cable boxes, etc.), which averaged around 145 picoseconds of jitter; that’s around 200 times below the audibility threshold for that handful of music recordings and over 1,000 times below the threshold for all the millions of other music recordings. Therefore, if there really are “audible dividends” (to reducing jitter), then you MUST be claiming that modern audiophile DACs are performing jitter reduction over a thousand times worse than the cheapest consumer DA converters from the mid-1990s!
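For anyone who wants to check the arithmetic behind those “200 times” and “over 1,000 times” figures, a quick sketch (the variable names are mine; the values are the ones quoted above):

```python
# Ratios of the audibility thresholds to the measured jitter.
measured_jitter = 145e-12   # ~145 picoseconds, mid-1990s consumer survey
lowest_threshold = 27e-9    # lowest threshold found, atypical recordings
music_threshold = 200e-9    # lower bound of the threshold for ordinary music

print(lowest_threshold / measured_jitter)  # ~186, i.e. roughly 200 times
print(music_threshold / measured_jitter)   # ~1379, i.e. over 1,000 times
```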
It’s just another example of a common audiophile marketing tactic: take something that was an issue many decades ago, omit the fact that nearly 3 decades ago it wasn’t only more than solved but more than solved even in the cheapest consumer products, and then sell the audiophile community an expensive solution to this non-issue! And in this particular instance it’s even funnier/worse, because an external master clock does not actually reduce (sample rate) jitter; in the best designs it makes no difference and in other designs it increases the jitter noise/distortion!
Things like a fibre link to break noise transmission work very well, as do ethernet filters like DXE and reclockers like the EtherREGEN.
Again, there is no jitter noise to “break”; jitter noise only exists in the signal AFTER conversion to analogue! There is other transmission noise/interference, but that is massively reduced by common-mode rejection and then eliminated entirely at the point of data buffering, which occurs in every switch, router and DAC. So filters reducing transmission noise that is going to be entirely eliminated anyway obviously CANNOT make any difference whatsoever. Again, if it were possible for this transmission noise to accumulate/propagate beyond each router or switch, the internet simply would never work, as your data has to pass through numerous switches/routers and be transmitted through hundreds/thousands of kilometres of cable!!
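To make the buffering point concrete, here’s a toy model (entirely hypothetical, not any real switch’s or DAC’s firmware) of why arrival-timing noise cannot survive a buffer that is emptied by the receiver’s own local clock:

```python
import random

# Toy model: samples arrive over a jittery link, sit in a FIFO, and are
# clocked out at a fixed rate by the receiver's own clock. The playout
# timing depends only on the local clock, so the arrival jitter never
# reaches the conversion step.
random.seed(1)
fifo = []

arrival = 0.0
for sample in range(8):
    arrival += 1.0 + random.uniform(-0.2, 0.2)  # jittery arrival spacing
    fifo.append((arrival, sample))

playout = fifo[3][0]  # start once a few samples are buffered
for arrival, sample in fifo:
    print(f"arrived {arrival:6.3f}  played {playout:6.3f}  sample {sample}")
    playout += 1.0    # exactly regular: the input jitter is gone
```

However irregular the arrival column is, the playout column advances in exactly equal steps, because the only clock that matters at that point is the local one.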
They represent a substantial system upgrade with a significant change in presentation focus, detail and slam.
Yep, that’s why music and audio downloaded over the internet never has even the slightest hint of “focus, detail or slam”; it’s travelled down thousands of kilometres of cable, which is never audiophile cable and wasn’t “burnt in”!
G