What is clear is that this is a complicated space with many variables, including the jitter performance/characteristics of:
- the external clock and the DAC (the inherent jitter characteristics of their oscillators, plus the RFI and power supply noise that may degrade them)
- the full transmission line that connects one oscillator to the other (including the clock's internal circuitry from sine-to-square-wave conversion, if applicable, through the clock's BNC jack and the cable's BNC plug, the cable itself with its particular internal reflectivity due to its conductor and dielectric materials along with its shielding characteristics, a second BNC junction with plugs and jacks likely made of different materials in an imperfect mechanical and conductive union, and finally the DAC's clock-receiving circuit and/or square-to-sine conversion)
- and last but far from least, the performance of the PLL or clock synthesiser in the DAC. Anecdotally at least, the K2 adds hardly any jitter when using a master clock signal to synthesise the clock frequencies the DAC needs. Historically, it seems, this synthesis step did add a lot of jitter, leading to generalisations that external clocks can only add jitter; that may have been truer then, but it may now be outdated.
I should also note there are different types of jitter, some of which may be more audible than others due to their nature and/or their levels relative to the audio signal.

With phase noise, the important thing to keep in mind is that it's a decibel measure relative to the carrier signal, at a specified frequency offset; essentially it measures a volume offset at a frequency offset, an X,Y coordinate if you will. The carrier signal in the case of a master clock is the 10 MHz clock signal at a fixed maximum amplitude. In the case of music, it is each frequency of a note, i.e. hundreds or thousands of carrier signals across the frequency spectrum of the music (though apparently psychoacoustic models collapse these into a smaller number of frequency buckets that we perceive distinctly).

Because the volume level of music varies continuously and is rarely at maximum, some folk (including Amir of you know where) suggest that the difference between phase noise at, say, -70 dBc and much lower levels (e.g. -120 dBc) is very unlikely to be audible, because music is only rarely at or near the peak levels that would give sufficient dynamic range/SNR for differences in jitter level to be fully displayed. That has some intuitive appeal to me, but I'm not sure I quite accept it; I feel it is yet another simplification, as it doesn't align with my experience with multiple clocks, DACs and DDCs, or with that of many others who find worthwhile or even dramatic audible improvements when comparing two expensive clocks with sub -115 dBc/Hz phase noise only a few dBc apart. Having said that, this logic may still contain some truths that are relevant here and help explain the variability in audible difference.
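To make the "X,Y coordinate" idea concrete, here is a rough back-of-envelope sketch of the standard conversion from a phase noise curve (dBc/Hz at a few offsets) to a single RMS jitter number, via trapezoidal integration of the single-sideband noise. The 10 MHz carrier and the offset/level points are purely illustrative assumptions, not measurements of any real clock, and real-world audibility depends on far more than this one figure.

```python
import math

def rms_jitter_seconds(carrier_hz, offsets_hz, dbc_per_hz):
    """Estimate RMS jitter from single-sideband phase noise L(f),
    given as dBc/Hz at a handful of offset frequencies.

    Integrates L(f) on the linear power scale (trapezoid rule),
    doubles the single-sideband result, then converts radians RMS
    to seconds by dividing by 2*pi*f_carrier.
    """
    # dBc/Hz -> linear power ratio per Hz
    lin = [10 ** (level / 10) for level in dbc_per_hz]
    # Trapezoidal integral over the offset band
    area = 0.0
    for i in range(len(offsets_hz) - 1):
        df = offsets_hz[i + 1] - offsets_hz[i]
        area += 0.5 * (lin[i] + lin[i + 1]) * df
    phi_rms = math.sqrt(2 * area)            # phase jitter in radians RMS
    return phi_rms / (2 * math.pi * carrier_hz)

# Hypothetical 10 MHz master clock with a roughly -115 dBc/Hz floor
offsets = [10, 100, 1_000, 10_000, 100_000]   # Hz from the carrier
noise   = [-100, -110, -115, -115, -115]      # dBc/Hz at those offsets
print(f"{rms_jitter_seconds(10e6, offsets, noise) * 1e12:.1f} ps RMS")
```

With these made-up numbers the result lands in the low tens of picoseconds, which illustrates why a few dBc at one offset barely moves the total: the integral is dominated by whichever region of the curve carries the most noise power.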
My point - and sorry it took me a while to get to it - is that DACs and external clocks form a pretty damn complex system, where hard generalisations are going to be wrong as often as they're right, but where some rules of thumb may help us users stack the odds in our favour of getting a good result.
I pity the poor new R26 owners landing on this thread looking for advice on how to set it up…