landroni
Regarding the electrical noise thing, I get the concept and have dealt with this issue in the past but don't currently have those issues to any discernible degree in my systems. Even so, I've given thought to buying a Wyrd to see if it could make an audible difference; for $100, why not? Hmmm, maybe some Nordost cables would help as well?!
There are good reasons why a Wyrd would help, and some even suggest putting two Wyrds in the data path (but no more). That said, I for one would ditch USB on principle: people can spend outrageous amounts just to "fix" USB and get performance similar to other transports, whereas e.g. optical (with a high-quality glass cable like Lifatec) won't suffer from electrical noise at all. So anything from a USB -> SPDIF converter to an RPi + Digi+ or even streamers like a Sonicorbiter SE or Aries Mini would likely do better than a generic computer with USB out. In particular, RPi + Digi+ is a very low-cost solution to experiment with, if you've got the time to figure it out and set it up.
While many sceptics will heartily argue that the source can't matter (unless something egregious is going on), there are engineers around Head-Fi who will disagree, for instance:
http://www.head-fi.org/t/603219/schiit-gungnir-dac/3675#post_12606468
Regarding timing/jitter errors, isn't this what 'Asynchronous' USB is supposed to fix -- assuming, of course, it's well implemented?
Now on error correction... this actually sounds like there could be differences that might be discernible. That said, it should also be easily measurable (bits in = bits out) between brands/models of PCs/components. If that were the case, then it would make sense to choose a specific PC brand/component.
While SPDIF and USB Audio do no ECC, my understanding is that given the voltages used (0 V and 5 V) you need some really serious interference for bits to get flipped. If we're not playing silly buggers with cable length (e.g. USB cables over 3 m, which is outside spec anyway -- so disabuse yourself of any notion that "bits are bits" always holds), we can assume that the bits do arrive intact at the DAC. Which leaves us with two possible confounding factors: jitter and electrical noise. I agree that it should be relatively easy to test bits in = bits out, but I would expect that very few have actually run any such tests on their systems.
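If your gear lets you capture the digital output (a loopback recording, or any recorder with a SPDIF input), the bits in = bits out test is straightforward: just check whether the source's PCM bytes appear verbatim inside the capture. A minimal sketch using only Python's stdlib wave module -- the "source" and "capture" here are synthesized in memory purely as stand-ins for real files, and real captures would need matching sample format on both sides:

```python
import io
import wave

def pcm_frames(f):
    """Return the raw PCM payload of a WAV file or file-like object."""
    with wave.open(f, "rb") as w:
        return w.readframes(w.getnframes())

def write_wav(f, data, rate=44100):
    """Write raw 16-bit stereo PCM bytes as a WAV stream."""
    with wave.open(f, "wb") as w:
        w.setnchannels(2)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(data)

# stand-in PCM data; a real test would read the actual source file
src = bytes(range(256)) * 4

buf_src, buf_cap = io.BytesIO(), io.BytesIO()
write_wav(buf_src, src)
# a capture typically has leading/trailing silence around the source data
write_wav(buf_cap, b"\x00" * 1024 + src + b"\x00" * 512)
buf_src.seek(0)
buf_cap.seek(0)

# substring search handles the unknown capture offset
ok = pcm_frames(buf_src) in pcm_frames(buf_cap)
print("bit-perfect:", ok)  # -> bit-perfect: True
```

If the source bytes show up unmodified somewhere in the capture, the transfer was bit-perfect; any flipped bit makes the search fail.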
That said, my understanding is that many DACs have bi-directional communication and error correction built in to ensure that uncorrupted, bit-perfect data transfer does occur. Some manufacturers tout this capability, including Schiit...
"Advanced Clock Management, USB Input Standard -- Bifrost uses a sophisticated master clock management system to deliver bit-perfect data to the DAC—unlike many DACs that use asynchronous sample rate conversion (ASRC), which destroys the original samples. And, with our acclaimed Gen 2 USB input now standard, you’re ready for computer, tablet, and even phone-based sources."
I'm not sure what a DAC could do in terms of ECC if the protocol used (SPDIF or USB Audio) doesn't do ECC... Short of buffering, that is; but then you can't do gapless playback, and buffering introduces its own source of noise within the DAC itself. I can't say I've heard of many (any?) DACs doing their own buffering, so I may be wrong.
As for the Schiit marketing, I would think Jason got overly enthusiastic in his marketing copy and ended up conflating two things. When talking about clocks, we're not talking about bit-perfect; we're really talking about the timing accuracy of the bit stream (i.e. jitter). And when Schiit touts "bit-perfect" MB DACs, they don't mean the path from source to DAC (all DACs are supposed to get that part right given SPDIF or USB Audio); they're talking about what happens to the bits between DAC in and analog out.
The ONLY bit-perfect DACs are NOS (non-oversampling) DACs: they use the exact same bits that were input to generate the analog waves.
Most oversampling DACs (i.e. most dedicated DACs out there) will use some form of approximation, most often based on Parks-McClellan, which will discard the original samples and replace them with approximations. Now we're talking about really, really good approximations here, but at the end of the day an approximation is still what it is. One of the reasons oversampling was originally introduced was to work around the brickwall filtering requirements for 44.1 kHz material. Jason Stoddard says:
"Digital filters are where bit-perfect transfer usually dies. Digital filters upsample the incoming data to higher data rates (typically 8x) to reduce the need for analog brickwall filtering. This is handy, but again—what it outputs is a mathematical approximation."
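You can see the distinction numerically. Below is a rough numpy sketch (not any manufacturer's actual filter): 8x zero-stuffing followed by two windowed-sinc interpolation filters. The first has its zero crossings pinned exactly on multiples of 8 (a so-called Nyquist filter, which is roughly what a sample-retaining filter amounts to), so the original samples pass through untouched; the second has its cutoff pulled slightly down to leave a transition band, as practical filters typically do, and visibly replaces the originals with approximations:

```python
import numpy as np

L = 8                                     # oversampling ratio
n = np.arange(-32 * L, 32 * L + 1)        # symmetric, odd-length tap indices
win = np.hanning(len(n))

# "Nyquist" interpolator: sinc zeros land exactly on multiples of L,
# so every original sample passes through unchanged
h_keep = np.sinc(n / L) * win

# typical interpolator: cutoff pulled to 0.9x for a transition band;
# its zeros no longer line up, so originals become approximations
h_typ = 0.9 * np.sinc(0.9 * n / L) * win

def oversample(x, h):
    up = np.zeros(len(x) * L)
    up[::L] = x                           # zero-stuff by L
    d = (len(h) - 1) // 2                 # compensate linear-phase group delay
    return np.convolve(up, h)[d:d + len(up)]

x = np.random.default_rng(0).standard_normal(512)   # stand-in for PCM samples
devs = {}
for name, h in (("sample-retaining", h_keep), ("typical", h_typ)):
    y = oversample(x, h)
    # how far the oversampled output drifts from the input at original positions
    devs[name] = np.max(np.abs(y[::L] - x))
    print(f"{name}: max deviation from original samples = {devs[name]:.2e}")
```

The first filter's deviation is at floating-point noise level; the second alters every original sample, even though both are perfectly respectable lowpass interpolators.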
Lastly you have a few oversampling DACs that claim to preserve the original samples, like Schiit MB, old Thetas (same tech, same designers) and Chord DACs (very different, Delta-Sigma-like tech and definitely not off-the-shelf*); since it's proprietary tech, no one can really tell. Schiit touts a "closed-form digital filter that retains the original samples", which is one of their main selling points and why they won't stop plugging in a "bit-perfect" reference whenever they discuss their MB DACs.
See the Yggy FAQ:
"The math involved in developing the filter and calculating has a closed form solution. It is not an approximation, as all other filters I have studied (most, if not all of them). Therefore, all of the original samples are output. This could be referred to fairly as bit perfect; what comes in goes out."
For me the battle in technological advancement rages between Schiit and Chord, both departing from industry standards and taking very, very different approaches. That's pretty fun to watch. Of course, Mojo aside, Chord practises a comedy pricing scheme: the optional stand for a DAVE (which reportedly slightly bests an Yggdrasil) will cost as much as an Yggdrasil itself... Schiit, on the other hand, is philosophically invested in tapping the value-minded segment of the audiophile market.
* For Chord I'm actually not sure they really do preserve the original samples, and I don't have a reference at hand. Their approach should still, by definition, destroy the original samples; it just takes a different path to getting the correct output voltage for each incoming PCM sample. Rather than using successive approximation, it performs a parallel value conversion and then puts the individual parts back together at filter time. It should be more accurate than pure DS because you should always get the same output for the same input.
Only R2R DACs can do NOS (no digital filtering of any kind), and DS must do filtering for the simple reason that you can't squeeze 16-bit PCM through a 1-bit switch (even "multibit" DS chips simply consist of several 1-bit switches working together, usually 5 or 6). These standard DS D/A chips will generally convert even 1-bit DSD to their intermediate, internal format before converting the bit stream to analogue voltages: very few DACs out there do native playback of 1-bit DSD data, as in using just one very good 1-bit switch. And only R2R DACs can do native playback of PCM data (i.e. NOS).
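To make the 1-bit switch point concrete, here's a toy first-order delta-sigma modulator in numpy (nothing like a production modulator, which would be higher-order with far more aggressive noise shaping): multi-bit PCM goes in, a stream of just +1/-1 comes out, and only after an analogue-style lowpass (here a crude moving average) does the original waveform re-emerge:

```python
import numpy as np

def delta_sigma_1bit(x):
    """First-order delta-sigma: input in [-1, 1] -> bitstream of +/-1."""
    integ, bits = 0.0, np.empty(len(x))
    for i, s in enumerate(x):
        integ += s                        # integrate the input
        b = 1.0 if integ >= 0 else -1.0   # the 1-bit quantiser ("switch")
        bits[i] = b
        integ -= b                        # feed the decision back
    return bits

# heavily oversampled low-frequency tone, standing in for upsampled PCM
t = np.arange(16_384)
x = 0.5 * np.sin(2 * np.pi * t / 2048)
bits = delta_sigma_1bit(x)

# crude "analogue" reconstruction filter: 64-tap moving average
recon = np.convolve(bits, np.ones(64) / 64, mode="same")
err = np.max(np.abs(recon[256:-256] - x[256:-256]))  # skip edge transients
print(f"bitstream values: {sorted(set(bits.tolist()))}; "
      f"max reconstruction error: {err:.3f}")
```

The stream itself contains only two values, yet its short-term average tracks the multi-bit input; that feedback-plus-filtering loop is exactly the "filtering" a DS converter cannot avoid, and why it can't pass PCM through untouched the way an R2R ladder can.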