If Schiit were a purely subjective company, the dilemma of having a product that made stuff sound better, without having any rational explanation as to why, wouldn’t be a dilemma at all. We’d wrap it up in nice flowery language, throw in some pseudo-meaningful charts that showed the difference in power supply noise levels, and call it a day.
If Schiit were a purely objective company, the dilemma might not be a dilemma at all. Because we might have convinced ourselves that, even though there was a difference, there really was no difference, and so why bother making something that didn’t make a difference?
But as a company that uses both objective measurement and subjective listening, it’s not so clear. We could do the pure subjective thing, with words about how you’re transported in space and time to a wonderful world where unicorns dance and crap like that. Sure. We could.
But that isn’t us.
And that isn’t honest. Because, you know what? We’re really talking about small differences here. It might not be important to a lot of people. It can be easily dismissed.
But for other listeners, it might be big enough to be significant.
Not to spend too much time on the "objective" side of this debate, but I'm pretty sure all of the relevant USB packets carry a CRC. In theory, the bits either arrive perfectly or they don't. If a bit gets flipped, the CRC flags it. The odds of random bit errors that also happen to produce a valid CRC are statistically negligible (I haven't checked the math on this, but I feel pretty confident about it). Essentially, there shouldn't be a scenario where the bits pulled from a USB packet and handed off to some chip are wrong.
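(For the curious, here's a back-of-the-napkin sketch - plain Python, not anybody's actual driver code - of how the CRC catches a flipped bit. The payload bytes are made up; the algorithm is the standard USB CRC16.)

```
# Minimal sketch: USB data packets carry a CRC16 (poly x^16 + x^15 + x^2 + 1).
# Standard LSB-first implementation, init 0xFFFF, result complemented.

def usb_crc16(data: bytes) -> int:
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001   # 0xA001 = reflected 0x8005
            else:
                crc >>= 1
    return crc ^ 0xFFFF                      # USB sends the complemented remainder

payload = bytes([0x12, 0x34, 0x56, 0x78])    # pretend audio samples
good_crc = usb_crc16(payload)

corrupted = bytearray(payload)
corrupted[2] ^= 0x04                         # flip one bit "in transit"
bad_crc = usb_crc16(bytes(corrupted))

print(hex(good_crc), hex(bad_crc), good_crc == bad_crc)  # CRCs differ -> error detected
```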
"When the going gets weird, the weird turn pro." - Hunter S. Thompson
I don't know why I put that quote up, just seemed appropriate as long as we're talking Wyrd Schiit.
OK, so how does Wyrd wind up affecting jitter (not much, I understand - Jason, do you have actual figures on how much?) through an async USB input, nearly the entire purpose of which is to keep anything prior to the DAC's clock from affecting jitter? Here's my Wyrd speculation:
First: atubbs is right, this isn't about flipped bits. If it happened, believe me, you'd hear it. Like a scratch on a record - a quick, nasty "tick." We're not talking subtleties in the audio presentation, we're talking "Damn!"
Nope, let's look at the DAC *at* and *after* the clock, because with async USB nothing *before* the clock should matter. First of all, electrical noise *at* the clock may affect the clock's operation, causing jitter. And second, *after* the bits are clocked out of the buffer, what happens? The bitstream is evaluated to determine whether each bit is a 1 or 0. The answer to that depends on whether the signal does or does not exceed a specified value. And how is that value determined? With respect to ground. Ground is the zero point.
So let's say there's noise on ground. Then the signal isn't being compared to an actual zero, but to a value above that, so the signal is seen as just slightly lower level than it actually is. Let's say the amplitude of the wave representing the bitstream is rising. The point at which it reaches the crossover point from 0 to 1 will be delayed ever so slightly - jitter. Let's say the wave is falling. The point at which it reaches the crossover point from 1 to 0 will be accelerated ever so slightly - jitter again.
There's an actual EE (which I definitely am not) named John Swenson, who's written some stuff about this that may actually be cogent (which mine definitely isn't). But at least if I'm understanding John correctly, this may be what Wyrd helps to correct.
Glad as always to be corrected if what I've said turns out to be very, very wrong.