Head-Fi.org › Forums › Equipment Forums › Dedicated Source Components › Why is SPDIF better than USB?

Why is SPDIF better than USB?

post #1 of 121
Thread Starter 

Why is SPDIF better than USB?  Aren't they both digital signals?  Why would one be an improvement over the other?

 

I apologize if this has been discussed (seems like an obvious topic), I just can't find it.

 

I've seen DAC's that have USB and SPDIF inputs, and everyone says the SPDIF input sounds much better, and I've seen USB to SPDIF converters.... so why not just do that inside a USB DAC if it is such an improvement?  And why is it an improvement?  Why would one lossless digital signal sound better than another?

 

Thanks!


Edited by baglunch - 9/14/10 at 4:45pm
post #2 of 121

Lately people seem to feel that a USB-to-S/PDIF converter is better than either plain USB or plain S/PDIF.

 

The reason is that USB can be run asynchronously, which is supposed to be better than non-asynchronous (adaptive) USB or standard S/PDIF. Sound-wise, though, people tend to prefer S/PDIF over non-asynchronous USB.

 

So by that logic the best of both worlds is asynchronous USB converted to S/PDIF.

post #3 of 121
Thread Starter 

....what?  And why?

 

I don't understand what you mean by asynchronous USB vs. non-asynchronous USB, or standard S/PDIF, nor why people would prefer one over the other.

 

All I understand at this point is that many DACs have both a USB input and an SPDIF input, but everyone much prefers the SPDIF input on the same DAC to the USB one.  What's different about the signal itself that makes SPDIF better, or why is it treated better inside the DAC?  Or, why don't DACs convert USB to SPDIF internally without needing a separate converter?  I'm hoping to get as many of those questions answered as possible, for anyone that knows...

post #4 of 121
post #5 of 121
Thread Starter 

Nice link, thanks!

post #6 of 121
Thread Starter 

Looks like the takeaway from that article is that neither is inherently better than the other; it all depends on implementation... but why are they even different from each other?  Aren't they both digital?  Just 1s and 0s?  And being 1s and 0s, shouldn't they be identical to the DAC regardless of what the plug looks like?  Please note I'm not trying to compare USB DACs vs. SPDIF DACs, but only DACs that have both kinds of inputs (like the Gamma 2, and many, many others).  Why would one input sound any different from the other?

post #7 of 121

 

Quote:

Why is SPDIF better than USB?  Aren't they both digital signals?  Why would one be an improvement over the other?

 

 

It isn't. The myth arose in the past due to a misunderstanding and once these ideas get a head start they are difficult to eradicate as there are vested interests at stake.

 

They are both digital signals, so if you were doing a simple copy or file transfer either method would give you bit-perfect copies. However, audio is streamed to a DAC and there is a timing component involved. Apart from the usual error-correction shenanigans, the audio samples must be both written and read very accurately (44,100 times a second in the case of CD quality). Different clocks can be slightly out of time, both internally and with reference to others. This is essentially the notorious and controversial jitter (provided you believe you can hear the effect in the first place).
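To put rough numbers on that timing requirement, here is a back-of-the-envelope sketch (illustrative Python; the 100 ppm figure is an assumed, typical crystal tolerance, not a measurement of any particular device):

```python
# Rough illustration: how tightly timed CD-quality streaming is,
# and how a small clock mismatch accumulates over time.

SAMPLE_RATE = 44_100                      # samples per second (CD quality)
sample_period_ns = 1e9 / SAMPLE_RATE
print(f"one sample period: {sample_period_ns:.0f} ns")       # ~22676 ns

# Assume a consumer crystal that is off by 100 parts per million.
ppm_error = 100
drift_samples_per_hour = SAMPLE_RATE * 3600 * ppm_error / 1e6
print(f"drift after 1 hour: {drift_samples_per_hour:.0f} samples")
```

The point isn't the absolute numbers; it's that two free-running clocks always disagree slightly, so either the source or the DAC has to follow the other's timing.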

 

Audiophile-quality DACs are generally made using the cheapest possible components in a nice-looking box. The USB implementations were suspect due to real or imaginary timing issues because they used early-generation (CD) chips that didn't require the manufacturer to buy or write specialised software. S/PDIF contains timing information based on the source when it is formatted. Therefore some people preferred to buy a new USB > S/PDIF device such as the Hi-Face so they could continue to use their obsolete DAC. This has become something of a sick joke now, as it appears that as soon as the manufacturers felt it safe to do so they started putting an inferior clock in the Hi-Face and hoped no one would notice.

 

However, if you choose to buy a modern pro-am grade audio interface, it will contain a 2nd-generation USB chip with advanced timing under program control. These are slightly more expensive for the manufacturer to buy and implement but will give you at least as good a result, if not better (provided you have the attitude, ears and gear to tell), as a Heath Robinson USB 1.1 + S/PDIF hybrid mash-up.

 

post #8 of 121

 

Originally Posted by baglunch View Post

Why is SPDIF better than USB?  Aren't they both digital signals?  Why would one be an improvement over the other?

[..] 

I've seen DAC's that have USB and SPDIF inputs, and everyone says the SPDIF input sounds much better, and I've seen USB to SPDIF converters.... so why not just do that inside a USB DAC if it is such an improvement?  And why is it an improvement?  Why would one lossless digital signal sound better than another?

 

Many reasons, actually.

 

-Sub-$1K USB DACs mostly boil down to a choice between the crummy-sounding PCM270x USB controllers (16-bit/48 kHz max) and, more recently, the Tenor chip (24-bit/96 kHz max), but both of those chips carry horrid jitter

 

-Until very recently, you couldn't get galvanic isolation over USB... the ADUM4160 chip has come to save the day, and many companies now sell it in dongles. Most serious S/PDIF interfaces provide galvanic isolation on their coax output (using pulse transformers), and Toslink, being a light signal, is fully immune to electrical interference. Proper galvanic isolation ensures that your computer's dirty ground doesn't reach your audio gear.

 

-There are many ways to reclock S/PDIF in order to obtain lower jitter; the WM8804 (50 ps) and CS2000 (75 ps) chips come to mind

 

The very best solution is to slave the transport to the DAC, but the DACs offering this option are uber-pricey... basically the DAC has its own clocking and slaves the computer to it via a discrete clock signal.

 

The shortcomings of:

-USB are jitter and galvanic isolation

-S/PDIF are explained here: http://www.gearslutz.com/board/so-much-gear-so-little-time/172143-spdif-vs-word-clock-question.html

 

"S/PDIF is a horrendously poorly designed interface. This is because it combines the clock and audio coding onto the same signal. The receiver is supposed to recover the clock from this signal as well as extract the audio data. This turns out to be a non-trivial task, and one that almost always leaves the recovered clock contaminated with signal correlated jitter artefacts."
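To make the quoted complaint concrete: S/PDIF carries its clock in the transition pattern of a biphase-mark code. A toy encoder (illustrative Python, ignoring S/PDIF's preambles and subframe framing) shows how every bit cell starts with a transition the receiver can lock onto:

```python
# Toy biphase-mark encoder: the line level transitions at every bit
# boundary (that regular edge is the embedded clock), and a '1' bit
# adds a second transition mid-cell.

def biphase_mark(bits, level=0):
    """Return two half-cell line levels per input bit."""
    out = []
    for b in bits:
        level ^= 1            # transition at every cell boundary (clock)
        out.append(level)
        if b:
            level ^= 1        # extra mid-cell transition encodes a '1'
        out.append(level)
    return out

print(biphase_mark([1, 0, 1, 1]))
```

Because a '1' adds an extra mid-cell transition, the transition density depends on the audio data itself, which is how data-correlated artefacts can leak into the recovered clock.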

 

Some companies sell async USB controllers that supposedly fix the jitter problem, but it's very rare that they also provide galvanic isolation... also, some other companies call BS on the async argument (click on "Design Philosophy"): http://www.centrance.com/products/dacport/

 

"Some manufacturers may lead you to believe that Asynchronous USB transfers are superior to Adaptive USB transfers. This is no more true than saying that you "must" hold the fork in your left hand."


Edited by leeperry - 9/15/10 at 6:54am
post #9 of 121

With the normal disclaimers...

 

What we need here is someone with a DAC that has both inputs and a really fast selector switch, plus two similar PC sources (with wireless disabled) running the same version of FooBar with identical, perfectly time-aligned files, one over USB and one over SPDIF, a hand on the source selector switch, and eyes closed.

 

Guess what....

 

 

I just tried this, repeatedly, over a long period. Admittedly this is not a real blind test, but the transition from one to the other is utterly seamless and they are completely identical to me: same volume level, same L/R balance, same frequency balance, and they stay perfectly aligned over a large number of tracks.

 

 

But someone could do this test much better.

 

Frankly I think they are both functionally identical in my system to my ears and so on...

 

FTR I used two dual-core Pentium machines, both with 4 GB of RAM, both running Vista and FooBar. The DAC/headphone amp is a 2009-model Zero 24/192 with the OP627, and the headphones are Sennheiser HD535 (I have better if that is an issue), but seriously, there is zero difference to me and the switch is as near instantaneous as possible.

 

Just my 2c anecdote


Edited by nick_charles - 9/15/10 at 1:19pm
post #10 of 121
Thread Starter 

Off on a tangent: Why stream at all?  Would a bit of buffering take care of any jitter?  Storage is so cheap and fast nowadays, would it make sense to buffer the next second, 10 seconds, entire song, whatever?  I can see why real-time would be useful to anyone creating or manipulating music, but for listening, I don't see the need for streaming if it creates these complications.  As a listener, I would accept the delay as "warm up" if you like.  If jitter is caused as a result of the pieces arriving at different times (like in Tetris), why not just wait until you have enough usable pieces to avoid it?
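For what it's worth, buffering does solve half of this. A sketch of the distinction (illustrative Python, not any real driver): a FIFO absorbs sloppy arrival timing from the transport, but the DAC must still clock each sample out at exactly 1/44,100 s intervals from its own local oscillator, so conversion-side timing is set by that clock, not by the buffer.

```python
from collections import deque

# A playback FIFO decouples *when samples arrive* from *when they are
# converted*. Arrival jitter disappears into the buffer; output timing
# is still whatever the DAC's own clock provides.

buffer = deque()

def on_usb_packet(samples):
    """Called whenever the transport delivers a burst; timing is sloppy."""
    buffer.extend(samples)

def dac_tick():
    """Called once per sample period by the DAC's local clock."""
    return buffer.popleft() if buffer else 0   # underrun: output silence

on_usb_packet([10, 20, 30])                  # one sloppy burst in...
print(dac_tick(), dac_tick(), dac_tick())    # ...read out at a steady rate
```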

 

Back on topic: So if one is truly not superior to the other, why are there different results within the same DAC that supports both?  Shouldn't the output sound the same?  The DAC I'm currently looking at is the Gamma 2 (by head-fi'ers amb and MisterX) and it supports both USB and SPDIF inputs but everyone greatly prefers the SPDIF inputs.

 

So if I want those SPDIF results, I'd need to get a USB-to-SPDIF dongle, which got me wondering: if the signal is coming from USB to begin with, what does the dongle do that magically makes it this wonderful SPDIF signal, and why can't that just be done in the DAC, using fewer connections and cords?  Everyone has USB on their computers nowadays, whether it's a netbook, laptop, or full-size rig.  No one has SPDIF unless they buy a sound card or dongle specifically for it.  So why not only make USB DACs and do the USB-to-SPDIF conversion inside the DAC?  It can't cost more than the price of the "Musiland Monitor 02 USD (~$120 new)" or the "M2Tech HiFace USB Interface", right?  And that would still keep it well under the $1K figure you mentioned.  Or, if it would cost dramatically more, would it be wrong to simply include one of those in the design of the DAC and put it straight inside the case, hardwired to the rest of the DAC?  Haha, looks like I've strayed back onto the tangent of creating a better DAC rather than just asking about the differences between interfaces.  I have no electrical background or I would try it myself just to see.  Or maybe if I did, I'd understand why you can't. :)

 

Any thoughts on any of this?  If you think you've already answered some of my questions, please chalk it up to my not understanding what you are talking about with "galvanic isolation" or any of the audio chips you mentioned, etc.  I'm sure if this thread gets read by anyone outside the knowledgeable responders thus far, any additional exposition will be greatly appreciated.

 

Update: thanks, nick for the test with your Zero.  I wonder why people like SPDIF on the Gamma 2 more?  Is it placebo?  Groupthink?


Edited by baglunch - 9/15/10 at 2:08pm
post #11 of 121

One technical reason in favour of S/PDIF on the Gamma2 (and numerous other DACs) is that it supports a higher sample rate and bit depth: 192 kHz / 24-bit. The USB receiver chip only goes up to 48 kHz / 16-bit. Now, whether or not the higher rates translate to actual audible improvements is a different question.

 

In the case of the Gamma2, a higher rate USB receiver chip (like the 96KHz Tenor chip) was not used because it is not DIY friendly. Requires special firmware programming or some such. There is also the issue of space constraints.

 

Toslink S/PDIF also has the advantage of providing galvanic isolation. No dirty power or analog electrical noise from the PC can cross the fiber-optic cable to the DAC. This is my preferred method because Toslink is already built into my Mac.


Edited by Yoga Flame - 9/15/10 at 1:47pm
post #12 of 121



 

Quote:
Originally Posted by baglunch View Post

 

 

Update: thanks, nick for the test with your Zero.  I wonder why people like SPDIF on the Gamma 2 more?  Is it placebo?  Groupthink?


I can only comment with confidence on my experience and my system, and I could speculate about the nature of sighted, non-time-aligned tests in another place; perhaps the implementations are different.

 

One thing I will mention in passing. As well as my other sources I have one digital (SPDIF) source, a WD HDTV, which outputs a digital signal that is **verifiably** louder than the full-scale SPDIF from my computer. I would not have believed it if I had not measured it myself (about 1 dB on average); the extra boost causes clearly audible clipping on hotly recorded tracks.
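A quick sanity check on why a source running ~1 dB hot clips (illustrative Python; the numbers are generic 16-bit PCM, not the actual measurement above):

```python
# A +1 dB gain multiplies sample values by 10**(1/20) ~= 1.122, so any
# sample already near digital full scale exceeds the representable
# 16-bit range and gets hard-clipped.

FULL_SCALE = 32767                 # 16-bit signed PCM maximum
gain = 10 ** (1 / 20)              # +1 dB as a linear factor

def apply_gain(sample):
    boosted = sample * gain
    return max(-32768, min(FULL_SCALE, round(boosted)))   # hard clip

print(round(gain, 3))              # ~1.122
print(apply_gain(30000))           # needs 33661, clips to 32767
```

A hotly mastered track sits near full scale for long stretches, so the clipping is sustained and audible rather than an occasional tick.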

 

My only gripe with USB is that on older PCs, especially under-specced ones or ones running wireless, it can be blippy; my current desktop, however, is man enough for the job...


Edited by nick_charles - 9/15/10 at 2:08pm
post #13 of 121

 

Originally Posted by nick_charles View Post

 

I just tried this, repeatedly over a long period admittedly this is not a real blind test but the transition from one to another is utterly seamless and they are completely identical to me, same volume level, same LR balance, same frequency balance, and stay perfectly aligned over a large number of tracks

 

Using Toslink? I also did some real-world experiments: my Firestone Bravo S/PDIF transport uses a WM8804 chip to reclock S/PDIF to 50 ps. When feeding my Firestone Spitfire DAC, a 15 cm homebrew 75 Ω tinned-copper coax cable sounded noticeably tighter and clearer than this 65-strand glass Toslink 6 ft cable. Both were running off linear regulated PSUs, and the Bravo uses a Tenor chip, which I filter with an ADUM4160 dongle.

 

Toslink requires 2 light conversions... M2Tech said that Toslink was unmanageable jitter-wise when releasing their HiFace; it's in their FAQ. Very often you see 700-800 ps jitter measurements for Toslink, but even the best S/PDIF receivers (such as the DIR9001) only provide 50 ps clock recovery.


Edited by leeperry - 9/15/10 at 2:54pm
post #14 of 121

 

Quote:

One technical reason in favour of S/PDIF on the Gamma2 (and numerous other DACs) is that it supports a higher sampling and bit rate of 192KHz / 24-bits. The USB receiver chip only goes up to 48KHz / 16-bits. Now whether or not the higher rates translate to actual audible improvements is a different question.

 

 

This is a restriction placed on the device in the economic interests of the manufacturer. It's not down to USB. For example, my MOTU Ultralite will happily input and output up to 16 simultaneous tracks at 192 kHz / 24-bit via either USB or FireWire. It will also handle S/PDIF I/O at up to 96 kHz at the same time.
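The arithmetic backs this up: raw PCM bandwidth is tiny compared to the bus (illustrative Python; USB 2.0 high speed signals at 480 Mbit/s):

```python
# Raw PCM data rate = channels x sample rate x bits per sample.
# Even 16 channels of 24-bit/192 kHz audio fit comfortably inside
# USB 2.0 high speed (480 Mbit/s), so a 16-bit/48 kHz ceiling on a
# USB DAC is a receiver-chip choice, not a USB limit.

def pcm_mbps(channels, rate_hz, bits):
    return channels * rate_hz * bits / 1e6

print(pcm_mbps(2, 44_100, 16))     # CD stereo: ~1.4 Mbit/s
print(pcm_mbps(16, 192_000, 24))   # ~73.7 Mbit/s, well under 480
```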

post #15 of 121

Cosmic radiation, solar activity and dark energy can also have a detrimental effect on jitter, I found. That's why I prefer the sound of the WB7584x chip through selenium cable. I somehow find it just more musical and detailed. Just my two cents...
