On HDMI, USB and FireWire super cables
Apr 6, 2010 at 12:18 PM Post #32 of 46
Quote:

Originally Posted by slim.a
There are many articles talking about async vs. isochronous USB for audio.
As I said earlier, most USB audio converters use the adaptive/isochronous method for streaming real-time audio.

I did some quick research on Google, and here is what I found:

How USB Works
With isochronous data it is not possible to retry a failed transaction. Since only one ‘slot’ is allocated to the pipe during each frame, resending the data would delay transmission of the succeeding data samples, upsetting the time element of the data delivery. Consequently no handshake packet is sent and the data must be accepted ‘as is.’

Here is another interesting article that I found about usb audio on quick search:
6moons audio reviews: Wavelength Audio Brick USB DAC

You will see that jitter is a real problem in developing USB audio. That is why, for example, the PCM270x chips are limited to three sample rates (32 kHz, 44.1 kHz and 48 kHz); it is mainly to reduce jitter.

So by deductive reasoning, you can understand that using a short, high-quality USB cable will minimize data losses and jitter for isochronous devices.
Async USB devices are immune to the USB cable if they are properly designed.



I had also done a search on Google, and read multiple articles, some more in-depth than the ones you linked. I'm sorry to say this, but your deductive reasoning relies on some incorrect assumptions.

Adaptive or async, it is always isochronous. That first article you linked even says, "All transfers take the form of packets, which contain control information, data and error checking fields." It may not be possible to retry a failed transfer, but there are still error-checking measures in place to catch a corrupted packet if one does occur.

Yes, jitter is a problem in developing USB audio, but this is because of the inconsistency of the computer's clock, which cannot be fixed by a cable. The jitter is introduced at the USB device when it has to adapt to this clock (in adaptive mode), not caused by the cable. None of the articles I read on this subject (including the ones linked by you and others) even mention the quality of the USB cable. It is all about the design of the device.
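
To illustrate what I mean, here is a toy simulation (my own made-up numbers, nothing from the linked articles) of an adaptive-mode device recovering its sample clock from the host's nominally 1 ms frame timing. The point is that the residual wobble depends on how well the device filters the incoming timing; the cable does not appear anywhere in the model:

```python
import random

# Toy model (mine, not from any linked article): in adaptive mode the DAC has
# no clock reference of its own, so it estimates the incoming sample rate from
# the host's nominally 1 ms frame timing and smooths that estimate.
FRAME_PERIOD = 1e-3        # nominal USB full-speed frame period, seconds
HOST_JITTER_RMS = 100e-9   # assumed timing jitter at the host (purely illustrative)
SMOOTHING = 0.01           # strength of the device's clock-recovery filter

def recovered_clock_jitter(n_frames=20000, seed=0):
    rng = random.Random(seed)
    estimate = FRAME_PERIOD          # device's running estimate of the frame period
    residual = []
    for _ in range(n_frames):
        observed = FRAME_PERIOD + rng.gauss(0, HOST_JITTER_RMS)
        # first-order low-pass: nudge the recovered clock toward each observation
        estimate += SMOOTHING * (observed - estimate)
        residual.append(estimate - FRAME_PERIOD)
    return (sum(r * r for r in residual) / len(residual)) ** 0.5

print(f"frame timing jitter seen at the device: {HOST_JITTER_RMS * 1e9:6.1f} ns RMS")
print(f"wobble left on the recovered clock:     {recovered_clock_jitter() * 1e9:6.1f} ns RMS")
```

With a stronger or weaker smoothing filter the recovered clock tracks more or less of the incoming timing wobble, which is why the device's design is what determines the jitter that reaches the audio stage.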
 
Apr 6, 2010 at 12:56 PM Post #33 of 46
Quote:

Originally Posted by froasier
I had also done a search on Google, and read multiple articles, some more in-depth than the ones you linked. I'm sorry to say this, but your deductive reasoning relies on some incorrect assumptions.

Adaptive or async, it is always isochronous. That first article you linked even says, "All transfers take the form of packets, which contain control information, data and error checking fields." It may not be possible to retry a failed transfer, but there are still error-checking measures in place to catch a corrupted packet if one does occur.

Yes, jitter is a problem in developing USB audio, but this is because of the inconsistency of the computer's clock, which cannot be fixed by a cable. The jitter is introduced at the USB device when it has to adapt to this clock (in adaptive mode), not caused by the cable. None of the articles I read on this subject (including the ones linked by you and others) even mention the quality of the USB cable. It is all about the design of the device.



Allow me to rephrase what I said earlier: a poor-quality or long USB cable will increase jitter; a short, high-quality USB cable won't cause as much. In both cases there will be line-induced jitter, but with a short, high-quality cable it will be lower than with a longer, poorer one.

While I didn't measure the jitter myself, I did try different USB cables on the same converter. With average-quality cables I could not set the latency as low as I could with an "audiophile" Wireworld Ultraviolet cable. The 2 ms setting worked with the Wireworld without crackles and pops, but the same converter had drop-out problems with a regular Belkin cable and with the stock cable.

At the time I didn't fully understand the origin of the problem. However, it seems that by using a very short USB cable (0.5 m or less, for example) one can minimize how much the quality of the cable matters.
Of course, when I set the latency to 20 ms, I didn't have any problems with any of the cables.

So my recommendation: even if you don't understand the theory behind line-induced jitter and reflections in digital cables, keep the USB cable as short as possible; that will minimize its effect.
If you care to read the following research paper (page 17: http://www.iet.ntnu.no/courses/fe811...t_audiodac.pdf), you will see that when measuring digital jitter, a short digital cable is a prerequisite.

Of course, it all depends on the level of accuracy you are trying to achieve. If the goal is just to stream 16-bit data, a fairly high level of jitter is acceptable. If the goal is a true 20-bit resolution system, then things like line-induced jitter start to matter. Again, I invite you to read the paper linked above; it is a good introduction to jitter in DACs. I am only simplifying what is a very complex issue, and it takes pages, not a few lines, to explain.
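
To give a rough idea of the orders of magnitude involved (my own back-of-the-envelope arithmetic, not figures from the paper), the usual formula for the jitter-limited SNR of a full-scale sine, SNR ≈ -20·log10(2·pi·f·tj), lets you work out how much RMS jitter each target resolution can tolerate at 20 kHz:

```python
import math

def max_jitter_for_bits(bits, f_signal_hz=20_000.0):
    """RMS jitter that puts the jitter-induced noise at the quantization
    noise floor of an ideal `bits`-bit converter, for a full-scale sine
    at f_signal_hz.  Uses SNR_q = 6.02*bits + 1.76 dB and
    SNR_jitter = -20*log10(2*pi*f*tj)."""
    snr_db = 6.02 * bits + 1.76
    return 10 ** (-snr_db / 20) / (2 * math.pi * f_signal_hz)

for bits in (16, 20, 24):
    tj = max_jitter_for_bits(bits)
    print(f"{bits}-bit floor at 20 kHz -> about {tj * 1e12:6.1f} ps RMS jitter allowed")
```

So a 16-bit target tolerates on the order of 100 ps of RMS jitter, while a true 20-bit target tolerates only a few picoseconds, which is why small extra contributions start to matter at that level.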
 
Apr 7, 2010 at 12:50 AM Post #34 of 46
Quote:

Originally Posted by slim.a
Allow me to rephrase what I said earlier: a poor-quality or long USB cable will increase jitter; a short, high-quality USB cable won't cause as much. In both cases there will be line-induced jitter, but with a short, high-quality cable it will be lower than with a longer, poorer one.

While I didn't measure the jitter myself, I did try different USB cables on the same converter. With average-quality cables I could not set the latency as low as I could with an "audiophile" Wireworld Ultraviolet cable. The 2 ms setting worked with the Wireworld without crackles and pops, but the same converter had drop-out problems with a regular Belkin cable and with the stock cable.

At the time I didn't fully understand the origin of the problem. However, it seems that by using a very short USB cable (0.5 m or less, for example) one can minimize how much the quality of the cable matters.
Of course, when I set the latency to 20 ms, I didn't have any problems with any of the cables.

So my recommendation: even if you don't understand the theory behind line-induced jitter and reflections in digital cables, keep the USB cable as short as possible; that will minimize its effect.
If you care to read the following research paper (page 17: http://www.iet.ntnu.no/courses/fe811...t_audiodac.pdf), you will see that when measuring digital jitter, a short digital cable is a prerequisite.

Of course, it all depends on the level of accuracy you are trying to achieve. If the goal is just to stream 16-bit data, a fairly high level of jitter is acceptable. If the goal is a true 20-bit resolution system, then things like line-induced jitter start to matter. Again, I invite you to read the paper linked above; it is a good introduction to jitter in DACs. I am only simplifying what is a very complex issue, and it takes pages, not a few lines, to explain.



That paper is talking about S/PDIF, which is very different from USB in this context. In a USB interface, the problematic jitter is generated within the device, not by the cable.

Latency is a separate issue altogether. The drop-outs and noise you experienced are caused by the device's buffer running dry (lower latency is achieved by using a smaller buffer). A better or shorter cable may help keep the buffer filled, but as long as the buffer isn't running out (as with your 20 ms setting), sound quality is as good as it's going to get without improving something other than the cable.
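
To make the buffer point concrete, here is a toy model (all the numbers are invented for illustration; this is not a model of any particular driver or device): the buffer drains at the sample rate and is refilled roughly every millisecond, and once in a while a refill arrives late. A 2 ms buffer runs dry on those hiccups, a 20 ms buffer shrugs them off:

```python
import random

SAMPLE_RATE = 44_100            # samples per second
REFILL_PERIOD_MS = 1.0          # host delivers audio roughly every millisecond
HICCUP_PROB = 0.001             # chance a delivery is delayed (made up for illustration)
HICCUP_DELAY_MS = 5.0           # how late a delayed delivery arrives

def count_dropouts(buffer_ms, seconds=60, seed=1):
    rng = random.Random(seed)
    capacity = SAMPLE_RATE * buffer_ms / 1000.0
    chunk = SAMPLE_RATE * REFILL_PERIOD_MS / 1000.0
    level = capacity                               # start with a full buffer
    dropouts = 0
    deliveries = int(seconds * 1000 / REFILL_PERIOD_MS)
    for _ in range(deliveries):
        wait_ms = REFILL_PERIOD_MS
        if rng.random() < HICCUP_PROB:             # a late delivery from the host
            wait_ms += HICCUP_DELAY_MS
        level -= SAMPLE_RATE * wait_ms / 1000.0    # playback keeps draining while we wait
        if level < 0:
            dropouts += 1                          # buffer ran dry: audible click/dropout
            level = 0
        # when the transfer finally arrives it carries everything queued up meanwhile
        level = min(capacity, level + chunk * wait_ms / REFILL_PERIOD_MS)
    return dropouts

for buffer_ms in (2, 20):
    print(f"{buffer_ms:>2} ms buffer: {count_dropouts(buffer_ms)} dropouts in a minute")
```

Note that the cable never appears in this model; the only knobs are the buffer size and how late the host occasionally is.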
 
Apr 7, 2010 at 4:48 AM Post #35 of 46
Quote:

Originally Posted by froasier
That paper is talking about S/PDIF, which is very different from USB in this context. In a USB interface, the problematic jitter is generated within the device, not by the cable.

Latency is a separate issue altogether. The drop-outs and noise you experienced are caused by the device's buffer running dry (lower latency is achieved by using a smaller buffer). A better or shorter cable may help keep the buffer filled, but as long as the buffer isn't running out (as with your 20 ms setting), sound quality is as good as it's going to get without improving something other than the cable.



With isochronous/adaptive-mode USB, the computer acts as the master clock. So jitter is generated at the computer and can be increased by the USB cable and by the USB device. I remember Dan Lavry saying something about reflections in USB cables as well.

So since the jitter is generated at the source (the computer), I don't see why the USB cable wouldn't affect the signal at all before it gets to the USB device. It just doesn't make sense to me.

The dropout test I ran wasn't just for fun; it came about progressively.
First, I noticed by accident that there were differences in sound between USB cables.
Second, I noticed that with some cables I could set the latency lower than with others.
Third, I ran the test to confirm my suspicions... and surprise, the cable that subjectively sounded best (the Wireworld Ultraviolet) was also the one that allowed the lowest latency setting.

So either it is one huge coincidence, or it means that when using 2 m USB cables (as I did), and under certain test conditions, the quality of the USB cable will affect the sound, both by adding more jitter and through how well it isolates the signal lines from the power lines.

Anyway, given the price of high-quality USB cables, they are not worth it. A well-built device such as the M2Tech hiFace solves the problem and doesn't even require a USB cable. That is one of the reasons I bought it and stopped worrying about USB cables.
 
Apr 7, 2010 at 4:21 PM Post #38 of 46
Quote:

Originally Posted by slim.a
You should perhaps also read this:

cMP² | CMP / 03Jitter



My "gut" feeling tells me it's BS. You can get by with a 512 MB RAM XP computer as your music server. As long as you streamline the computer (remove bloated software and unneeded processes) it will do fine. 4 or 8 GB of RAM is just overkill for music playback.

But hey, I'm not backing everything up with stats, so I can't say too much about the article. I don't see people here buying RAM like crazy for music.

My local Ayre dealer uses a Vista laptop with 2 GB of RAM in his store and it sounds just fine on the Ayre DAC.

If I'm going off topic or have no idea what I'm talking about, then so be it.

Edit: It would be great if I could use a cheap Linux-based computer as a music server.
 
Apr 7, 2010 at 4:33 PM Post #39 of 46
Quote:

Originally Posted by HyperDuel
My "gut" feeling tells me it's BS. You can get by with a 512 MB RAM XP computer as your music server. As long as you streamline the computer (remove bloated software and unneeded processes) it will do fine. 4 or 8 GB of RAM is just overkill for music playback.

But hey, I'm not backing everything up with stats, so I can't say too much about the article. I don't see people here buying RAM like crazy for music.

My local Ayre dealer uses a Vista laptop with 2 GB of RAM in his store and it sounds just fine on the Ayre DAC.

If I'm going off topic or have no idea what I'm talking about, then so be it.

Edit: It would be great if I could use a cheap Linux-based computer as a music server.



What those people are after is not just surviving or getting by with the bare minimum. As you must know, Ayre sells very expensive (and very good-performing) equipment. What they are looking for is to squeeze the last drop of performance out of the system.

If you look at the price of some high-end CD transports, they can reach insane amounts, so for those people who have very transparent and revealing gear, buying 8 GB of RAM could represent good value for a slight increase in performance.

Personally, I wouldn't do it, but I understand if someone does.

As for Linux, I was told that it can be set up to give real-time priority to audio streaming. It would be nice if a company came out with small Linux-based notebooks optimized for streaming music. As long as it is not too complicated to use, there could be real demand for such a product.

Anyway, we are getting a little bit off topic. Let's go back to those "super cables".
 
Apr 7, 2010 at 4:54 PM Post #40 of 46
Quote:

Originally Posted by slim.a
What those people are after is not just surviving or getting by with the bare minimum. As you must know, Ayre sells very expensive (and very good-performing) equipment. What they are looking for is to squeeze the last drop of performance out of the system.

If you look at the price of some high-end CD transports, they can reach insane amounts, so for those people who have very transparent and revealing gear, buying 8 GB of RAM could represent good value for a slight increase in performance.

Personally, I wouldn't do it, but I understand if someone does.



I can understand that, but I just think you are throwing money down the drain. You don't need much hardware to get the best experience.

Quote:

As for Linux, I was told that it can be set up to give real-time priority to audio streaming. It would be nice if a company came out with small Linux-based notebooks optimized for streaming music. As long as it is not too complicated to use, there could be real demand for such a product.


Oh yes I would buy one in a heartbeat.
 
Apr 7, 2010 at 6:52 PM Post #41 of 46
The cMP author has put together a fantastic tutorial here: http://photos.imageevent.com/cics/v0...rts%20v0.3.pdf

A true eye-opener for me.


When I boot my XP box, I've got 13 resident apps, all running at low priority on single cores... and I run my media players at high priority on all 4 cores.

I've also modified the NT scheduling settings so that high-priority processes get 36x more cycles than low-priority ones... works like a treat, as snappy as can be.


Of course, he goes over the top... take everything with a grain of salt... the point is to make your A/V apps far more important than the average system process, and it does pay off in the end! He also explains how to get a much finer timer granularity, which helps a lot for multimedia.
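
For anyone who wants to script that kind of tweaking, here is a rough Windows-only sketch using the third-party psutil package (the player name and the core split are placeholders of mine, not values from the cMP tutorial):

```python
# Windows-only sketch of the kind of tweaks described above; assumes the
# third-party `psutil` package is installed.  Process name and core split
# are placeholders, not settings from the cMP tutorial.
import ctypes
import psutil

PLAYER_NAME = "foobar2000.exe"       # hypothetical media player process

def boost_player():
    # Ask Windows for 1 ms timer granularity (the "shorter timer" tweak).
    ctypes.windll.winmm.timeBeginPeriod(1)

    for proc in psutil.process_iter(["name"]):
        try:
            if proc.info["name"] and proc.info["name"].lower() == PLAYER_NAME:
                proc.nice(psutil.HIGH_PRIORITY_CLASS)   # raise scheduling priority
                proc.cpu_affinity([0, 1, 2, 3])         # let it use all four cores
            # (background processes could similarly be pinned to one core with
            #  proc.cpu_affinity([0]) and proc.nice(psutil.IDLE_PRIORITY_CLASS))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass

if __name__ == "__main__":
    boost_player()
```

The timeBeginPeriod(1) call is the 1 ms timer-granularity request; the priority and affinity calls are the per-process side of what I described above.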

thoppa measured that the 12 V ripple on his ATX PSU went through the roof when his HDD was busy... that will affect sound quality if your audio adapter is fed from the 12 V rail, obviously.

OTOH, on that Stereophile link they say "Less than 8GB will work, but won't sound as good"... this is complete nonsense.


My XP box needs 350 MB when it has freshly started, and never more than 400 MB when I play audio files (I've got 2 GB of DDR2 and I've disabled my pagefile).
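
A quick sanity check on the raw audio data rates backs that up (this says nothing about whatever else cMP wants the extra RAM for, it's just the PCM arithmetic):

```python
def pcm_bytes_per_second(rate_hz, bits, channels=2):
    # uncompressed PCM data rate
    return rate_hz * (bits // 8) * channels

for label, rate, bits in [("CD 16/44.1", 44_100, 16), ("hi-res 24/192", 192_000, 24)]:
    bps = pcm_bytes_per_second(rate, bits)
    buffer_mb = bps * 30 / 1e6              # a generous 30-second playback buffer
    print(f"{label:13s}: {bps / 1e6:4.2f} MB/s, 30 s buffer ~ {buffer_mb:5.1f} MB")
```

Even a generous 30-second playback buffer of hi-res PCM is a few tens of MB, so a few hundred MB of free RAM is already plenty for the audio path itself.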
 
Apr 9, 2010 at 2:02 AM Post #43 of 46
Quote:

Originally Posted by froasier
There is no "1.4 cable" per se, but HDMI 1.4 supports Ethernet and audio return channels, which require a souped-up cable specified in the 1.4 standard. These new cables, instead of being labeled by version number, will be labeled simply by speed and by whether they support Ethernet.

I haven't seen the Panasonic 3D player. Regardless, very few people need such a player, as I explained above, and the 3D support is the mess, not the HDMI versions. Even if you do need two HDMI cables, what's the big deal? It still beats the old four-cable arrangement of component video plus S/PDIF.



The speed ratings on the various HDMI cables would just confuse Joe Six-Pack even more. Best Buy has now taken away the glasses on their demos; either people are breaking them or stealing them. Now you must ask a blue shirt for a pair to view the 3D demo.
 
Apr 9, 2010 at 7:25 AM Post #44 of 46
Quote:

Originally Posted by kool bubba ice
The speed ratings on the various HDMI cables would just confuse Joe Six-Pack even more. Best Buy has now taken away the glasses on their demos; either people are breaking them or stealing them. Now you must ask a blue shirt for a pair to view the 3D demo.


There are only two speeds, so it shouldn't be too bad; it's a much more straightforward scheme than version numbers. The Sears here has the glasses, although one of the two pairs was broken from the start, so they only have one. It's tethered to the TV by its charging cable.
 
Apr 21, 2010 at 6:12 PM Post #45 of 46
Quote:

Originally Posted by slim.a
With isochronous/adaptive-mode USB, the computer acts as the master clock. So jitter is generated at the computer and can be increased by the USB cable and by the USB device. I remember Dan Lavry saying something about reflections in USB cables as well.

So since the jitter is generated at the source (the computer), I don't see why the USB cable wouldn't affect the signal at all before it gets to the USB device. It just doesn't make sense to me.

The dropout test I ran wasn't just for fun; it came about progressively.
First, I noticed by accident that there were differences in sound between USB cables.
Second, I noticed that with some cables I could set the latency lower than with others.
Third, I ran the test to confirm my suspicions... and surprise, the cable that subjectively sounded best (the Wireworld Ultraviolet) was also the one that allowed the lowest latency setting.

So either it is one huge coincidence, or it means that when using 2 m USB cables (as I did), and under certain test conditions, the quality of the USB cable will affect the sound, both by adding more jitter and through how well it isolates the signal lines from the power lines.

Anyway, given the price of high-quality USB cables, they are not worth it. A well-built device such as the M2Tech hiFace solves the problem and doesn't even require a USB cable. That is one of the reasons I bought it and stopped worrying about USB cables.



Again, adaptive or async, it is always isochronous. The jitter generated at the computer isn't passed directly to the audio stage; it only affects the (adaptive-mode) USB device's clock once every 1 ms frame. I don't think (and still have not seen any source claiming, or even suggesting) that the cable affects this significantly.
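
To put some numbers on that 1 ms figure (my own illustration, not something from the linked articles): at 44.1 kHz the host cannot send the same whole number of samples in every full-speed frame, so an adaptive device sees a mostly-44, occasionally-45 samples-per-frame pattern and has to recover its clock from the long-term average rather than from any single frame:

```python
SAMPLE_RATE = 44_100      # samples per second
FRAMES_PER_SECOND = 1000  # USB full-speed frames (1 ms each)

def samples_per_frame(n_frames=20):
    """Ideal whole-sample schedule: send however many samples are due
    by the end of each 1 ms frame (a 44/45 pattern at 44.1 kHz)."""
    sent = 0
    schedule = []
    for frame in range(1, n_frames + 1):
        due = round(frame * SAMPLE_RATE / FRAMES_PER_SECOND)
        schedule.append(due - sent)
        sent = due
    return schedule

sched = samples_per_frame()
print(sched)                                  # mostly 44s with an occasional 45
print("average:", sum(sched) / len(sched))    # -> 44.1 samples per frame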

You didn't mention them sounding different (at normal latency settings) before. Could you describe the differences?

Alas, your final point sums up the topic pretty well: "...high-quality USB cables, they are not worth it. A well-built device... solves the problem."
 
