Do digital cables make a difference?
Sep 21, 2010 at 4:05 AM Post #76 of 90
If you say something doesn't exist, I think there's a subset of people who will then believe it, embracing it as their own domain.
 
Perhaps it's a mental reflex found in all of us to some degree.
 
It's the only way I can comprehend the duality of un-science in a world utterly dependent on science.
 
Sep 21, 2010 at 8:52 AM Post #77 of 90


Quote:
nick charles, what is a 1/8-inch to RCA lead exactly? Is it a coax cable with a 1/8-inch plug at the end instead of RCA?
So do you prefer optical vs. coax + adapter?



It is a cable with a mini-jack (3.5mm mono) at one end and a single RCA (phono) plug at the other. Not very common, but some portable DVD devices use them and Amazon sells one. Personally I would use optical for simplicity.
 
Sep 21, 2010 at 9:22 AM Post #78 of 90


Quote:
It is a cable with a mini-jack (3.5mm mono) at one end and a single RCA (phono) plug at the other. Not very common, but some portable DVD devices use them and Amazon sells one. Personally I would use optical for simplicity.



OK, so the cable you mentioned is what I thought it was, and I also thought of using this kind of cable in the past, but just one more question: are a coax cable and a normal (analog) RCA interconnect made from the same material inside? I.e., are they exactly the same?
 
Sep 21, 2010 at 10:51 AM Post #79 of 90


Quote:
OK, so the cable you mentioned is what I thought it was, and I also thought of using this kind of cable in the past, but just one more question: are a coax cable and a normal (analog) RCA interconnect made from the same material inside? I.e., are they exactly the same?


Not quite. A standard analog RCA cable is typically about 50 ohms, while a digital coax cable is supposed to be 75 ohms. In my experience I have used 50 and 75 ohm cables interchangeably with no difference.
 
If it is designed for digital use it should be bang on 75 ohms, though of course only the manufacturer will know this for sure.
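 
For anyone wondering what the 50 vs. 75 ohm mismatch actually does, here is a quick back-of-the-envelope sketch using the standard transmission-line reflection formula (generic numbers, not measurements of any particular cable):

```python
# Voltage reflection coefficient at an impedance discontinuity:
#   gamma = (Z_load - Z_line) / (Z_load + Z_line)

def reflection_coefficient(z_line: float, z_load: float) -> float:
    """Fraction of the incident voltage wave reflected at the junction."""
    return (z_load - z_line) / (z_load + z_line)

# A 50 ohm analog interconnect driving a 75 ohm S/PDIF input:
gamma = reflection_coefficient(50.0, 75.0)
print(f"reflected voltage: {gamma:.0%}")     # 20%
print(f"reflected power:   {gamma**2:.1%}")  # 4.0%
```

A 4% power reflection on a short run is usually well within what an S/PDIF receiver tolerates, which fits the experience above.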
 
 
Sep 21, 2010 at 12:37 PM Post #80 of 90
Quote:
I propose adopting a new standard to replace S/PDIF: digital audio over TCP/IP!
 
  1. Error correction. Finally end the debate on jitter, reflections, and the 75 ohm business.
  2. Ethernet cable is well understood, affordable, and reliable.
  3. Connect both your desktop PC and your laptop to the same DAC with a router.
  4. You can even plug your DAC into a wireless router. They say the best cable is no cable.
  5. Directly play music from an iPhone to the DAC over wifi.
  6. Stream music from your home directly to your DAC at the office.
 
Sure, TCP/IP is not 100% reliable (but it's close) when used over the internet due to the sheer amount of traffic out there, but it is still a gajillion times better than if the internet were built on S/PDIF. "You say the online banking doesn't work? Try a glass Toslink cable instead." *shudder* If you're concerned about lag and bandwidth congestion, then just use a separate LAN for your audio.
 
Internally, most if not all DACs use the I2S format for the actual DAC chip. It is the S/PDIF stage that suffers from jitter. The I2S part does not have such problems, but it is only suited to short distances like those on a circuit board. So as an interim solution, we could have a TCP/IP to I2S transport in order to keep using some of the current crop of DACs. Some current DACs can input I2S, and many others can be modded for that.
 
Who's with me? Or is this a crazy idea?
 
 


The main problem with your suggestion is that you are trying to take time-sensitive data (audio) and use a time-insensitive protocol (TCP/IP) to transfer it. TCP/IP is built on the premise that it doesn't matter how fast or slow the packets are transferred between devices, or even what order they arrive in. It is the responsibility of the end device (and the devices in between in the various network layers) to reassemble the packets into usable data. For example, if I am trying to download a picture on a web page, the web server will break up the picture into many small pieces of data, place them into packets, and then send them to my computer, which will get the data from the packets and reassemble the picture. How fast or slow I receive these packets does not matter, as the ability to display the picture is not time-sensitive. After I finish downloading the picture, it will look exactly the same on my monitor regardless of whether I am using a 6Mbit DSL connection or a crappy 56kbit modem.
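 
To make the picture analogy concrete, here is a minimal self-contained sketch (loopback sockets, test data only) showing that whatever timing the packets had in flight, TCP hands the application a bit-exact copy:

```python
import hashlib
import socket

# Send test data over a real (loopback) TCP connection and confirm the
# receiver gets a bit-identical copy, however the packets were timed.
payload = bytes(range(256)) * 64          # 16 KB of test data

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

sender = socket.create_connection(server.getsockname())
receiver, _ = server.accept()

sender.sendall(payload)
sender.close()                            # signals end-of-stream

received = bytearray()
while chunk := receiver.recv(4096):       # in-order, error-checked by TCP
    received.extend(chunk)

assert hashlib.sha256(received).digest() == hashlib.sha256(payload).digest()
print("bit-perfect copy received")
```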
 
Audio data, on the other hand, is incredibly time-sensitive. The transport, whether it is a CD player or a computer, uses a clock to determine the rate at which digital audio information is sent to the DAC. The problem with almost all digital audio transport protocols (I2S is an exception) is that they do not transmit the source's clock information. When the audio data reaches the DAC, it has to essentially guess what the clock rate is or somehow re-clock the data to its own clock. There are many different ways to do this, such as PLLs and data buffers, but the end goal is the same: feed the DAC chip a clean I2S signal with little to no jitter.
 
Using a computer complicates matters further because it runs many different processes, which are constantly swapped for each other on the CPU. As a result, there is inherent jitter in the signal, even if the average data rate is stable. A recent way to combat this problem is to use a USB to SPDIF device which slaves the computer to its own clock (i.e. asynchronous USB). However, the DAC still has to guess/re-clock the SPDIF signal it receives from such a converter, because no clock information is transferred in the SPDIF protocol.
 
In conclusion, the problem with transferring digital audio has more to do with clocks than with inherent jitter in the cable. The fact that clock signals are not transferred to the DAC is what causes DAC manufacturers such a headache, since they have to re-clock the data. The real solution, IMO, is to develop a digital transfer protocol which actually contains the clock data. I2S does contain this information, but the way I2S data is transferred is not standardized and can only work for short runs in most cases.
 
Sep 21, 2010 at 3:32 PM Post #81 of 90
Thanks for the reply, chronomitch.
 
The time-insensitive nature of TCP/IP is actually one reason why I think it would be good for audio. It negates all the weaknesses of needing a special clock for timing. I don't mean to embed S/PDIF inside of TCP/IP, since using S/PDIF at all would taint the whole process. Rather, the audio PCM data could be streamed over TCP/IP (Nebby pointed out that the Squeezebox does this), and the receiver at the DAC end would convert that directly to I2S. As you said, files downloaded from the net always arrive as a perfect copy, unaffected by jitter or whatever.
 
On the internet, time-insensitivity can mean data packets being received in arbitrary order, and longish wait times. Indeed, this does make streaming audio over the net a less than perfect ...buffering... experience. But if we have a direct signal chain from the computer to the DAC, there will be no other data traffic to compete with, so there should not be any issues with delays and congestion.
 
The most important part of this is to avoid using S/PDIF altogether. It doesn't necessarily need to be TCP/IP. It could even be eSATA and work just as well, except maybe with limited range. I like TCP/IP because it is widely used and there are many ethernet parts available already, due to the abundance of network capable devices today.
 
If only Texas Instruments or Analog Devices would come up with an integrated circuit chip that does TCP/IP to I2S. Then it would be a simple matter for DAC manufacturers to implement this. Even my inkjet printer is network capable and has DHCP built-in to auto configure its IP address.
 
Sep 21, 2010 at 4:07 PM Post #82 of 90
Note that with a Squeezebox, the PCM data isn't streamed raw. Rather, I believe that the Squeezebox server transcodes the source file to a PCM WAV stream which it then sends to the destination Squeezebox. The infrastructure required to do what your wishlist wants would essentially be a full-fledged computer at each end. You would need a full network card along with a processor and sound card... essentially what a Squeezebox is, so just imagine adding the cost of a Squeezebox to your DAC and to your transport...
 
I2S is a hardware-level protocol, whereas TCP/IP is a transport protocol, and eSATA is yet another completely different protocol and specification. None of these three is remotely designed to interface with the others. If you want to design and engineer a chip to do some very serious processing to encapsulate signals between them, then feel free to do so, but it's not as easy to design and implement something as it is to just put a few acronyms together.

 
OSI model: note where the physical layer (where the I2S standard is specified) is compared to the network layer (where the IP part of TCP/IP sits)
http://en.wikipedia.org/wiki/OSI_model
 
I2S: note that even though I2S signals can be transferred over Ethernet hardware, that does not mean they are being transmitted via the Ethernet protocol. There's a difference.
http://en.wikipedia.org/wiki/I%C2%B2S
 
eSATA Spec:
http://www.sata-io.org/documents/External%20SATA%20WP%2011-09.pdf
 
 
As a side note, I'm mulling over the idea of using a stripped-down Squeezebox Touch with a Tentlab XO clock module linked to a DAC that is using a Tentlab XO-DAC module, then tying the clocks together so the Squeezebox Touch will be using the DAC as a reference (a la "Tent Link mode"). This would virtually eliminate jitter but would reduce the functionality of the whole thing, since I would only have a limited selection of sample rates to choose from (due to only having one clock). At the very least nobody could argue that the transport -> DAC connection is full of jitter.

 
Sep 21, 2010 at 4:28 PM Post #83 of 90
Quote:
Thanks for the reply, chronomitch.
 
The time-insensitive nature of TCP/IP is actually one reason why I think it would be good for audio. It negates all the weaknesses of needing a special clock for timing. I don't mean to embed S/PDIF inside of TCP/IP, since using S/PDIF at all would taint the whole process. Rather, the audio PCM data could be streamed over TCP/IP (Nebby pointed out that the Squeezebox does this), and the receiver at the DAC end would convert that directly to I2S. As you said, files downloaded from the net always arrive as a perfect copy, unaffected by jitter or whatever.
 
On the internet, time-insensitivity can mean data packets being received in arbitrary order, and longish wait times. Indeed, this does make streaming audio over the net a less than perfect ...buffering... experience. But if we have a direct signal chain from the computer to the DAC, there will be no other data traffic to compete with, so there should not be any issues with delays and congestion.
 
The most important part of this is to avoid using S/PDIF altogether. It doesn't necessarily need to be TCP/IP. It could even be eSATA and work just as well, except maybe with limited range. I like TCP/IP because it is widely used and there are many ethernet parts available already, due to the abundance of network capable devices today.
 
If only Texas Instruments or Analog Devices would come up with an integrated circuit chip that does TCP/IP to I2S. Then it would be a simple matter for DAC manufacturers to implement this. Even my inkjet printer is network capable and has DHCP built-in to auto configure its IP address.


You're still going to have the same problem: the lack of a good clock signal. Placing a data stream with a clock signal (I2S, for example) on TCP/IP will make said clock signal useless due to the time-insensitive nature of TCP/IP.
 
Think of a clock signal as a simple electronic pulse. When the pulse occurs, it acts as a signal to the DAC chip saying that it should count the current 24-bit piece of audio data as valid and convert it to an analog electrical signal. Now imagine you take that electronic pulse, along with any relevant audio data, divide it up, stuff it into TCP packets, and send it across a network. Once you receive and unwrap the data on the other end of the connection, the clock signal has lost its original purpose. The DAC will need to buffer the audio data and reclock it using its own clock to create an I2S signal to send to the DAC chip.
 
There is no reason raw audio data cannot be streamed across TCP/IP. However, you will get problems trying to play the data, because the source and the receiver will use their own clocks, which will never be completely in sync. Even if you use a buffer to remove jitter from the transfer protocol (which is a good idea), you will always either run out of buffer space or never be able to fill the buffer, due to the difference in clock rates. This would cause either dropouts in the audio or the loss of audio samples, since some would be skipped. In order to keep the buffer in a "safe" state, the receiver's clock would need to automatically adjust based on the rate the buffer is being filled, which will cause jitter. The only way I can see to prevent this is to have a buffer large enough to hold an entire song's worth of data. Then the receiver can play that data at its own rate without worrying about the source's clock. However, that's more something a full-fledged computer would do, not a standalone DAC. Moreover, that's really just cheating by transferring all of the audio data from the source to the receiver before it is played. Music would no longer be played in real time.
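 
To put rough numbers on that buffer argument, a quick sketch (the 50 ppm figure is a typical crystal tolerance, picked purely for illustration):

```python
# Two "44.1 kHz" crystals never agree exactly. Suppose the source runs
# 50 ppm fast relative to the receiver (a typical crystal tolerance).
source_rate = 44_100 * (1 + 50e-6)   # samples/s written into the buffer
sink_rate = 44_100                   # samples/s read out by the DAC clock

drift = source_rate - sink_rate      # ~2.2 samples/s of net buffer growth
buffer_headroom = 8_192              # samples of free space (illustrative)

seconds = buffer_headroom / drift
print(f"buffer overruns after ~{seconds / 60:.0f} minutes")   # ~62 minutes
```

So even a well-matched pair of clocks guarantees an overrun or underrun somewhere within an album's worth of playback, unless the receiver steers its rate.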
 
Sep 21, 2010 at 6:08 PM Post #84 of 90
 
Quote:
You're still going to have the same problem: the lack of a good clock signal. Placing a data stream with a clock signal (I2S, for example) on TCP/IP will make said clock signal useless due to the time-insensitive nature of TCP/IP.
 
Think of a clock signal as a simple electronic pulse. When the pulse occurs, it acts as a signal to the DAC chip saying that it should count the current 24-bit piece of audio data as valid and convert it to an analog electrical signal. Now imagine you take that electronic pulse, along with any relevant audio data, divide it up, stuff it into TCP packets, and send it across a network. Once you receive and unwrap the data on the other end of the connection, the clock signal has lost its original purpose. The DAC will need to buffer the audio data and reclock it using its own clock to create an I2S signal to send to the DAC chip.
 
There is no reason raw audio data cannot be streamed across TCP/IP. However, you will get problems trying to play the data, because the source and the receiver will use their own clocks, which will never be completely in sync. Even if you use a buffer to remove jitter from the transfer protocol (which is a good idea), you will always either run out of buffer space or never be able to fill the buffer, due to the difference in clock rates. This would cause either dropouts in the audio or the loss of audio samples, since some would be skipped. In order to keep the buffer in a "safe" state, the receiver's clock would need to automatically adjust based on the rate the buffer is being filled, which will cause jitter. The only way I can see to prevent this is to have a buffer large enough to hold an entire song's worth of data. Then the receiver can play that data at its own rate without worrying about the source's clock. However, that's more something a full-fledged computer would do, not a standalone DAC. Moreover, that's really just cheating by transferring all of the audio data from the source to the receiver before it is played. Music would no longer be played in real time.




 
I don't see a problem streaming sampled audio over TCP. TCP doesn't use clocks.
 
The sender sends as fast as possible. The receiver receives as fast as possible, so that the buffer is always filled. TCP handles the rest (flow control, congestion control, error detection ...).
The receive buffer needs to be big enough to hold a few milliseconds worth of audio samples (depending on the latency).
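 
For scale, here is what a few milliseconds of CD-quality audio actually costs in buffer memory (a quick sketch with standard CD parameters):

```python
# Memory cost of "a few milliseconds worth" of CD-quality PCM.
SAMPLE_RATE = 44_100   # frames per second
CHANNELS = 2
BYTES_PER_SAMPLE = 2   # 16-bit PCM

def buffer_bytes(milliseconds: float) -> int:
    frames = int(SAMPLE_RATE * milliseconds / 1000)
    return frames * CHANNELS * BYTES_PER_SAMPLE

print(buffer_bytes(10))    # 10 ms  ->  1764 bytes
print(buffer_bytes(500))   # 500 ms -> 88200 bytes
```

Even half a second of buffer is under 100 KB, trivial next to any networked device, which is why the buffering itself is the easy part; the clock-rate mismatch discussed above is the hard part.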
 
Sep 21, 2010 at 6:24 PM Post #85 of 90
Quote:
I don't see a problem streaming sampled audio over TCP. TCP doesn't use clocks.
 
The sender sends as fast as possible. The receiver receives as fast as possible, so that the buffer is always filled. TCP handles the rest (flow control, congestion control, error detection ...).
The receive buffer needs to be big enough to hold a few milliseconds worth of audio samples (depending on the latency).

 
My whole point in this line of conversation is that replacing SPDIF transfers with something like TCP is not going to fix the underlying problem: DACs must reconstruct the clock from the audio signal. I'm sure TCP will work for transferring audio, but it won't be any better than SPDIF.
 
In order to get around this whole problem, professionals use an external master clock which drives both the source and the receiver.
 
Sep 22, 2010 at 9:00 AM Post #86 of 90
 
Quote:
DACs must reconstruct the clock from the audio signal.

 
Is this statement always true?
 
I ask not to start a row but because I'm uncertain myself and would like confirmation.
 
I own a MOTU Ultralite, which isn't really pro quality gear. It does cost less than an equivalent audiophile grade product. There is a wide range of clocking options offered by this device.
 
It is my current understanding that if I feed the MOTU a signal via Firewire or USB 2.0, it arrives bit-perfect from the source, is stored temporarily in a buffer, and is then presented to the DAC having been reclocked internally. I have to set the clock on the MOTU controls to 'Internal' for this to occur.
 
I also have the option of setting the clock to S/PDIF if I want to process a digital source signal where the timing information is contained in the input signal, i.e. from a naked CD transport, a MiniDisc player, or, like many of you guys do, via a USB > S/PDIF dongle like the Hi-face gadget. There is also an SMPTE console which allows the user to connect any device to the MOTU and define it as the master clock for any or all devices on the network. The MOTU clock itself can act as master to any other externally connected device as well.
 
So unless someone out there can correct me, I'm currently working on the assumption that we are talking at cross purposes here. Audio data stored on a hard disk is always bit-perfect via USB but doesn't contain timing data directly. Audio data read directly from a traditional CD transport isn't necessarily bit-perfect but contains the timing info in the signal (S/PDIF or equivalent).
 
So for those of us using pro-am quality DAC devices via USB, a USB > S/PDIF converter is not so much irrelevant as positively undesirable. For audiophiles coming from the traditional route (i.e. a dedicated CD transport, bypassing the internal DAC and using an older design chip), you don't have a reliable internal buffer or clock, so for best results you must feed the DAC an S/PDIF signal as if it were a CD player, just split into two separate boxes.
 
Does that make sense? Pro-Am audio interface - use USB or Firewire. Audiophile grade DAC - reclock externally?
 
So everyone is right? Depends on the context.
 
 
Sep 22, 2010 at 10:42 AM Post #87 of 90
Quote:
 
 
Is this statement always true?
 
I ask not to start a row but because I'm uncertain myself and would like confirmation.
 
I own a MOTU Ultralite, which isn't really pro quality gear. It does cost less than an equivalent audiophile grade product. There is a wide range of clocking options offered by this device.
 
It is my current understanding that if I feed the MOTU a signal via Firewire or USB 2.0, it arrives bit-perfect from the source, is stored temporarily in a buffer, and is then presented to the DAC having been reclocked internally. I have to set the clock on the MOTU controls to 'Internal' for this to occur.
 
I also have the option of setting the clock to S/PDIF if I want to process a digital source signal where the timing information is contained in the input signal, i.e. from a naked CD transport, a MiniDisc player, or, like many of you guys do, via a USB > S/PDIF dongle like the Hi-face gadget. There is also an SMPTE console which allows the user to connect any device to the MOTU and define it as the master clock for any or all devices on the network. The MOTU clock itself can act as master to any other externally connected device as well.
 
So unless someone out there can correct me, I'm currently working on the assumption that we are talking at cross purposes here. Audio data stored on a hard disk is always bit-perfect via USB but doesn't contain timing data directly. Audio data read directly from a traditional CD transport isn't necessarily bit-perfect but contains the timing info in the signal (S/PDIF or equivalent).
 
So for those of us using pro-am quality DAC devices via USB, a USB > S/PDIF converter is not so much irrelevant as positively undesirable. For audiophiles coming from the traditional route (i.e. a dedicated CD transport, bypassing the internal DAC and using an older design chip), you don't have a reliable internal buffer or clock, so for best results you must feed the DAC an S/PDIF signal as if it were a CD player, just split into two separate boxes.
 
Does that make sense? Pro-Am audio interface - use USB or Firewire. Audiophile grade DAC - reclock externally?
 
So everyone is right? Depends on the context.
 


SPDIF does not contain master clock data, no matter whether it comes from a CD transport, USB to SPDIF converter, or a computer sound card. In the case of most DAC chips, this SPDIF signal must first be run through an SPDIF input receiver, which converts the signal to I2S. During this process, clock data is generated because the I2S format requires it. I2S contains clock data and is the native format accepted by most DAC chips.
 
There are a few DAC chips, such as the ES9018, which actually combine the features of an SPDIF input receiver and a digital-to-analog converter. In other words, they can accept SPDIF signals natively.
 
Also, the clock data is not something that is inherently a part of the audio data; it is something generated by either the source, receiver, or both. It has nothing to do with the audio signal being bit perfect. SPDIF is perfectly capable of sending bit perfect data. However, sometimes the source messes with the bits before they are transferred into the SPDIF format. For example, in certain cases the Windows audio mixer will up-sample, down-sample, or cut some bits off the audio signal. This is why many people use WASAPI output in their Windows music players.
 
Sep 22, 2010 at 12:33 PM Post #89 of 90
> That's what I thought also, Pars. Confusing.
 
Post #1 states
 
Quote:
DACs must reconstruct the clock from the audio signal

 
Then in the following post #2
 
 
Quote:
SPDIF does not contain master clock data,

 
So I don't know if there is anything to be learned here after all. Perhaps we are talking about different clocks for different purposes.
 
Sep 25, 2010 at 7:16 PM Post #90 of 90
Clock signals can be embedded in packets; see IEEE 1588 clock synchronization.
 
There is already a way to stream audio/video signals across an IP network; see the IEEE 802.1AS standard. This capability is now built into chips from Broadcom and Marvell, and you will soon see them in commodity consumer electronics. Professional gear makers already have systems built with this standard. The good news is that this standard solves the streaming problem; the bad news is that it is not compatible with legacy networks.
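 
For anyone curious how a clock can be "embedded in packets", the heart of IEEE 1588 is just a four-timestamp exchange; a minimal sketch of the arithmetic (example timestamps made up for illustration):

```python
# Core of IEEE 1588 (PTP): a four-timestamp exchange between master and
# slave clocks, assuming a symmetric network path.
#   t1: master sends Sync           (master clock)
#   t2: slave receives Sync         (slave clock)
#   t3: slave sends Delay_Req       (slave clock)
#   t4: master receives Delay_Req   (master clock)

def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Return (slave clock offset from master, one-way path delay)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Example: slave clock 100 us ahead of master, 50 us path delay each way.
offset, delay = ptp_offset_and_delay(0.0, 150e-6, 300e-6, 250e-6)
print(f"offset = {offset * 1e6:.0f} us, delay = {delay * 1e6:.0f} us")
# -> offset = 100 us, delay = 50 us
```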
 
Regarding clocks: it is a very confusing issue. Many people do not understand that it is not the transport clock (the clock used to time the transmission of the data) that matters; it is the media clock that matters. Audio must be played at the original recording sampling rate or you will have a buffer overrun or underrun. For example, music recorded with a 44.105 kHz clock must be played back with the same clock. This clock must be reconstructed from the original audio data.
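 
One common way to reconstruct that media clock at the receiving end is to steer the local playback rate from the buffer fill level; a toy sketch of the feedback idea (all constants invented for illustration, not from any product):

```python
# Toy media-clock recovery: steer the receiver's playback rate so the
# receive buffer hovers around a setpoint, tracking the sender's rate.
NOMINAL_RATE = 44_100.0   # Hz, the rate both ends nominally run at
SETPOINT = 4_096          # target buffer occupancy, in samples
GAIN = 1e-7               # proportional gain (illustrative)

def steered_rate(buffer_level: int) -> float:
    """Buffer filling up -> play slightly faster; draining -> slower."""
    error = buffer_level - SETPOINT
    return NOMINAL_RATE * (1 + GAIN * error)

print(steered_rate(4_096))   # on target  -> 44100.0 Hz
print(steered_rate(6_000))   # filling up -> ~44108.4 Hz
print(steered_rate(2_000))   # draining   -> ~44090.8 Hz
```

Real implementations filter this control signal very heavily (effectively a slow PLL), precisely because rapid corrections would themselves show up as jitter.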
 
