manueljenkin
100+ Head-Fier · Joined Sep 18, 2015 · Posts: 410 · Likes: 218
I'm pulling in a few technical posts here - nothing relating to measurements, but rather to guessing at what I'm hearing. Please let me know if this violates any rules.
I received my Supra cable two days ago and have been comparing it with the stock cable on the Apogee Groove. I believe I'm hearing a difference, but I can't say for sure. I tried doing about 30 swaps and gave it two days. At the moment I'm fairly convinced there is a difference. Unfortunately I'm not sure whether the difference is for the better: the Supra maybe sounds softer, and the bass has a weird character (off-phase or something, maybe?). Maybe the cable is worse, or maybe it is revealing more of the issues in Windows (more in later passages).
There does seem to be a possible technical reason, since I've had serious issues choosing music players for Windows - almost every single music player sounds different (even out of ASIO). I can blind-tell between foobar2000 and Winyl easily, both in ASIO configurations, and the other music players I've tried - MusicBee and a few others - were a total disaster in comparison: they sounded very low-passed and dull. The stock Groove player is terrible, with serious buffering artefacts and a heavy amount of low-passing. There are songs where I need to replay from the beginning to get proper output in the Groove player (if I scroll around, some details go missing). In a lot of ways the stock cable vs. Supra difference is similar "in feel" to the difference between Winyl and foobar2000. I'll try a Linux machine at a later time and update my post.
I can kind of sum it up this way.
The differences between
1. foobar2000 and Winyl
2. Apogee Groove and Geek Out
3. Supra cable and stock cable
are all relatively similar-feeling phenomena.
In each pair, the latter sounds more in-your-face and slightly forced, while the former sounds a tad too unforced. I think that unforced quality comes from Windows. I can hear changes in multiple areas, but it doesn't really translate into "better". I am running off my Surface Book, and I know for sure that Windows schedulers are not well optimized for audio, so it's hard to analyse anything of this sort on Windows. I will check on Linux and Mac soon. I'll also try the same with a DAW, which might force Windows to prioritize audio (since it will capture a lot of memory for its processes); that may be a much better ground truth. Hopefully I'll get a better clue. The underlying cause of the Groove vs. Geek Out difference is something else, though; the other two may be scheduler artefacts or channel artefacts.
Please do try the Winyl vs. foobar2000 comparison (both in ASIO). It's super audible on almost any gear; I'm just confused about which one is right. I still have to think more about the USB difference. If there were an error on the stock cable, it shouldn't sound the same every time - errors are random, not constant. There are also ways this might be masked in the later stages by the DAC's filters!
So, here is some technical analysis of the same.
USB has four pins - power, ground, data+ and data- - so data is sent as a differential pair. Issues can somewhat safely be traced back to "timing" and transistor pull-up / impedance matching. Basically, all digital signals are still analog waveforms with an eye pattern; they are just discretized in terms of usable states. They still need to be sampled by transistors at the PHY layer, and there is the concept of impedance matching to hit the optimal point in the load curve for those transistors. 90 ohms (differential) is the recommended value, IIRC.
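To make the "digital is still analog" point concrete, here's a minimal sketch (my own illustration, nothing from the USB spec) that threshold-samples a noisy, band-limited bit stream; with enough noise or a badly timed sampling instant, the recovered bits stop matching the transmitted ones:

```python
# Illustrative only: shows why a "digital" signal is still an analog waveform
# that must be sampled against a threshold at the right instant.
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 200)             # transmitted bit stream
oversample = 32                            # waveform points per bit period

# Ideal NRZ waveform, then crude band-limiting (moving average ~ finite rise time)
wave = np.repeat(bits.astype(float) * 2 - 1, oversample)
kernel = np.ones(12) / 12
wave = np.convolve(wave, kernel, mode="same")

def recover(noise_rms, timing_offset):
    """Threshold-sample at (bit centre + timing_offset) points; count bit errors."""
    noisy = wave + rng.normal(0, noise_rms, wave.size)
    centres = np.arange(len(bits)) * oversample + oversample // 2 + timing_offset
    rx = (noisy[centres] > 0).astype(int)
    return np.count_nonzero(rx != bits)

print("clean, sampled at centre :", recover(0.05, 0), "errors")
print("noisy, sampled at centre :", recover(0.8, 0), "errors")
print("noisy, sampled near edge :", recover(0.8, 14), "errors")
```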
The biggest changes I've been able to perceive and quantify so far are 1. channel separation and 2. a sense of dryness in the bass (similar to a polarity/phase offset). It doesn't mean a wider soundstage or anything, but it feels like the sides are more well defined while the bass feels enigmatic - it works in one song and doesn't in another. I do think the channel separation has an explanation: (https://www.usb.org/sites/default/files/audio10.pdf), section 3.4 "Inter Channel Synchronization" - "It is up to the host software to synchronize the different audio streams by scheduling the correct packets at the correct moment, taking into account the internal delays of all audio functions involved". I think this is from an old USB audio spec, but the content should still hold.
I strongly suspect that USB is prone to errors - otherwise, why would the spec define such elaborate sequences for error detection and re-transmission in the other transfer modes used for file transfer?
Basically, USB has different modes for transferring data.
The normal modes used for network/file transfer support error detection and re-transmission; that's not true for audio. Audio uses isochronous transfers (where a specific timing is maintained and data is sent uniformly over time), with 125 µs per microframe and 8000 microframes per second. At 44.1 kHz that works out to 5.5125 samples per microframe, so the packet size has to alternate between 5 and 6 samples (a constant 5 would give only 40000 samples per second and a constant 6 would give 48000), with the pattern arranged so the total lands exactly on 44100. If your audio is at 48000 Hz it divides evenly (6 samples per microframe), and no such juggling is necessary.
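Here's a quick sketch of that packet-size arithmetic (my own illustration using a simple fractional accumulator; actual controllers schedule this in an implementation-specific way):

```python
# Distribute audio samples across 125 us USB microframes (8000 per second)
# using integer accumulation, and show the resulting packet sizes.
def packet_sizes(sample_rate, frames_per_second=8000):
    """Whole samples to put in each microframe over one second."""
    acc = 0
    sizes = []
    for _ in range(frames_per_second):
        acc += sample_rate                 # accumulate in sample*frame units
        n = acc // frames_per_second       # whole samples that fit this frame
        acc -= n * frames_per_second
        sizes.append(n)
    return sizes

for rate in (44100, 48000):
    sizes = packet_sizes(rate)
    print(f"{rate} Hz -> packet sizes {sorted(set(sizes))}, total {sum(sizes)}")
# 44100 Hz needs a mix of 5- and 6-sample packets; 48000 Hz is a constant 6.
```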
For asynchronous audio (USB 2.0 and later), the host polls on a 1 ms interval (125 µs microframes at high speed) with plenty of buffering (negotiated between the DAC and the host during the initial handshake), and the clock is now determined by the DAC. The DAC requests data at the rate it needs, and the host is supposed to buffer it and send it following the clock of the USB interface in the DAC. So the effect of host-side jitter has largely been removed, but now the computer needs to make sure it responds to those requests in time. It is still not error-correcting, since there isn't enough time for re-transmission. Also, USB transfer happens as serial data packets: your volume-control info and the data for both audio channels are sent together in a specific framing structure, and the XMOS (or similar) interface chip is supposed to decode the individual channels and send them to the I2S interface of the actual DAC chip. It's not clear how it copes if it receives an erroneous packet or if the stereo framing gets messed up (a bit of shuffling, a glitch or jitter is enough to create sampling artefacts and mess things up).
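To make the timing requirement concrete, here's a toy model (purely my own illustration, not how any particular driver or DAC firmware is written) of a DAC pulling samples on its own clock while the host feeds a small buffer; a host stall longer than the buffer causes an audible glitch:

```python
# Toy model of asynchronous USB audio: the DAC runs on its own clock and asks
# the host for samples every millisecond; the host-side buffer hides normal
# jitter, but a scheduler stall longer than the buffer still causes a glitch.
from collections import deque

buffer = deque([0] * 88)        # roughly 2 ms of audio pre-buffered
underruns = 0

for ms in range(1000):                        # one simulated second
    need = 45 if ms % 10 == 0 else 44         # DAC clock: averages 44100/s
    host_stalled = 500 <= ms < 503            # pretend a 3 ms scheduling stall
    if not host_stalled:
        buffer.extend([0] * need)             # host delivers what was requested
    for _ in range(need):                     # DAC plays `need` samples anyway
        if buffer:
            buffer.popleft()
        else:
            underruns += 1                    # buffer empty: sample is missing

print("samples missing because the host was late:", underruns)
```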
One may wonder: what's the big deal? You'll understand the deal once you appreciate that the essence of a DAC is its "timing". Pro interfaces have their own master clocks and clocking schemes (like Focusrite RedNet).
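For a rough sense of why timing matters so much, the textbook approximation for the noise floor imposed by sampling-clock jitter on a full-scale sine is SNR ≈ -20·log10(2π·f·tj). A quick sketch of the numbers (generic formula, not a measurement of the Groove or anything else discussed here):

```python
# Back-of-the-envelope: SNR limit imposed by RMS sampling jitter on a
# full-scale sine wave, SNR = -20*log10(2*pi*f*tj).
import math

def jitter_snr_db(signal_hz, jitter_rms_s):
    return -20 * math.log10(2 * math.pi * signal_hz * jitter_rms_s)

for tj in (1e-9, 100e-12, 10e-12):            # 1 ns, 100 ps, 10 ps RMS jitter
    print(f"{tj * 1e12:6.0f} ps jitter -> {jitter_snr_db(10_000, tj):5.1f} dB "
          "SNR limit for a 10 kHz full-scale tone")
```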
https://www.xmos.com/file/fundamentals-of-usb-audio
A few quotes I've found on online forums that I'm re-quoting (I'm not entirely sure of the validity of these quotes):
1. 90 ohms is the USB standard impedance for signal transfer. Cables, connectors, traces should all have this characteristic from what I gather from the USB spec. It is actually +- 15%, so 78 - 104 ohms is within spec, 90 being the target. As to "why" they chose that spec, I am not sure, only that once such a spec is decided upon, maintaining it allows for better signal transfer. Here, we are talking about power and data, over separate wire pairs, all housed within the same cables, so I don't think the same rules should apply as with analog audio transmission or those of SPDIF digital audio data transmission. It seems more compatible with power and noise rejection and then allowing the connected devices to control the operations.
2. The data is serialized on the wire and sent bit by bit. Loss of some bits doesn't necessarily result in corrupted audio but the information ends up being altered because of that. There is no error correction or re-transmission when an error is detected, the receiver is likely to just drop a sample and extrapolate the missing value from the surrounding known good samples. So it is at least theoretically possible for a USB cable to have an effect on SQ, I'm just surprised it doesn't take an obviously crappy cable to hear the difference.
3. There is no re-transmission in Isochronous Transfer mode, the best a receiver can do is CRC-check and drop broken sample(s) and interpolate. Seems that USB jitter is actually a problem, or at least was in the early days. Now that the problem is well understood there are solutions to deal with it. Still, less jitter to begin with is always better, I would think.
4. (Commenting on the XMOS audio fundamentals document) - Interesting read. But the isochronous, control, and interrupt mentioned are the raw USB interfaces, or building blocks. If you scroll further down, they talk about sending extra sample data (8 extra samples per second). Doesn't this indicate that something at the receiving end is doing something with those samples to ensure data accuracy? What happens with those extra samples? The fact that you're sending any extra data indicates that the receiver is not blindly passing bits to its DAC; it must be doing something with all of the incoming data before passing any of it along, otherwise the extra sample data would disrupt the audio signal.
5. Guys that keep talking about 1's and 0's.......there's gotta be more to this. Too many people are claiming they are hearing the same things/differences. Speaking of specs, I scanned through the USB 2.0 specification and I'd say 25% of the document is devoted to error detection, recovery, re-transmits, etc., and the document was written assuming the transmission media that is within the spec. From that, even being "within the spec", doesn't necessarily mean error free to me.
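To picture the "drop and interpolate" behaviour mentioned in quotes 2 and 3 above, here's a minimal sketch (my own illustration; whether and how a given USB receiver actually conceals errors is implementation-specific):

```python
# If one sample arrives corrupted (e.g. fails a CRC), a receiver *could* just
# replace it with the average of its neighbours; illustration only.
samples = [0, 120, 250, 370, 480, 560, 610]    # pretend PCM samples
bad_index = 3                                   # say sample 3 was corrupted

concealed = samples.copy()
concealed[bad_index] = (samples[bad_index - 1] + samples[bad_index + 1]) // 2

print("original :", samples)
print("concealed:", concealed)   # sample 3 becomes (250 + 480) // 2 = 365
```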
Last but not least, I searched around and I've seen people online independently describe the same perceived change, so there must be more to this. https://www.hifisystemcomponents.com/forum/usb-cable-burnin_topic1613.html
I think that if I get access to an XMOS (or other) USB interface that mimics the DAC function, but with the DAC replaced by a logger that stores the I2S stream in memory, it would be possible to test what is actually happening.
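If I ever get that capture working, the analysis itself would be simple. A sketch of the kind of bit-exactness check I have in mind (the file names and capture format here are hypothetical):

```python
# Hypothetical check: compare a source WAV against samples logged off the
# I2S bus; any mismatch means the USB/interface chain was not bit-perfect.
import wave
import numpy as np

def read_pcm16(path):
    with wave.open(path, "rb") as w:
        frames = w.readframes(w.getnframes())
    return np.frombuffer(frames, dtype=np.int16)

source   = read_pcm16("source.wav")         # the file that was played
captured = read_pcm16("i2s_capture.wav")    # samples logged from the I2S bus

n = min(len(source), len(captured))
diff = np.count_nonzero(source[:n] != captured[:n])
print(f"compared {n} samples, {diff} mismatches "
      f"({100.0 * diff / n:.6f}% of samples differ)")
```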
Thanks and Regards,
Manuel Jenkin.