sound differences in DIGITAL cables
Jun 17, 2009 at 8:51 AM Post #61 of 109
Quote:

Originally Posted by dvw
Fortunately, we are talking about something tangible here. Cable induced jitter surely is measurable. Is there any meaningful jitter generated by 1 meter of cable?


Don't know about the «meaningfulness», but a cable length below 1 m reportedly generates higher jitter than one of 1.5 m – due to line reflections.
.
 
Jun 17, 2009 at 8:00 PM Post #62 of 109
Quote:

Originally Posted by Currawong
Link?


http://www.head-fi.org/forums/f7/lav...23/index2.html

Quote:

Originally Posted by JaZZ
Don't know about the «meaningfulness», but a cable length below 1 m reportedly generates higher jitter than one of 1.5 m – due to line reflections.
.



Nope! Don't know where you got that one from but the basic rule is shorter is better.

G
 
Jun 17, 2009 at 8:13 PM Post #63 of 109
Quote:

Originally Posted by gregorio
http://www.head-fi.org/forums/f7/lav...23/index2.html



Nope! Don't know where you got that one from but the basic rule is shorter is better.



I just read that lavry post. it correlates pretty much 100% with my experience in spdif, as well.

but I disagree a bit here, when he says:


RCA is a single-ended signal (unbalanced), thus no common mode rejection. There is typically no transformer isolation,


if you use coax, you get no common-mode rejection, since you have more shield (one wire) exposed than the center wire; therefore you can't get good 'even' (the key) cancellation of common-mode noise.

however, you could run spdif consumer format (not AES/EBU which is close but not the same frame, exactly) over twisted pair, shielded or not, and if you are transformer isolated at both ends you do end up with a cost-effective balanced mode 'low voltage' arrangement. the connectors don't even matter that much in reality.

as for the distance and reflections, I do believe that 'too short' could have timing issues with data and reflections. but I'm not convinced the amplitude of the reflections compared to the actual signal is enough to cause harm in decoding data/clock from the stream. in digital, you kind of can get away with a lot of evil - up to when you can't ;) but if you're under the threshold, as long as you can get clock and data and pick the bits out, you're fine.

also, in terms of consumer spdif and transformers, in medium and all high end gear, you'll find pulse transformers as isolation. on cheap things (like my popcorn hour media streamer box) they give you coax-out but it's not transformer isolated ;( I installed a pulse trafo and the right R and C in the right places and now I'm 'back in spec' again.
 
Jun 18, 2009 at 5:34 AM Post #65 of 109
Quote:

Originally Posted by Pars
Perhaps not...

spdif
One reference.



quoting the article:


The slow rise-time has an advantage as well. Discontinuities in characteristic impedance, such as circuit board traces, funky wiring and RCA connectors will have less effect on the signal integrity or voltage "shape." Because the transitions are slow, reflections are not as high in amplitude and therefore have less effect on the jitter. However, the penalty is paid at the receiver chip where the slow edge causes uncertainty of when the transition actually took place. Jitter is created by the receiver chip when it inaccurately senses the slow transition.


uncertainty on a SLOW moving wave? i.e., if the slope is gradual, it sounds like a threshold-compare would work _well_ there.

I don't follow that reasoning. anyone else follow it?

I do agree that impedance is almost irrelevant as the receiver doesn't really care about how 'pretty' the wave is. it really doesn't care as much as 'audiophiles' do ;)


if you are designing microwave rf and it's transmitting, characteristic z matters. for very very weak signals on the receive end, it matters.

for a half volt over short haul (same house) distance, no, it does not matter.
 
Jun 18, 2009 at 2:07 PM Post #66 of 109
That section of the article is discussing what manufacturers do in order to pass FCC Part 15 (IIRC) standards WRT emissions. It is not recommended for best performance, as he goes on to explain. Jocko has lots of stuff to say about SPDIF and digital cables on both diyaudio and diyhifi, and he is an RF engineer. The clock on an SPDIF signal is running at ~2.8 MHz, so it is RF. And basically analog, as the timing is what is of interest, not the levels. Crappy design (embedded clock) but that's what we're stuck with unless you use something like a Tent-link.
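As a sanity check on the ~2.8 MHz figure, the S/PDIF bit rate at CD sample rates follows from the standard frame layout (a sketch; the frame numbers come from the S/PDIF format itself, not from this thread):

```python
# S/PDIF bit-rate sketch at CD rates (standard frame layout assumed).
sample_rate = 44_100      # Hz, CD audio
bits_per_subframe = 32    # one audio sample plus preamble/status bits
subframes_per_frame = 2   # left + right channel

bit_rate = sample_rate * bits_per_subframe * subframes_per_frame
print(bit_rate)  # 2822400 bits/s, i.e. ~2.8 Mbit/s on the wire
# Biphase-mark coding can add a transition every half bit cell, so the
# highest fundamental on the line is roughly twice this figure.
```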
 
Jun 18, 2009 at 2:11 PM Post #67 of 109
2.8 MHz is very very slow. no special 'lead dress' is needed for such low frequencies.

I still reject the transmission line concept since digital 'does not care' like analog would.

waveforms CAN be quite bad and still digital receivers lock on.

tempest in a teacup, at least this day and age (not 15 yrs ago but I do believe it's a non-issue today).
 
Jun 18, 2009 at 6:28 PM Post #68 of 109
Quote:

Originally Posted by linuxworks
I just read that lavry post. it correlates pretty much 100% with my experience in spdif, as well.

but I disagree a bit here, when he says:


RCA is a single-ended signal (unbalanced), thus no common mode rejection. There is typically no transformer isolation,


if you use coax...

...as for the distance and reflections, I do believe that 'too short' could have timing issues with data and reflections.... ...in digital, you kind of can get away with a lot of evil....



You said:

“however, you could run spdif consumer format (not AES/EBU which is close but not the same frame, exactly) over twisted pair, shielded or not, and if you are transformer isolated at both ends you do end up with a cost-effective balanced mode 'low voltage' arrangement. the connectors don't even matter that much in reality.”

Cost effective, maybe. But for sure poor performance relative to a true balanced system. The 2-conductor coax, with a transformer at each end, is not a very well balanced scheme. The outer conductor (shield) and the inner wire are not symmetrical with respect to outside interference.

The XLR three wire system has 2 inner wires occupying nearly the same space, thus picking up nearly the same interference. And on top of it, there is a shield surrounding BOTH.

Also, an XLR receiver circuit is a BALANCED RECEIVER. Whatever common mode pickup signal does come through, however small, gets removed by the balanced receiver. The circuit subtracts the voltage on wire A from wire B. If both wires picked up something due to imperfect shielding, as long as the noise voltage is the same on both wires, subtracting will take care of it. That is an additional improvement.

Also, you said:

“as for the distance and reflections, I do believe that 'too short' could have timing issues with data and reflections. but I'm not convinced the amplitude of the reflections compared to the actual signal is enough to cause harm in decoding data/clock from the stream. in digital, you kind of can get away with a lot of evil - up to when you can't ;) but if you're under the threshold, as long as you can get clock and data and pick the bits out, you're fine.”

I do not know why you chose to believe that too short may have timing issues. This subject is not about believing, likes, dislikes or personal opinions, it is about facts dictated to us by Mother Nature. The fact is: the shorter the wire the LESS timing issues. The problem is with too long, not with too short.

Much of digital is indeed about going over and under a threshold, such as storing music on a CD, internet communication of data files and much more. But digital audio is not just about going over and under a threshold. The TIMING of going over and under a threshold is also very important. The whole idea here is to keep the same time interval between samples. Think of a movie projector. Yes, it has to project each frame correctly, but the rate of frames has to be correct and steady.

Now, digital is a concept. The signal is 0 or 1. But in the real world, it takes time to move between 0 and 1 (or 1 and 0). In fact, digital signals are really analog in the real world. You magnify the picture, and you see what I am saying. In fact, one deals with running currents, charging capacitors, dealing with inductance, circuit resistance, complex semiconductor characteristics… The threshold takes effect on a signal between 0 and 1, on the slope we call rise time or fall time. You change the rise time, the DC or the threshold itself, and the detection time is affected. Thus you are at the mercy of many variables, from temperature effects to ground loop currents, from interference to termination tolerance at the end of a long cable.
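A minimal numeric sketch of that mechanism (the rise time, swing and disturbance values below are illustrative assumptions, not measurements): a voltage disturbance riding on a finite-slope edge shifts the threshold-crossing time by roughly the disturbance divided by the slew rate.

```python
# Illustrative numbers only: how amplitude error on a slow edge becomes jitter.
rise_time = 25e-9     # s, assumed 0 V -> 1 V transition time
swing = 1.0           # V, logic swing
slew_rate = swing / rise_time     # V/s, average slope through the threshold
disturbance = 0.05    # V, assumed reflection/noise present at the threshold

jitter = disturbance / slew_rate  # s by which the crossing point moves
print(jitter)  # 1.25e-09 -> a 50 mV disturbance shifts the edge by 1.25 ns
```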

The amplitude of reflection in a well terminated system may be small, but the reflections are not in sync with the data transmission, they happen at a different rate (depending on the cable length and to a lesser degree on the insulating material properties – dielectric constant). Such interference does impact the “detection time” of a digital transition.
And note that when the cable is short, the reflection amplitude is small. So short is good. Such is always the case.

It just so happens that the ear is very sensitive, and it takes very little timing error to become a sonic distortion.

Regards
Dan Lavry
 
Jun 18, 2009 at 6:58 PM Post #69 of 109
Quote:

Originally Posted by Dan Lavry
You said:

“however, you could run spdif consumer format (not AES/EBU which is close but not the same frame, exactly) over twisted pair, shielded or not, and if you are transformer isolated at both ends you do end up with a cost-effective balanced mode 'low voltage' arrangement. the connectors don't even matter that much in reality.”

Cost effective, maybe. But for sure poor performance relative to a true balanced system. The 2-conductor coax, with a transformer at each end, is not a very well balanced scheme. The outer conductor (shield) and the inner wire are not symmetrical with respect to outside interference.



I was referring to 2 internal wires, twisted, AND one shield.

in that case, it is fully balanced. both wires inside are 'equal' and the shield helps a little but does not carry signal current at all.

Quote:

The XLR three wire system has 2 inner wires occupying nearly the same space, thus picking up nearly the same interference. And on top of it, there is a shield surrounding BOTH.


yes, I just said that, in effect ;)
I don't care as much about the connector; I could do the same with a db9 and get the same level of balanced operation.

Quote:

Also, an XLR receiver


you mean a balanced receiver circuit. xlr is just a connector, as I'm fully sure you know ;)


Quote:

circuit is a BALANCED RECEIVER. Whatever common mode pickup signal does come through, however small, gets removed by the balanced receiver.


in fact, even JUST having a trafo there is enough to get you the benefit of a balanced receiver.

Quote:

Also, you said:

“as for the distance and reflections, I do believe that 'too short' could have timing issues with data and reflections. but I'm not convinced the amplitude of the reflections compared to the actual signal is enough to cause harm in decoding data/clock from the stream. in digital, you kind of can get away with a lot of evil - up to when you can't ;) but if you're under the threshold, as long as you can get clock and data and pick the bits out, you're fine.”

I do not know why you chose to believe that too short may have timing issues.


based on another paper I read (I think it was linked in this thread) that talks about time to propagate the wave to the 'far end' of the cable and back. it does take time to traverse there and back and my read of that paper suggests that some overlap could be harmful (to the pretty waveform) if the reflection bounces back and overlaps the data. I have not SEEN this, and while it sounds plausible, I'm not 100% sure it matters. that's why I used the word 'believe'. I've never HEARD a reflection 'matter', fwiw.

Quote:

Much of digital is indeed about going over and under a threshold, such as storing music on a CD, internet communication of data files and much more. But digital audio is not just about going over and under a threshold. The TIMING of going over an under a threshold is also very important. The whole idea here is to keep the same time interval between samples. Think of a movie projector. Yes, it has to project each frame correctly, but the rate of frames has to be correct and steady.


there is slop-time, as I understand it, to allow the PLLs to lock in. they usually have their own local clock and do micro adjustments (that's my understanding) and so they locally reclock the data. the accuracy is more determined by that local clock than the wire OR the electronics along the way.

it's a dac problem, not a wire problem, that's what I'm saying.

Quote:

Now, digital is a concept. The signal is 0 or 1. But in the real world, it takes time to move between 0 and 1 (or 1 and 0). In fact, digital signals are really analog in the real world.


I'm with you so far.

Quote:

You magnify the picture, and you see what I am saying. In fact, one deals with running currents, charging capacitors, dealing with inductance, circuit resistance, complex semiconductor characteristics… The threshold takes effect on a signal between 0 and 1, on the slope we call rise time or fall time. You change the rise time, the DC or the threshold itself, and the detection time is affected. Thus you are at the mercy of many variables, from temperature effects to ground loop currents, from interference to termination tolerance at the end of a long cable.


if I have a slowly moving edge (gradual slope, not a vertical-like line), that sounds EASIER to detect when a threshold is crossed than one that is moving too fast. if I slowly count to ten and tell you to say 'stop' when I hit 5, will it be easier for you to 'follow my wave' when I count slowly or quickly? that's my point.


Quote:

The amplitude of reflection in a well terminated system may be small, but the reflections are not in sync with the data transmission, they happen at a different rate (depending on the cable length and to a lesser degree on the insulating material properties – dielectric constant). Such interference does impact the “detection time” of a digital transition.


this is why I was believing the notion that 'too short' a cable could have collisions with the bit value and its reflection. I'm wondering if it's easier to sort out bits if they collide with a 'neighbor bit's reflection' than with their own reflection. that's what the paper is suggesting, I think.

Quote:

And note that when the cable is short, the reflection amplitude is small. So short is good. Such is always the case.


hang on; I thought reflection amplitude (like SWR) was more related to how 'uneven' the cable was than to its length!

remembering back from my ham radio days (lol) you had standing waves on the line if the line wasn't 50 ohm all along. if you DID have a good clean line, making it longer did NOT 'hurt' the SWR one bit. you lost power as length went on but SWR was due to imbalance of the Z and not length.

btw, thanks for replying ;)
 
Jun 18, 2009 at 7:00 PM Post #70 of 109
Quote:

Originally Posted by Dan Lavry
...Also, you said: “as for the distance and reflections, I do believe that 'too short' could have timing issues with data and reflections. but I'm not convinced the amplitude of the reflections compared to the actual signal is enough to cause harm in decoding data/clock from the stream. in digital, you kind of can get away with a lot of evil - up to when you can't ;) but if you're under the threshold, as long as you can get clock and data and pick the bits out, you're fine.”

I do not know why you chose to believe that too short may have timing issues. This subject is not about believing, likes, dislikes or personal opinions, it is about facts dictated to us by Mother Nature. The fact is: the shorter the wire the LESS timing issues. The problem is with too long, not with too short.



Well, I think his use of the term «believe» is just an honest expression of human fallibility. No one knows anything with 100% certainty, at least not in the realms of audio theories and techniques. All you can do is gain a certain conviction based on personal experience, experiments and second-hand opinions. But you'll always find contradicting opinions from other (equally knowledgeable) people which may or may not change your mind.


Quote:

The amplitude of reflection in a well terminated system may be small, but the reflections are not in sync with the data transmission, they happen at a different rate (depending on the cable length and to a lesser degree on the insulating material properties – dielectric constant). Such interference does impact the “detection time” of a digital transition.
And note that when the cable is short, the reflection amplitude is small. So short is good. Such is always the case.


Why is the reflection amplitude larger in a longer cable?


I picked some opinions that contradict yours:

"Longer cables sound better. The reflection is delayed so as not to coincide with the leading edge of the data transition. Pick a cable length with a propagation delay of about a quarter of the logic rise time and you will find out how bad a short cable can sound."

"When a transition is launched into the transmission line, it takes a period of time to propagate or transit to the other end. This propagation time is somewhat slower than the speed of light, usually around 2 nanoseconds per foot, but can be longer. When the transition reaches the end of the transmission line (in the DAC), a reflection can occur that propagates back to the driver in the transport. Small reflections can occur in even well matched systems. When the reflection reaches the driver, it can again be reflected back towards the DAC. This ping-pong effect can sustain itself for several bounces depending on the losses in the cable. It is not unusual to see 3 to 5 of these reflections before they finally decay away. So, how does this affect the jitter? When the first reflection comes back to the DAC, if the transition already in process at the receiver has not completed, the reflection voltage will superimpose itself on the transition voltage, causing the transition to shift in time. The DAC will sample the transition in this time-shifted state and there you have jitter.
If the rise-time is 25 nanoseconds and the cable length is 3 feet, then the propagation time is about 6 nanoseconds. Once the transition has arrived at the receiver, the reflection propagates back to the driver (6 nanoseconds) and then the driver reflects this back to the receiver (6 nanoseconds) = 12 nanoseconds). So, as seen at the receiver, 12 nanoseconds after the 25 nanosecond transition started, we have a reflection superimposing on the transition. This is right about the time that the receiver will try to sample the transition, right around 0 volts DC. Not good. Now if the cable had been 1.5 metres, the reflection would have arrived 18 nanoseconds after the 25 nanosecond transition started at the receiver. This is much better because the receiver has likely already sampled the transition by this time."

I'm not informed enough about the subject and electronics in general to draw my own conclusions. So what's your take on these statements?
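The quoted arithmetic can be reproduced in a few lines, using the quote's own assumed numbers (25 ns rise time, ~2 ns/ft propagation; 1.5 m is taken as roughly 4.5 ft, which is what reproduces the quote's 18 ns figure):

```python
# Reproducing the quoted example: when does the first reflection return?
rise_time_ns = 25.0   # ns, the quote's assumed transition time
ns_per_ft = 2.0       # ns/ft propagation delay, per the quote

# Round trip = receiver -> driver -> receiver, i.e. twice the one-way delay.
round_trip_ns = {ft: 2 * ft * ns_per_ft for ft in (3.0, 4.5)}
print(round_trip_ns)  # {3.0: 12.0, 4.5: 18.0}
# 3 ft: the reflection lands 12 ns into the 25 ns edge, near the sampling
# point. 4.5 ft (~1.5 m): it lands at 18 ns, likely after sampling.
```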


Quote:

It just so happened that the ear is very sensitive, and it takes very little timing error to become a sonic distortion.


That's interesting. So digital cables can cause audible signal alterations depending on the design?
.
 
Jun 18, 2009 at 7:16 PM Post #71 of 109
ah, I just thought of another bit of possibly relevant data.

my day job is computer networking. I first learned on 'thickwire' (as DEC used to call it) which is 10base5 ethernet. 10 megabits/sec with a real (!) collision domain.

because of the idea of collisions on an ethernet and the fact that there are min cable lengths in the spec, perhaps -that- is another reason why I do believe there could be something to the 'too short' theory. if you think of a reflection as similar to another ethernet station randomly starting to transmit, the collision causes a data error. fortunately, in ethernet they both detect this (and there was a JAM signal to extend it, too) and they both back off a random (diff) time and one gets to 'win' - in realtime spdif you have no such luxuries ;) so you -have- to ensure that no reflection would be 'loud' enough to interfere with the decoding of that bit at the receiver side.

are there any tests (anyone know?) that show that when a transmitter sent data, a receiver mistook the bit? surely those that say 'cables matter' would have demonstrated this with some kind of test config and even error injectors?

anyway, in the old 'hub' based ethernet days, collisions did happen and having too short a cable -was- a violation of the spec. too short from end station to repeater hub and too short from hub to hub or hub to bridge and so on.

spdif and ethernet share a lot of similar concepts, if you look at it. datacomm is datacomm, at the phy level, for quite a lot of schools ;)
 
Jun 18, 2009 at 7:27 PM Post #72 of 109
Quote:

Originally Posted by Dan Lavry
The amplitude of reflection in a well terminated system may be small, but the reflections are not in sync with the data transmission, they happen at a different rate (depending on the cable length and to a lesser degree on the insulating material properties – dielectric constant). Such interference does impact the “detection time” of a digital transition.
And note that when the cable is short, the reflection amplitude is small. So short is good. Such is always the case.



I'm trying to understand that last part. if a reflection is the wave traveling forward, hitting the remote end point and then a portion returning - then isn't 'longer' going to give a progressively weaker reflection?

I'm also trying to remember from my old ham radio days (30+ yrs ago, literally) that even if we got the swr down to a reasonable level, we would still like to have our transmission lines be in multiples of 1/2 wavelength (or something like that). I wonder why no one talks about 'magic lengths' of cable, then? ;)


or maybe I just invented a new boutique market? lol! for 44.1, use this length or multiple. for 48k, use this. for 96, use this.

'god knows' what to use for dd5.1 or dts.

lol?
 
Jun 18, 2009 at 10:22 PM Post #73 of 109
Quote:

Originally Posted by linuxworks
I'm trying to understand that last part. if a reflection is the wave traveling forward, hitting the remote end point and then a portion returning - then isn't 'longer' going to give a progressively weaker reflection?

I'm also trying to remember from my old ham radio days (30+ yrs ago, literally) that even if we got the swr down to a reasonable level, we would still like to have our transmission lines be in multiples of 1/2 wavelength (or something like that). I wonder why no one talks about 'magic lengths' of cable, then? ;)


or maybe I just invented a new boutique market? lol! for 44.1, use this length or multiple. for 48k, use this. for 96, use this.

'god knows' what to use for dd5.1 or dts.

lol?



Hi again,

You said a lot of things and I do not have time to answer them all. I think you missed some of my points. Separate the format from the hardware. Then XLR is a 3-pin connector, so it can accommodate 2 wires and a shield.

An RCA cannot accommodate 2 wires and a shield. Also, yes, a transformer will, to a large extent, convert a single-ended signal to a balanced signal, but it is not perfect, especially when the wire itself is not balanced. So the differential receiver is an ADDITIONAL measure. In fact, the AES standard calls for one transformer, but AES/EBU calls for 2 transformers (one at each end), and on top of it there is a BALANCED DIFFERENTIAL RECEIVER. Also, note that a shield should be connected only at the driver end, not at the receiver end.

Regarding reflections:

We are talking about digital signals. So let's just take a step, say 0V to 1V. When the driver is “looking” at the cable, it does not see the end, only the input side. The driver “sees” an impedance. Initially the cable looks like a resistor, but impedance is not a resistor, you cannot measure it with a DC ohmmeter. (The impedance is a characteristic that is due to cable capacitance and cable inductance.) So a sudden rise of a 1V step divided by 100 Ohms means we are sending a 10mA wave into the cable.

That wave front propagates at around 1/3 to 2/3 the speed of light. The speed is determined by one over the square root of the dielectric constant of the insulating material (such as polypropylene or similar). Given that most of the used materials have a dielectric constant range of 3-6, say we take 4, then the square root of 4 is 2 and the wave speed is around half the speed of light. Just figure around 1.5 nanoseconds per foot (a pretty close value for most cables).
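A sketch of that speed arithmetic (the dielectric constant of 2.25 for solid polyethylene is my assumption, not a figure from the post): the wave speed is the speed of light divided by the square root of the dielectric constant. For what it's worth, a value of 4 works out nearer 2 ns/ft, while common coax dielectrics around 2.25 land on the ~1.5 ns/ft rule of thumb.

```python
import math

C = 3.0e8      # m/s, speed of light (rounded)
FOOT = 0.3048  # m

def ns_per_foot(eps_r):
    """Propagation delay per foot for a cable with dielectric constant eps_r."""
    velocity = C / math.sqrt(eps_r)  # wave speed inside the cable
    return FOOT / velocity * 1e9     # delay in nanoseconds per foot

print(round(ns_per_foot(2.25), 2))  # 1.52 -> the common ~1.5 ns/ft figure
print(round(ns_per_foot(4.0), 2))   # 2.03 -> the post's eps_r = 4 example
```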

Now, the 10mA current wave front is “moving” towards the cable end. There are 3 cases here:

1. The cable end is terminated by 100 Ohms (same value as the cable impedance). Given that Ohm's law always applies, when the current wave gets to the termination, the voltage developed on the termination is 1V, because 10mA times 100 Ohms equals 1V. Sounds simple enough until you read points 2 and 3:

2. Say the termination is 200 Ohms. Now we have 10mA times 200 Ohms, and the voltage step is too large. In fact, say the cable is open (no termination). In such an extreme case, the step will double to 2V. That is not a stable condition, because there is now 2V at the end and only 1V at the beginning of the cable. So a current wave is now going backwards. What will happen when that wave gets back to the driver? It depends on what termination (source impedance) it will see. In most cases the source impedance is much lower than the cable impedance and that will cause an inverted reflection… Such as in point 3:

3. The case where the termination is lower than the cable impedance: The 10mA wave sees a lower resistor, say 80 Ohms. The voltage step is too low. Forget 80 Ohms, let's go for broke - a shorted end. In such a case, that 10mA current step must go back. The voltage step is 100% reflected… The driver will not "know" that the end of the cable is shorted until twice the propagation delay of the cable. It sends a step, and all looks fine until the wave goes all the way to the end and all the way back. The electrons are not any smarter than we are, they must get there to see what is going on :)

The original first step does not lose amplitude until it gets to the end. At the end there is a reflection coefficient which can be calculated by (ZL-Z0)/(ZL+Z0). ZL is the load (termination), and Z0 is the cable impedance. For example, a 100 Ohm cable and 80 Ohm termination yield (80-100)/(80+100) = -.11, thus a negative reflection of 11%. When the termination is higher than the cable, the reflection coefficient is positive.

Of course, the first reflection, be it positive or negative, eventually gets back to the source, and that same formula applies. You now have a reflection of a reflection. Typically, the reflection at the source is close to -1 because the source impedance is low. Say a 10 Ohm source and 100 Ohm cable, then you have (10-100)/(10+100) = -82% reflection. The second reflection is now traveling to the end, where it will make for a reduced third reflection and so on…
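Both worked examples above come from the same formula; a one-function sketch (the impedance values are the post's own numbers):

```python
def reflection_coefficient(z_load, z0):
    """Gamma = (ZL - Z0) / (ZL + Z0) at a termination of impedance z_load."""
    return (z_load - z0) / (z_load + z0)

z0 = 100.0  # ohm cable, as in the post
print(round(reflection_coefficient(80.0, z0), 3))  # -0.111: ~-11% at an 80 ohm load
print(round(reflection_coefficient(10.0, z0), 3))  # -0.818: ~-82% at a 10 ohm source
print(reflection_coefficient(0.0, z0))             # -1.0: shorted end, full inversion
```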

Note that the round trip time is twice the cable propagation time (thus length). For example, at 100 feet length and 1.5nsec per foot, the reflections are spaced by 300nsec. That rate has nothing to do with sample rate or anything other than the cable.

It is most important to realize that we are talking about a step. This is the opposite case to DC. At DC, there are no reflections and cable impedance is a non-issue. Let's raise the voltage very slowly from 0V to 1V, say over 1 minute. Is that almost DC? Yes, we do not worry about reflections. But what about, say, a 10nsec “sudden step”? Well, the rule of thumb is to pay attention to reflections when the rise time of the step is shorter than the cable propagation time (thus length).

An audio signal, say 1V at 20KHz, has a reasonably slow slope. The whole cycle is 50usec. Say a 10usec rise, which is still very slow with respect to, say, 100 feet (only 300nsec).

The alternative way to look at it all is with standing waves. A distance much shorter than the wavelength is different from a distance that contains many wavelengths. I prefer the reflection coefficient view for digital analysis.

The point is: When dealing with slower signals such as analog audio, we do not think about reflections, so we do not terminate our analog cables at 75 Ohms or so. Typically we terminate at 10KOhms or higher. There are no cables with impedance that high; in fact there is no 300 Ohm cable around, they all range in the 30-180 Ohms and that is at the extreme cases…

But for faster changing signals, such as 10nsec digital audio steps, when the cable length is long enough, we need to terminate properly to avoid reflections. At 1 foot, we have a 3nsec round trip which is still shorter than the 10nsec rise. At 10 feet, we have a 30nsec round trip which is 3 times longer than the rise time, thus reflections count and termination matters.
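That rule of thumb, reflections matter once the round trip exceeds the edge rise time, can be sketched as follows (the 1.5 ns/ft figure is the post's own assumption):

```python
def reflections_matter(length_ft, rise_time_ns, ns_per_ft=1.5):
    """Rule of thumb from the post: compare round-trip delay with edge rise time."""
    round_trip_ns = 2 * length_ft * ns_per_ft
    return round_trip_ns > rise_time_ns

print(reflections_matter(1, 10))    # False: 3 ns round trip < 10 ns rise
print(reflections_matter(10, 10))   # True: 30 ns round trip > 10 ns rise
```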

I cannot write such long posts often; I hope it helped. A short cable is the way to go. Don't let anyone tell you otherwise.

Regards
Dan Lavry
Lavry Engineering
 
Jun 19, 2009 at 4:03 AM Post #75 of 109
Quote:

Originally Posted by Dan Lavry
<snip> Short cable is the way to go. Don’t let anyone tell you otherwise.

Regards
Dan Lavry
Lavry Engineering



Thanks for taking the time for in-depth responses. As for cable length, I know Jocko would disagree, so I guess there are differing opinions on that topic.
 
