Before people take my posts the wrong way, what I meant by "threshold" in the context of my post was not the auditory threshold. Well, not exactly.
What I meant is this:
If there's a specification to which a cable must conform (say, a certain LCR value at a particular frequency and input voltage, along with a bunch of other parameters), what most seem to suggest is that if a reasonably made cable adheres to these specifications under the stated conditions, the cable is deemed functional. If the specification calls for a particular tolerance, say X +/- Y microH for example, then as long as a cable stays within these limits, it is functional. Let's call this cable "A".
At the same time, what most also seem to be saying is that if another cable (let's call it "B", since I have lots of imagination) performs to within X +/- G microH, it will sound identical to cable "A" as long as G <= Y.
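The conformance logic above can be sketched in a few lines of Python. To be clear, the numeric values chosen for X, Y, and G below are invented purely for illustration and don't come from any real cable spec:

```python
# Hypothetical spec values, in microhenries (made up for illustration):
X = 1.50   # spec target inductance
Y = 0.10   # spec tolerance, i.e. cable A must land within X +/- Y

def within_spec(measured_uH, target=X, tol=Y):
    """True if a measured inductance falls inside target +/- tol."""
    return abs(measured_uH - target) <= tol

# Cable B is built to a tighter (also hypothetical) tolerance G <= Y.
G = 0.03

# Any cable B sample within X +/- G is automatically within X +/- Y,
# which is the premise behind "B will sound identical to A":
for measured in (X - G, X, X + G):
    assert within_spec(measured)
```

The point of contention in this thread is not the assertion above (tighter tolerance trivially satisfies the looser spec) but whether landing anywhere inside X +/- Y guarantees audible equivalence.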
In effect, the spec +/- Y acts like a threshold beyond which no audible difference will result at the ears of the listener; effectively, the cable spec's tolerance is assumed to sit below the human auditory threshold. Perhaps I used the same word too many times, and I apologize for any confusion.
What I question (foolishly, according to most of you guys), and it's really the essence of all that I've been blabbing about, is whether or not the difference between G and Y matters in ways that produce audible results. I propose that it does, perhaps as a result of the design choices of input/output stages, the S/PDIF implementation, and so on and so forth. A lot of people seem to be of the view that the specifications are adequate and that the differences between wires cause only inconsequential variations from the spec. Or something like that.
*Please note that I am simply clarifying what I'm saying, not dragging anything up in an attempt to troll.
It's been a few days now and no response from manufacturers. Some of you might enjoy this fact.
Freedom of speech...........
Edited by SircussMouse - 4/13/14 at 9:17am