Head-Fi.org › Forums › Equipment Forums › Sound Science › What parameters should be measured to quantify differences between cable acoustic properties.

What parameters should be measured to quantify differences between cable acoustic properties.

post #1 of 37
Thread Starter 
Let's say we have two analogue RCA cables, an el cheapo and a platinum special, and we wish to measure which is better.

We play an identical sound or piece of music on the same reference system in an anechoic chamber and switch the cables over.

Then we compare signals recorded through a microphone at a sensible listening position.

There are infinitely many parameters that could be measured to compare the small differences between the two signals, but only relatively few would be relevant within the boundaries of perception of the human ear.

Which of these parameters should be measured to quantify differences between cable acoustic properties and what is the limit of human perception for these quantities?

Even if just a few significant measurable parameters could be identified, it would be useful.

This would have implications for all arguments about differences between cables (and other equipment) as differences could be measured and quantified rather than be left to subjective opinion.

Please excuse my ignorance if this has already been done.
post #2 of 37
this should prove interesting

post #3 of 37
I have more trust in the human ear (my own) than in electrical measurements, but this may be interesting anyway.

post #4 of 37
For digital cables: jitter (at the DAC output). I don't know what amount is audible, but in any case, cables often cause no measurable jitter at the DAC output.

For interconnects: background noise (especially at 50/60 Hz) and frequency response (the amplifier output should be much cleaner than a microphone's; measuring the cable output directly can strongly affect its background noise). I think that background noise at 50 Hz is not audible below -70 dB. Measurements rather give -85 to -115 dB.
Frequency-response audibility is given by this chart: ABX Amplitude vs. Frequency Matching Criteria
Interconnects are way below this threshold. Their maximum deviation is around 0.02 dB.
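As a sketch of how such a frequency-response comparison between two captured signals could be automated (NumPy, the Hann-window/FFT approach and the threshold values are my assumptions for illustration, not anything from the post):

```python
import numpy as np

def fr_deviation_db(ref, test, fs, fmin=20.0, fmax=20000.0):
    """Maximum magnitude-response deviation (dB) between two
    time-aligned recordings, over the audible band."""
    n = min(len(ref), len(test))
    w = np.hanning(n)
    R = np.abs(np.fft.rfft(ref[:n] * w))
    T = np.abs(np.fft.rfft(test[:n] * w))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Only compare bins that actually carry signal energy.
    band = (freqs >= fmin) & (freqs <= fmax) & (R > R.max() * 1e-3)
    return np.max(np.abs(20 * np.log10(T[band] / R[band])))

# Synthetic check: a copy that is uniformly 0.02 dB louder should
# come back as a 0.02 dB deviation.
fs = 48000
t = np.arange(fs) / fs
ref = np.sin(2 * np.pi * 1000 * t)
test = ref * 10 ** (0.02 / 20)
print(round(fr_deviation_db(ref, test, fs), 3))  # 0.02
```

The returned figure could then be held against an audibility chart like the one linked above.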

For speaker cables, frequency response too (comparing the cable input to the cable output). It depends strongly on the speaker. In practice, only electrostatic speakers with impedance dipping down to 1 ohm cause audible frequency-response problems from the cable.
Cables causing amplifier clipping are a problem on the amplifier's side.

In conclusion, there is no measurement that can be correlated with the sound of cables. However, you may notice that the audibility thresholds are defined with blind testing, while cables sound the same in blind testing.
So if you want to assess the "non-blind" audibility of cables, you have to rely on the "non-blind" audibility of technical parameters. Then you end up with the same mix-up between people who claim to hear 0.000000001% of THD as clear as night and day and people who claim that 5% is not audible in music, and all you have done is complicate the debate even more, adding another topic for endless discussion on top of cables: the audibility of distortion.
post #5 of 37
Thread Starter 
Quote:
Originally Posted by krmathis View Post
I have more trust in the human ear (my own) than electric measurement, but this may be interesting anyway.

If your ear can hear a difference, then the two recordings made MUST be different.

In terms of analysis of the two recorded signals, what parameters (and what magnitude of difference) are you hearing that give you the perception of a different sound? PS: I changed my opening post to cover analogue signals only, to narrow the discussion.
post #6 of 37
Thread Starter 
Quote:
Originally Posted by Pio2001 View Post
So if you want to assess the "non-blind" audibility of cables, you have to rely on the "non-blind" audibility of technical parameters. Then you end up with the same mix-up between people who claim to hear 0.000000001% of THD as clear as night and day and people who claim that 5% is not audible in music, and all you have done is complicate the debate even more, adding another topic for endless discussion on top of cables: the audibility of distortion.
The potential of this experiment is that it can end these endless 'subjective' discussions by creating a set of measurable parameters and benchmark values.

Not sure if I should make apologies for this, but I am an engineer by trade and trust hard numbers and analysis more than subjective opinions. Once significant parameters are found, THEN people's opinions can become involved, e.g. differences in significant parameters can be studied with listeners in laboratory conditions to come up with benchmark values.
post #7 of 37
Quote:
Originally Posted by Shark_Jump View Post
There are infinite parameters that can be measured to compare the small differences between the two signals, but only a relative few would be relevant to the boundaries of perception of the human ear.
Sorry???? How do you know that?
Quote:
Originally Posted by Shark_Jump View Post
Which of these parameters should be measured to quantify differences between cable acoustic properties and what is the limit of human perception for these quantities?
At least you're asking the right question.
I've asked this question a few times, but never got an answer.
Nobody seems to know what parameters are relevant for our auditory perception (and how they affect it).
post #8 of 37
Quote:
Originally Posted by Kees View Post
Nobody seems to know what parameters are relevant for our auditory perception (and how they affect it).
It depends on what you call auditory perception.

If it is blind listening tests, everything is well known: speakers and room make up 98% of the sound, through frequency response, reverberation and distortion. The amplifier (if transistor-based, powerful enough and not clipping) and speaker cables make up 1%, through frequency response, and the CD player 1%, through aliasing or some kind of distortion or frequency response.

If it is normal listening, reduce all the above to 50%, and introduce 50% of psychology, which has no direct relation to the sound itself.

Oh, I forgot: add 50% more for the CD recording quality.
post #9 of 37
Quote:
Originally Posted by Shark_Jump View Post
What parameters should be measured to quantify differences between cable acoustic properties?
That's a question nobody can answer. There's a cable-measuring test going on in the «cables» forum, with no significant result so far -- and I haven't seen a comparison which would show any kind of (universally accepted) significance. So the verdict is easy: Cables make no audible differences. At least to those who haven't heard them.

This thread could just as well be closed -- the question has been asked multiple times before, logically with no definitive answer, but a lot of personal attacks instead.

As much as I'm interested to see any data on the subject, I have more or less given up on it. It's not that cables don't show measurable differences, but they are tiny and don't provide a hint as to which of them could cause which audible characteristic. Nevertheless, to me cables are a reliable tool for fine-tuning my system. I could just decide they don't do anything, that the differences are imagined -- but what would I gain by doing so? The effects are absolutely persistent, so it's as good as if they were real.

Of course I'm joking a bit, because I still think they are real. No failed DBT would convince me of the opposite (I've passed a blind headphone-cable test). During my speaker-builder «career» I've done a lot of non-blinded comparisons between different crossover-network tuning variants. The measuring differences with the components were minuscule (some of them within less than 1‰ -- let's say 4.782 vs. 4.778 μF). It wouldn't have been practicable to do them blinded, let alone DBT -- imagine 20 or more different tunings within 1 hour. Anyway, according to the hegemonic objectivist philosophy they have been useless. Personally I don't think so. And I suppose that most speaker builders use the same tuning method -- with success.
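For scale, a capacitor mismatch like the one quoted (4.782 vs. 4.778 μF) can be turned into a crossover-frequency shift. The first-order high-pass topology and the 8-ohm load below are hypothetical, chosen only to illustrate the order of magnitude:

```python
import math

def highpass_fc(r_ohms, c_farads):
    """Corner frequency of a first-order RC high-pass: f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

# Hypothetical 8-ohm load with the two capacitor values from the post.
f1 = highpass_fc(8.0, 4.782e-6)
f2 = highpass_fc(8.0, 4.778e-6)
shift_percent = 100 * (f2 - f1) / f1
print(round(f1, 1), round(shift_percent, 3))  # shift is about 0.084 %
```

So a sub-1‰ component change moves the corner frequency by well under a tenth of a percent, which is the kind of number the audibility debate hinges on.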
post #10 of 37
Thread Starter 
Quote:
Originally Posted by JaZZ View Post
That's a question nobody can answer. There's a cable-measuring test going on in the «cables» forum, with no significant result so far -- and I haven't seen a comparison which would show any kind of (universally accepted) significance. So the verdict is easy: Cables make no audible differences. At least to those who haven't heard them.

This thread could just as well be closed -- the question has been asked multiple times before, logically with no definitive answer, but a lot of personal attacks instead.

As much as I'm interested to see any data on the subject, I have more or less given up on it. It's not that cables don't show measurable differences, but they are tiny and don't provide a hint as to which of them could cause which audible characteristic. Nevertheless, to me cables are a reliable tool for fine-tuning my system. I could just decide they don't do anything, that the differences are imagined -- but what would I gain by doing so? The effects are absolutely persistent, so it's as good as if they were real.

Of course I'm joking a bit, because I still think they are real. No failed DBT would convince me of the opposite (I've passed a blind headphone-cable test). During my speaker-builder «career» I've done a lot of non-blinded comparisons between different crossover-network tuning variants. The measuring differences with the components were minuscule (some of them within less than 1‰ -- let's say 4.782 vs. 4.778 μF). It wouldn't have been practicable to do them blinded, let alone DBT -- imagine 20 or more different tunings within 1 hour. Anyway, according to the hegemonic objectivist philosophy they have been useless. Personally I don't think so. And I suppose that most speaker builders use the same tuning method -- with success.
I will have a look at the thread you mention. Thanks SJ

I am not asking for personal opinions or subjective beliefs (re 'personal attacks') so much as for ways to measure and compare differences in the recorded sound: frequency vs. time, frequency vs. phase, etc. It's really just a theoretical lab experiment and nothing for anyone to get too worked up about.

Sound, or frequency analysis as I might call it in this instance, can be broken into amplitude, frequency and (when comparing signals) phase. These are the physical definition of what is picked up by the ear, no more and no less.

The sorts of physical differences that could cause different perceptions of sound, among millions of others, would be noise (as mentioned before) and phase and wave-shape differences (caused by inductance/capacitance effects). We need to find the significant ones.

If key parameters could be found that affect hi-fi listening quality, the results could be used for testing other types of audio equipment.
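A sketch of how the amplitude/phase comparison described above could look in practice (NumPy assumed; the tone, level offset and phase offset are made up for illustration):

```python
import numpy as np

def spectral_differences(a, b, fs):
    """Per-frequency amplitude (dB) and phase (degrees) differences
    between two time-aligned recordings of the same signal."""
    n = min(len(a), len(b))
    w = np.hanning(n)
    A = np.fft.rfft(a[:n] * w)
    B = np.fft.rfft(b[:n] * w)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    keep = np.abs(A) > np.abs(A).max() * 1e-3  # ignore near-empty bins
    amp_db = 20 * np.log10(np.abs(B[keep]) / np.abs(A[keep]))
    phase_deg = np.degrees(np.angle(B[keep] / A[keep]))
    return freqs[keep], amp_db, phase_deg

# Synthetic example: a 1 kHz tone, and a copy that is 0.99x the level
# (about -0.087 dB) and 10 degrees behind in phase.
fs = 48000
t = np.arange(fs) / fs
a = np.sin(2 * np.pi * 1000 * t)
b = 0.99 * np.sin(2 * np.pi * 1000 * t - np.pi / 18)
f, amp_db, ph = spectral_differences(a, b, fs)
i = int(np.argmin(np.abs(f - 1000)))
print(round(amp_db[i], 2), round(ph[i], 1))  # about -0.09 dB, -10.0 deg
```

With real recordings the interesting step is then deciding which of these per-frequency differences exceed perceptual thresholds, which is exactly the open question of this thread.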
post #11 of 37
Thread Starter 
Quote:
Originally Posted by Kees View Post
Sorry???? How do you know that?

Total conjecture ;-). However, I was thinking along the lines of diminishing returns: there would be a few significant ones and infinitely more with progressively less importance.

If not, then this currently theoretical experiment is stuffed from the start! But you don't know unless you try, eh!
post #12 of 37
Good question; I've suggested something similar in the past. What I'd like to see is a setup where two recordings are made: one with an aftermarket cable and one with a control cable. Record the system being played with each, then lay the waveforms over each other.

I'm not an engineer by trade, but I find human perception utterly unreliable. Having been a criminal defense attorney, I learned that circumstantial evidence is best. That's against folk wisdom, but evidence bears it out. A fingerprint on a window doesn't forget, lie, change its mind, or become uncertain because it was drunk at the time. It doesn't tell you everything, but it is what it is without the human uncertainty.

There was an interesting episode when I took a class on evidence. During one session, a visitor briefly interrupted the class to ask where the library was. The professor told him and he wandered off. At the end of the class, the professor asked us to write down what the guy was wearing, then invited him back into the room. It had been a setup to teach us a lesson on perception and the value of eyewitnesses. There had been about 40 law students in the class, it was an unstressed environment, everyone was sober and rested, and everyone had a good look at the guy.

Not one of us remembered what the guy was wearing. He hadn't changed and came back into the room in the same outfit.

I know this isn't directly applicable to audio, but it illustrates just how unreliable humans are. Myself included - I got the outfit completely wrong, too. But something circumstantial, like a security video, would have been dead on.

It goes to show why we have to rely on measurements made with machines. That's counterintuitive and abhorrent to many, but that is the only reliable way to truly know something. If tests can be repeated by others, using different equipment, it lends more credibility.

This is why I want to see comparative recordings overlaid on each other. If something is truly going on, it will show up there. If nothing shows up, then Occam's Razor clearly points to human error.
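A minimal sketch of that overlay-and-subtract ("null") comparison, assuming NumPy and recordings that are already sample-aligned and level-matched:

```python
import numpy as np

def null_depth_db(a, b):
    """Energy of the residual (a - b) relative to a, in dB.
    Assumes the two recordings are sample-aligned and level-matched."""
    n = min(len(a), len(b))
    residual = a[:n] - b[:n]
    return 10 * np.log10(np.sum(residual ** 2) / np.sum(a[:n] ** 2))

# Synthetic example: identical signal plus noise 80 dB below the signal RMS.
rng = np.random.default_rng(0)
fs = 48000
t = np.arange(fs) / fs
a = np.sin(2 * np.pi * 440 * t)
b = a + rng.normal(0.0, np.sqrt(0.5) * 1e-4, fs)  # signal RMS is sqrt(0.5)
print(round(null_depth_db(a, b), 1))  # about -80.0
```

The deeper the null, the less the two recordings differ; in practice, getting the two recordings sample-aligned is the hard part.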
post #13 of 37
Erik: Well, psychology teaches that humans have a better memory for shocking experiences. Not playing devil's advocate, just a thought.
post #14 of 37
Quote:
Originally Posted by Uncle Erik View Post
This is why I want to see comparative recordings overlaid on each other. If something is truly going on, it will show up there. If nothing shows up, then Occam's Razor clearly points to human error.
There will always be significant differences between two recordings, even without changing the cable. 16-bit recordings can detect volume variations of less than 1/100,000, which surely occur as component temperatures drift. Electromagnetic noise from lights, computers, switches etc. will also likely be recorded. And, way above all this, without atomic clocks for both playback and recording, clock drift will lead to very high level differences, because the cancellation of the two recordings won't be properly synchronized.
Synchronizing the recordings requires asynchronous sample-rate conversion, which should affect the sound quality more than interconnects do.

The influence of the cables will be very difficult to sort out.

And in the case of no significant differences (shown by statistical analysis over many recordings, for example), it will always be blamed on the poor quality of the recording device. It is likely that people will hear the cable's influence during playback, but not in the recording.

The most direct way to investigate is to compare blind and non-blind listening: if you hear a difference, try it in a blind test. If you still hear it, fine. That would be an extremely interesting result.
If you fail the test, then you can begin the most interesting investigation of all: finding out why the test fails, comparing the listening conditions and looking for the exact parameter in the test setup that makes the difference disappear (if you don't hear it during the test) or change (you hear a different one during the test, not correlated with the cable).
Narrowing down the test conditions, you can proceed until only one alternative is left: either the test succeeds reliably, or the knowledge of the cable is the only remaining factor that produces the audible difference, working even when you think the cable is in the chain, while it isn't.
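The fixed-offset half of the synchronization problem described above can at least be sketched with a simple cross-correlation search (NumPy assumed; this handles a constant time offset only, and continuous clock drift would indeed still need asynchronous resampling):

```python
import numpy as np

def align(a, b):
    """Estimate the integer-sample offset between two recordings by
    cross-correlation, then trim both to the overlapping region."""
    corr = np.correlate(a, b, mode="full")
    lag = int(np.argmax(corr)) - (len(b) - 1)
    if lag >= 0:
        a, b = a[lag:], b
    else:
        a, b = a, b[-lag:]
    n = min(len(a), len(b))
    return a[:n], b[:n]

# Demo: b is the same noise signal delayed by 30 samples.
rng = np.random.default_rng(1)
a = rng.standard_normal(4096)
b = np.concatenate([np.zeros(30), a])[:4096]
a2, b2 = align(a, b)
print(np.array_equal(a2, b2))  # True
```

Only after this kind of alignment would subtracting the two recordings say anything about the cable rather than about the clocks.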
post #15 of 37
Thread Starter 
Quote:
Originally Posted by Pio2001 View Post
There will always be significant differences between two recordings, even without changing the cable. 16-bit recordings can detect volume variations of less than 1/100,000, which surely occur as component temperatures drift. Electromagnetic noise from lights, computers, switches etc. will also likely be recorded. And, way above all this, without atomic clocks for both playback and recording, clock drift will lead to very high level differences, because the cancellation of the two recordings won't be properly synchronized.
Synchronizing the recordings requires asynchronous sample-rate conversion, which should affect the sound quality more than interconnects do.

The influence of the cables will be very difficult to sort out.

And in the case of no significant differences (shown by statistical analysis over many recordings, for example), it will always be blamed on the poor quality of the recording device. It is likely that people will hear the cable's influence during playback, but not in the recording.

The most direct way to investigate is to compare blind and non-blind listening: if you hear a difference, try it in a blind test. If you still hear it, fine. That would be an extremely interesting result.
If you fail the test, then you can begin the most interesting investigation of all: finding out why the test fails, comparing the listening conditions and looking for the exact parameter in the test setup that makes the difference disappear (if you don't hear it during the test) or change (you hear a different one during the test, not correlated with the cable).
Narrowing down the test conditions, you can proceed until only one alternative is left: either the test succeeds reliably, or the knowledge of the cable is the only remaining factor that produces the audible difference, working even when you think the cable is in the chain, while it isn't.
Are you saying it's beyond our capability to compare two electronic signals to a level above and beyond that of the human ear? Excuse me for asking, but do you have any working knowledge of sound and frequency analysis that would lead you to this conclusion?

Generally, when you do a lab test you try to remove as many extraneous variations as you can; for things like temperature, you just need to turn things on an hour or so before the test to let the heat variations level out. A properly set-up lab test should also reduce other extraneous noise to insignificant levels, using a shielded anechoic chamber and removing ambient and electrical noise sources before the full test is done. It really is pretty straightforward stuff; any sound-engineering student worth his salt could set it up if he had the correct equipment at his disposal.

Also, any experimental variations you mention (temperature drift etc.) would be present in the real world, yet I have not heard anyone say 'my new platinum cables sound better when I take them out of the fridge'. (Don't tell me, I bet there is already a thread on this!! :-) )