Dec 21, 2010 at 8:40 PM Post #31 of 58


Quote:
... You criticise InnerSpace for throwing nonsense, then throw some about yourself...

 
 
For the record, I was pointing InnerSpace to VictoriaGuy's comment, not his.
 
I have to say Head-Fi is the only hi-fi forum I frequent (due to limited free time).
 
However, this is the cable forum; if someone asks about better cables, it's only fair that they get different options recommended. Replies like "don't bother, it's all snake oil" are not fair, and border on trolling, IMO. It might be your belief, but that doesn't necessarily make it true. I don't care whether they buy expensive cables, but having tried different cables and heard a difference myself, I feel it's only fair that they get the chance to try for themselves.
 
It's late, and I wish I had time to be less concise. I hope my point is clear.
 
Dec 21, 2010 at 9:02 PM Post #32 of 58


Quote:
... Replies like "don't bother, it's all snake oil" are not fair, and border on trolling, IMO. It might be your belief, but that doesn't necessarily make it true ...


I think that description is pretty accurate when it comes to digital cables, at least.
 
There are essentially no measurable EMI issues for short cable runs (signals only start degrading at around 11 m and beyond), and cable-induced jitter can be recovered from completely using a PLL and clock recovery on the DAC side of the cable.
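To make the jitter point concrete, here is a toy numerical sketch (not how any particular DAC implements it; the loop gain and jitter figures are made up for illustration) showing how a simple first-order PLL averages out timing jitter on incoming clock edges:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1.0                                  # nominal bit period (arbitrary units)
n = 20000
jitter = 0.05 * rng.standard_normal(n)   # assumed white timing jitter, 5% of a period
edges_in = np.arange(n) * T + jitter     # arrival times of the incoming clock edges

# First-order PLL: nudge a local clock toward each incoming edge with a small gain,
# so fast timing wobble is averaged out while the long-term rate is still tracked.
alpha = 0.01                             # loop gain (assumed; smaller = more jitter rejection)
est = 0.0
edges_out = np.empty(n)
for i, t in enumerate(edges_in):
    err = t - est                        # phase error between incoming edge and local clock
    est += alpha * err                   # correct the local clock slightly
    edges_out[i] = est                   # re-timed (recovered) edge
    est += T                             # advance by one nominal period

ideal = np.arange(n) * T
print("input jitter (rms): ", np.std(edges_in - ideal))
print("output jitter (rms):", np.std(edges_out[n // 2:] - ideal[n // 2:]))  # after lock-in
```

The re-clocked edges come out with a small fraction of the incoming jitter, which is the whole idea behind clock recovery at the receiving end.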
 
Think about this for a second: we are talking about digital audio cables. This is an extremely low-bandwidth application. Consider Gigabit Ethernet, which runs over extremely cheap wire at a data rate that is orders of magnitude higher - and even there, bit errors are detected and dealt with on the receiver side.
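A quick back-of-envelope comparison (nominal figures, ignoring framing overhead) shows how modest the data rate of consumer digital audio really is:

```python
# Rough payload bit rates, just to put "bandwidth" in perspective.
# Assumed nominal figures; S/PDIF framing/channel-status overhead is ignored.
cd_audio_bps = 44_100 * 16 * 2          # 44.1 kHz, 16-bit, stereo: ~1.4 Mbit/s
gigabit_ethernet_bps = 1_000_000_000    # 1 Gbit/s line rate

print(f"CD-quality audio payload: {cd_audio_bps / 1e6:.2f} Mbit/s")
print(f"Gigabit Ethernet is roughly {gigabit_ethernet_bps / cd_audio_bps:.0f}x that rate")
```

Ethernet does that over cheap twisted pair all day long, so a few metres of digital audio cable is not exactly being pushed to its limits.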
 
I'm not asking you not to buy 24K gold cryogenic cables wrapped in moose fur - after all, it helps the economy.
 
http://www.audioholics.com/education/cables/top-ten-signs-an-audio-cable-vendor-is-selling-you-snake-oil
 
Enjoy!
 
 
 
Dec 21, 2010 at 9:44 PM Post #33 of 58
What if you used a splitter to connect two sets of cables to your source? Then:
 
1) Blindfold yourself
 
2) Ask your friend to connect the left earcup to the left channel of cable A, and the right earcup to the right channel of cable B
 
3) Listen
 
4a) Switch cable A to the right ear, and cable B to the left ear
--or
4b) Simply unplug the cables and plug them back into the same earcups they were originally in
 
5) Listen again
 
ASK: Can you tell whether the cables were switched?
 
Repeat the experiment about 100 times, then invite some more friends over and do it several hundred more times (sample size!). Report the data.
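For anyone who actually tries this, here is a toy sketch of how you might judge the results afterwards (the 61-out-of-100 score is just a made-up example): it computes the chance of scoring at least that well by pure guessing.

```python
from math import comb

# Chance of getting at least k out of n "switched / not switched" calls right
# by guessing alone (each trial is a 50/50 coin flip).
def p_value(k: int, n: int, p: float = 0.5) -> float:
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

n, k = 100, 61   # hypothetical example: 61 correct out of 100 trials
print(f"{k}/{n} correct; probability of doing that well by guessing: {p_value(k, n):.3f}")
```

Anything hovering around 50/100 is exactly what coin-flipping produces, while a score like 61/100 (about a 1.8% chance from guessing) would already be worth taking seriously.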
 
Dec 22, 2010 at 11:06 AM Post #34 of 58

 
Quote:
Quote:
AV Review spent six months with $205,000 worth of equipment measuring 60 HDMI cables. They found that at certain lengths, certain cables would fail to transmit a signal that kept a clear 'eye' in which the 1s and 0s were clearly represented. We already know that TVs can suffer from crackles, snow and lines, and it was assumed that the failed cables would be the ones to show such problems. But they did not. Two of the lowest-performing cables had to be joined, making a 65-foot run, before such signal degradation appeared.
 
What happened was that a difference was found by measuring, but in actual use that difference made no visual difference. The failure was in going from theory to practice. You can measure all you want, but the final test is whether it is audible or visible.
 
Sighted and blind testing are indicators of whether a difference is audible. I prefer blind, as it removes other causes and leaves the cable (or whatever) to perform on its own.
 
Measuring the performance of cables with a HATS (head-and-torso simulator) in a soundproof room still needs to pass the theory-to-reality test.



It does.
 
The problem with DBT (or one of the issues) is this: if you agree that music is complex, and also agree that DBTs take time, then consider a non-peer-reviewed study I once ran on how analyzing complex data under pressure and with limited time cannot be done accurately.
 
I don't have the thread any more, but essentially a photo of a very complex painting was taken, and one of the colors was changed in several spots. Users were allowed unlimited time to say whether or not the two versions were different, and to show where they differed.
 
No one got it completely right (identifying where all the differences were). Yet once the differences were circled, they were easily spotted.
 
I believe that if someone had spent a long time looking, they would eventually have seen the differences, since the differences were visible once pointed out.
 
So DBT isn't necessarily reliable either. Personally, I would rather trust measurements taken by reliable sources and peer-reviewed than spotty DBT results and subjective testimony.
 
Dave


I like your picture analogy, but I have a different take on it. Blind testing is there to see whether humans can identify a difference or not.
 
A - If the testers easily, quickly and accurately spot a difference, then there is a big difference.
 
B - If it takes time and the accuracy level falls, then the difference is smaller.
 
C - If there is a difference, but hardly anyone gets it, then the difference is so small as to be insignificant.
 
D - If there is no difference and people fail to find a difference, then there is no difference.
 
E - If there is no difference and people report finding a difference, then the difference is no longer in the item being tested, the difference is now in the tester.
 
The latter is what I believe is the case with audiophiles and cables: cables may look different and have different constructions, but when it comes to using your ears alone, those differences vanish.
 
So with the picture analogy, the test showed C, though given a bit more time, maybe B. That does not make the test unreliable.
 
Dec 22, 2010 at 12:29 PM Post #35 of 58


Quote:
... So with the picture analogy, the test showed C, though given a bit more time, maybe B. That does not make the test unreliable.


If the result should have been B, and the test showed C, then the test is unreliable in my opinion...
 
Dave
 
Dec 22, 2010 at 12:33 PM Post #36 of 58
I think "C - If there is a difference, but hardly anyone gets it, then the difference is so small as to be insignificant." becomes a value judgement. Some folks that think it's insignificant (even statistically) prefer the change over no difference. Some of us that think there is a difference probably make it sound as if it's night and day, black and white, a game changer. It probably wouldn't be for most folks, and most folks probably wouldn't notice the difference. 
 
For example, I just installed the Khozmo 48-step attenuator in my modified DNA Sonett last night. For me, the difference is "night and day," but I suspect that my wife would probably say, "That's nice, honey." To her, it plays music at varying degrees of loudness. She did comment on the "clicking" noises the new attenuator makes.

 
Dec 22, 2010 at 12:34 PM Post #37 of 58
Again, I don't see that. The test has shown that the result is between B and C, where B can be achieved if someone is given a lot of time to try to spot the difference, but in most cases the difference is so small as to be insignificant. What is inaccurate about that?
 
EDIT - and based on the post above, if after some time and practice you can achieve B, that is fine, as there is a difference. The real issue is identifying differences where there are none.
 
Dec 22, 2010 at 6:13 PM Post #38 of 58


Quote:
... If the result should have been B, and the test showed C, then the test is unreliable in my opinion...
 
Dave



But the test result shouldn't have been B; it should have been exactly what it was, C. The fact that the results aren't what you think they should be doesn't make the test unreliable. It's easy to pick something out when you know what you are looking for, not so much when you don't. If you look at two different pictures and cannot find the differences, it means the differences are insignificant. I see nothing that makes the test unreliable, as it came to a rational conclusion.
 
Dec 22, 2010 at 6:43 PM Post #39 of 58

 
Quote:
Ah allow me to explain further. 
 
If you are not familiar with Plato's cave then allow me to paraphrase:
Three people are bound in a cave looking at shadows and cannot move their heads. One of them is freed and sees the outside "real" world. He goes back and explains to the remaining men that what they are looking at isn't real. They laugh.
 
I brought that up because whether or not one can demonstrate differences electrically or otherwise, those in the cave may truly believe in what they hear.
 
If people "hear" differences between cables, it is not illogical to suggest that there may be a neurochemical response to spending a large sum of money that indeed makes the cable sound better? That in no way implies that the cable would reveal electrical differences in the audio band or not.

 
Yes. I agree. People can imagine they hear things they do not actually hear.
 
Dec 22, 2010 at 6:50 PM Post #40 of 58


Quote:
The problem with DBT (or one of the issues) is this: if you agree that music is complex, and also agree that DBTs take time, then consider a non-peer-reviewed study I once ran on how analyzing complex data under pressure and with limited time cannot be done accurately.


Then cables cannot be evaluated in a store (time) or reviewed. No one here can make any claim about which cables sound better because no one can be accurate in a limited time.
 
Otherwise: we could just refer to any of the "take all the time you want" DBTs, or the "done for fun at a show" DBTs and find people who could identify cables / amps.
 
The fact that this isn't just rare, but completely non-existent, is pretty telling.
 
Quote:
I don't have the thread any more, but essentially a photo of a very complex painting was taken, and one of the colors was changed in several spots. Users were allowed unlimited time to say whether or not the two versions were different, and to show where they differed.
 
No one got it completely right (identifying where all the differences were). Yet once the differences were circled, they were easily spotted.

 
So if I cannot find Waldo, then I cannot tell the difference between a Matisse and a Picasso in a DBT?
 
Dec 22, 2010 at 7:30 PM Post #41 of 58
I think the mind can literally invent positive results even when nothing real has changed: placebos help, even when patients know they are taking placebos.

Blind testing by its very nature eliminates bias; that's the entire point of it. To claim that the tests are flawed because it takes time, effort, and "skill" to make comparisons just further convinces me that blind testing not only works but works well. If the differences are so small as to require that sort of effort to find and document, then I simply cannot justify spending so much money on such a negligible difference in performance. And if that difference even exists, other factors such as ambient noise, closed vs. open cans, and recording and mastering quality will have a far greater impact than cables ever can.
 
Dec 22, 2010 at 10:08 PM Post #42 of 58

Let me respond to each of you in kind, as I will still play devil's advocate:
Quote:
Again, I don't see that. The test has shown that the result is between B and C, where B can be achieved if someone is given a lot of time to try to spot the difference, but in most cases the difference is so small as to be insignificant. What is inaccurate about that?
 
EDIT - and based on the post above, if after some time and practice you can achieve B, that is fine, as there is a difference. The real issue is identifying differences where there are none.



The problem is not identifying differences where there are none; the problem we were discussing is the reliability of DBT. I agree that a large-scale DBT, run over say a year, could provide accurate results and show statistical significance. No DBT of that kind has ever been done for the audio question. The other problem is that many people interested in the study would not be able to dedicate the time and resources needed to complete it successfully. I personally would not trust a DBT without adequate sample size, statistical significance, and time.
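To put a rough number on "adequate sample size", here is a back-of-envelope sketch (normal approximation, assumed significance and power levels, made-up hit rates) of how many blind trials it takes to separate a listener who is genuinely right some fraction of the time from one who is guessing:

```python
from math import sqrt, ceil

# Approximate number of trials needed to show a true hit rate p_true is above
# chance (50%), at one-sided 5% significance with 80% power (assumed targets).
def trials_needed(p_true: float, z_alpha: float = 1.645, z_power: float = 0.84) -> int:
    num = z_alpha * sqrt(0.25) + z_power * sqrt(p_true * (1 - p_true))
    return ceil((num / (p_true - 0.5)) ** 2)

for p in (0.90, 0.75, 0.60, 0.55):
    print(f"true hit rate {p:.2f}: roughly {trials_needed(p)} trials")
```

A big, obvious difference shows up in a handful of trials; a subtle one genuinely needs hundreds, which is exactly the sample-size and time problem described above.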


Quote:
... But the test result shouldn't have been B; it should have been exactly what it was, C. The fact that the results aren't what you think they should be doesn't make the test unreliable. ... If you look at two different pictures and cannot find the differences, it means the differences are insignificant. I see nothing that makes the test unreliable, as it came to a rational conclusion.



 
The "test" was never ran? We are arguing a hypothetical situation where the answer in reality was "B" but due to physical constraints it was answered as "C". If you look at two pictures and cannot see the differences yet are physically capable, then perhaps it is only that you require training to be able to spot the differences or simply needed to put more time into the endeavor. Why would this be different with cables?
 
Quote:
Then cables cannot be evaluated in a store (time) or reviewed. No one here can make any claim about which cables sound better, because no one can be accurate in a limited time.
 
... The fact that this isn't just rare, but completely non-existent, is pretty telling.
 
... So if I cannot find Waldo, then I cannot tell the difference between a Matisse and a Picasso in a DBT?



As I said above, the issue with DBT is often time constraints. DBTs conducted within a few hours, usually with the pressure of other people around, do not provide a good environment for testing. It is much like asking someone to shoot skeet while being timed, with a gun held to their head and a threat made if they miss any shots; I doubt many great shooters, who rarely miss in practice, would manage it. To my knowledge there has never been a "take all the time you want, in your own environment" DBT conducted for this. As I said, often the listener tires of testing, or the other people required for the DBT have to leave; after all, you cannot conduct a DBT by yourself. Even then, one person is not a large enough sample. To you it may be "pretty telling"; maybe over a year someone else would find a difference, or not. I say to each his own; as for me, I try not to burden others with my own preconceptions.
 
If you cannot tell the difference between a real Picasso and the same Picasso with blotches of color added in a twenty-minute DBT, you may need an hour, or a day, or a year. It would depend on whether you are physically able to see the colors in the first place, and assuming you are, it is simply a matter of time, isn't it?
 
Dave
 
 
Dec 22, 2010 at 10:29 PM Post #43 of 58
Quote:
If you cannot tell the difference between a real Picasso and the same Picasso with blotches of color added in a twenty-minute DBT, you may need an hour, or a day, or a year. It would depend on whether you are physically able to see the colors in the first place, and assuming you are, it is simply a matter of time, isn't it?


By your own admission, the differences in sound between cables are so slight that they require literally days or even months of individual study under very tight controls that eliminate or diminish most other factors. Applying Occam's razor to your argument, we can reasonably conclude that these cables do not offer the world-changing differences purported by some in the audiophile community, or by the companies that make them.

I contend there is also a divide between the inherent cultural value of preserving priceless works of art and defending the merits of conductive metal strands. If some people can't identify differences in the artwork, the original is still worth more; if people can't identify the differences between two cables, their value is then identical.

If you wish to provide list upon list of studies that show clear differences between cables, to counter the results of the dozens of DBTs that say otherwise, please do. They may not be totally rigorous studies, but in aggregate they still have more value than none at all.
 
Dec 22, 2010 at 10:40 PM Post #44 of 58


Quote:
... By your own admission, the differences in sound between cables are so slight that they require literally days or even months of individual study under very tight controls that eliminate or diminish most other factors. Applying Occam's razor to your argument, we can reasonably conclude that these cables do not offer the world-changing differences purported by some in the audiophile community, or by the companies that make them.



Occam's razor suggests nothing here, as neither explanation makes fewer assumptions. "Cables make no difference" assumes that capacitance, dielectric propagation velocity, conductance, inductance, impedance, RFI/EMI rejection, connector stability (in a physical sense), the valence and conduction bands of intermolecular orbitals, eddy currents, and many other factors have no significant effect. "Cables make a difference" assumes the differences are there but not yet proven. In my opinion, Occam would land in the "makes a difference" camp.
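Just to put a scale on one of the parameters in that list, here is a back-of-envelope check (with assumed, typical values rather than measurements of any particular cable) of what cable capacitance does to a line-level signal; whether effects of this size matter is, of course, exactly what this thread is arguing about:

```python
import math

# Assumed typical values: a 1.5 m interconnect at ~100 pF per metre,
# driven from a 100-ohm source impedance, forms a simple RC low-pass filter.
source_impedance = 100.0                 # ohms (assumed line-level output impedance)
cable_capacitance = 100e-12 * 1.5        # farads (assumed 100 pF/m over 1.5 m)

f_3db = 1.0 / (2 * math.pi * source_impedance * cable_capacitance)
print(f"-3 dB corner: {f_3db / 1e6:.1f} MHz")          # ~10.6 MHz, far above the audio band

loss_at_20k = 10 * math.log10(1 + (20_000 / f_3db) ** 2)
print(f"attenuation at 20 kHz: {loss_at_20k:.6f} dB")   # vanishingly small with these values
```

With different assumed values (long runs, high source impedance, very capacitive cable) the numbers move, which is why the parameters themselves are worth listing even if the typical magnitudes are tiny.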
 
However, I don't think anyone is seriously suggesting that a cable will transform your sound entirely, turning iPod earbuds into an Orpheus or the like. Companies are allowed to claim whatever their opinion is, but it is not their fault if people believe everything they read. I think there is a children's book in which the babysitter takes every written instruction literally and ends up in peril all the time.
 
(EDIT: I probably shouldn't speak for everyone, but from what I have seen around the forum, many posts suggesting that cables make a difference imply that it is a slight difference and should be the last upgrade you make.)
 
I am not necessarily saying that finding differences requires days or months under strict controls, just that subtle differences in complex data may need more time than DBTs usually offer. I am also suggesting that the pressure of a DBT situation may introduce errors into the study. As I said, if there are differences, it may take an untrained listener (and I am in no way claiming that I, or anyone else, is trained for this) time in a relaxed environment to find them.
 
To say that small differences that take time to find are not differences at all, as your text somewhat implies, is false: 1.00000001 is still not 1, and claiming that it is would be false.
 
If you are looking for my own admission: I have not given one. Mine looks something like this: to my ears and mind, cables make a small but pleasant difference. I will admit the differences are small, and to some they may not be worth the investment; then again, I would recommend people upgrade everything else first and only then try cables if they want to. I am not necessarily entrenched as a cable believer for the rest of my life, but at this time, given the data from studies I have read and conducted, my current understanding of physics, signal transmission, chemistry, and math, and what I "hear", I do believe cables make a difference.
 
Dave
 
Dec 22, 2010 at 10:58 PM Post #45 of 58
I can tell the difference between the Blue Heaven cable and the free copper cable that came with my speakers; the amp went into protection, though. When I hooked up a different set of speakers to the same amp using the same Blue Heaven wires, the change was noticeable again, but something about that combination worked really well and the amp ran normally. You can argue all you want about why; the truth is that it happened. So can speaker wires make a difference? In my system, so much so that they stopped the system from functioning, which was noticeable because the music stopped!
 
