Quote: however this is not the place to discuss this.

I would argue that, in setting up any well-run experiment, you should have an idea of the mechanisms contributing to the differences in your experimental measurements; otherwise you won't know which parameters you need to control for and which to vary. You don't have to be able to explain exactly how or how much each factor affects the results, but you should be aware of all potential influences so you can control for them in the experiment. I am not implying that you personally failed to account for all variables in your experiment; I am just responding to your comment.

For instance, let's say (for the sake of argument) that the only difference in cable performance relevant to a typical audio user is the ability to shield RFI in a certain frequency range typically encountered when the cable is routed near audio equipment (say, the particular RFI ranges generated by amplifier power supplies). If the experiment were set up without considering what could possibly contribute to observed performance in the field (once again, I am not implying that yours was), one might not specifically think to test the cable's response to the exact RFI intensity and frequency it would experience in real usage. In such a case, the experiment would not accurately reflect real-life usage conditions, and fewer conclusions could be drawn from it.

Coming from a scientific background, I know that very often the initial round of experiments does not fully control for all relevant variables. Only after discussion of an actual model to explain the results does it become apparent that some relevant variables were not fully controlled for. That is why experimentalists and theorists should not operate independently.
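To make the confounding point concrete, here is a minimal sketch of the RFI scenario above. All of the numbers (shielding figures, RFI levels) and the linear-in-dB attenuation model are made up purely for illustration; the point is only that an uncontrolled variable can scatter repeated measurements of the same cable as much as the difference between cables.

```python
import random

random.seed(0)

def measured_noise_mv(cable_shielding_db, rfi_level_mv):
    """Noise reaching the amp input: ambient RFI attenuated by the
    cable's shielding (hypothetical linear-in-dB model)."""
    return rfi_level_mv * 10 ** (-cable_shielding_db / 20)

# Two cables assumed identical except for shielding (made-up figures).
cable_a_db = 40
cable_b_db = 20

# Uncontrolled variable: RFI near the test bench drifts between trials.
def trial(shielding_db):
    rfi = random.uniform(1.0, 10.0)  # mV, varies trial to trial
    return measured_noise_mv(shielding_db, rfi)

# Without controlling RFI, repeated trials of the SAME cable scatter,
# masking (or exaggerating) the cable-to-cable difference.
a_trials = [round(trial(cable_a_db), 3) for _ in range(5)]
b_trials = [round(trial(cable_b_db), 3) for _ in range(5)]
print("cable A noise (mV), RFI uncontrolled:", a_trials)
print("cable B noise (mV), RFI uncontrolled:", b_trials)

# Controlling the variable: hold RFI fixed, and the shielding
# difference shows up cleanly and repeatably.
fixed_rfi = 5.0
print("A at fixed RFI:", round(measured_noise_mv(cable_a_db, fixed_rfi), 3))
print("B at fixed RFI:", round(measured_noise_mv(cable_b_db, fixed_rfi), 3))
```

At the fixed RFI level the two cables separate cleanly, while the uncontrolled trials overlap; this is exactly the "less can be concluded" situation described above.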