Quote:
If you were to control for those variables in constructing a DBT, how would you go about it? Maybe incorporate the switchbox at the transducer.
What's the total distortion and timing error that these variables may introduce? Are we talking less than a tenth of a dB and less than a microsecond? Earlier on you discussed evidence of fine human discernment of interaural differences but admitted that there was no evidence that this had a specific effect on audibility.
Now you're ignoring a whole different field of testing. Faux AB tests have been conducted without any actual change beside presentation and the style of instruction. I'm not speaking strictly of DBT cable tests, but of the verified body of knowledge on the power of suggestion. So on the one hand we have evidence that the human mind is very picky about how it might interpret the exact same stimulus based on a variety of "mindsets" (beliefs, priming, physiological states like fatigue). On the other hand we have DBT tests (which you appear to discount outright) that fail to show positive results past well-established thresholds.
Having been to at least one concert in my life I can attest to the fact that drugs are some of the most commonly used audio tweaks and their effects are greater than those of any cable. Unfortunately I am having trouble finding any published DBT results and will have to ask the government for a grant to fund supplies for my own study.
Griesinger did some great work measuring how humans violate the "pan pot" law. Basically, if an image is placed off center during mixdown, one would expect the entire image to sit at a single location in space, a nice tight spread as it were. David provides a plot of measured results showing that the placement of the image is frequency dependent. In other words, a frequency-dependent horizontal smearing of the image. He uses a third reference speaker to do this work.
My biggest concerns are:
1. Other than David's work, why has this effect not been discussed in any literature or in any testing?
2. It is of course speaker dependent; I don't think I need to list those variables.
3. The mixdown is system dependent. I don't think anybody believes that the monitor setup in a studio is the same as the target system.
The most important issue to me is:
4. Mixdown does not use interchannel temporal shifts, only intensity. This is not how humans evolved; we use both parameters, timing and intensity, to discern image location.
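To make point 4 concrete, here is a minimal sketch of intensity-only placement, assuming the common constant-power ("-3 dB") pan law; actual consoles vary in the exact law used, and the function name is my own. Note that the two channels differ only in gain, never in arrival time:

```python
import math

def constant_power_pan(position):
    """Left/right gains for a source at `position` in [-1, 1]
    (-1 = hard left, 0 = center, +1 = hard right).

    Constant-power pan law: the image is placed purely by
    interchannel intensity; no interchannel time shift exists.
    """
    theta = (position + 1.0) * math.pi / 4.0  # map [-1, 1] -> [0, pi/2]
    return math.cos(theta), math.sin(theta)   # (left gain, right gain)

# Centered source: equal gains, each about 0.707 (-3 dB),
# so total acoustic power stays constant as the image moves.
gl, gr = constant_power_pan(0.0)
```

A delay-based ("time-of-arrival") panner would instead offset one channel by microseconds to milliseconds, which is the cue the post argues mixdown leaves out entirely.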
Stereo is an artificial construct of sound wavefronts used to fool humans into thinking a source is at a specific location. Most testing is not designed to worry about that; rather, it just tosses a whole lot of synthetic environment at us and asks, can you hear a difference?
Griesinger at least contrasted one speaker as a source imaged correctly by a human against the synthetic field produced by two speakers. IOW, what he did was create a true reference image, then had the subject compare the virtual location of the synthetic image with a real one.
This conversion from a real image to a synthetic one requires that humans adapt to a different stimulus construct. How long does that adaptation take? Is it different for each person? Is it learnable? Does it depend on listening history?
Think of those computer-generated 3-D images. To view the image, you have to maintain focus at the plane of the paper, yet rotate the eyes as if you are looking at an object far away. You are defying the natural mechanism of "point and look", breaking the automatic connection between the angular positioning of the eyes and the accommodation (focusing) of the lens. We do it in a real environment without thinking, but can force ourselves to break the link. And we can clearly tell how long it takes to get the image, and can even practice to do it faster.
Listening to stereo is the exact same thing, but I do not believe we can control it. We do not measure it. We do not acknowledge it exists. We do not produce program material consistent with how we hear.
Given all of this uncontrolled stuff, I personally cannot embrace the claim that simple listening/comparison tests are rigorous enough for their results to be statistically robust and applicable to the general population as a predictor of outcome.
The minimum interchannel timing we can discern is 1.5 µs (measured). Within a complex soundfield, I do not know what the number is; I would expect 5 µs to be a reasonable figure.
For interchannel intensity, I've seen nothing with respect to image stability, but I certainly could not rule out 0.1 dB either.
Both variables I can only provide reasonable guesses for. Nobody I'm aware of has taken the trouble to measure.
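For a sense of scale on that 1.5 µs figure, here is a quick back-of-the-envelope conversion (my own illustration, not from the original post) showing it as a fraction of one sample period and as an equivalent acoustic path difference:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def itd_in_samples(itd_seconds, sample_rate):
    """Express an interchannel time difference as a fraction of one sample."""
    return itd_seconds * sample_rate

itd = 1.5e-6  # the 1.5 us discrimination threshold quoted above

print(itd_in_samples(itd, 44100))       # ~0.066 of a sample at CD rate
print(itd_in_samples(itd, 96000))       # ~0.144 of a sample at 96 kHz
print(SPEED_OF_SOUND * itd * 1000.0)    # ~0.51 mm of acoustic path
```

In other words, the threshold is well below one sample period at common rates, which is part of why such cues are easy for a test protocol to overlook.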
I've not ignored any field of testing. I understand expectation bias, sighted bias, wallet bias, and even bias bias.
Personally, I wanted the gov't to fund my personal audio testing, specifically the rigorous discernment of synthetic image placement with martinis as the controlled factor. They did not respond to my repeated funding proposals...go figure.
If every test being done produces a null, but every test ignores human responses, does that mean they are all valid?
I do not claim everything makes a difference; I question the validity of the tests. My basic thinking is: always question.
I understand that proponents of "everything makes a difference" could possibly use my argument to bolster their stance even if that stance is inaccurate. But should I not question because it might "aid and abet the enemy"? (Quite a few have hit me for that.)
Sorry for the length.
jnjn