Quote:
Mischa, Kiteki, and Ben, if I may, I'd like to approach the subject from a neuroscience perspective and hopefully shed some light on the subject of voodoo and frequency. First, in audio circles it has been taken almost as universal truth that we cannot hear below 20 hertz, hence the 20 Hz-20 kHz limit. In fact, we know that at 20 hertz or below it is more of a feel than a sound. This has misled many to conclude that whatever we cannot hear has no use to us, or that it doesn't matter since we cannot process it. However, it has been shown time and again that the human brain can process and discern frequencies down to 0.1 hertz, and that a change from 0.1 hertz to 0.2 hertz can activate or evoke a different EEG pattern, brain response, or emotional feel in a person. This has been published frequently in work from the field of EEG biofeedback, or neurofeedback. In neurofeedback, if a person is trained to recognize the feel of 0.1 hertz and is then asked to respond at 0.2 hertz, the change is immediately recognizable to the trained person. Our brain is that sensitive, probably more so than any existing measuring device or scope. So I frequently read comments saying that a difference of a few hertz doesn't matter, when it has been shown that such a difference can have an impact on our emotional response and that we can recognize a difference as small as 0.1 hertz anywhere in the frequency spectrum and beyond. Furthermore, I believe this has also led to the overused concept of the placebo effect in our many arguments on the subject of cable differences, burn-in effects, and so on. The argument usually goes: since objectively we are not able to measure any "single-dimensional" differences, if a person hears a difference between two cables or two bitrates, it must be placebo. Not that the placebo effect doesn't exist, but scientifically, before we attribute something to placebo, we had better recognize the limitations of our measuring methods. And in this case science has shown that the brain can process a lot more than we can measure.
Second, an orchestra has somewhere around 50 to 80 instruments, and with that it can make an endless number of pieces without repeating itself. The human brain has over 10 trillion synaptic connections, and each inhibiting or disinhibiting firing of a synaptic connection forms a unit of the brain's signature response to an external or internal event. The combined firing of those trillions of synaptic connections forms the basis of our various emotions, knowledge, and consciousness. In fact, we have a distinct brainwave signature, or brainwave composition, for the same song played at 16/44 and at 24/96. Likewise, we can discern the difference between the same song with a high noise level and with a low noise level, even though objectively they may measure the same. That is also why the mother of monozygotic twins can tell the twins apart even though scientifically and genetically they are identical, and why we can tell the difference between frozen orange juice and fresh-squeezed orange juice even though their composition is the same. The link Kiteki referred to shows a difference at the pre-echo stage, and I believe that alone will cause the brain to notice a difference on just that one factor. But music is a complex, multidimensional event to the brain. The brain is exceptional in its ability to notice minute differences, even changes below 0.1 percent. That is why we are able to recognize abstract concepts like house sound, sound stage, body, headroom, etc.
Sorry, but that's all a load of junk. Especially the bit about our ear/brain system being more sensitive than any measuring device.
How did you make the leap from "we perceive frequencies below 20 Hz" to "placebo can't automatically be attributed to x"? No one at all is arguing that we can't perceive frequencies below 20 Hz. You can literally feel the pressure changes and vibrations - it's just that around 20 Hz is roughly where the pressure waves begin to sound like a tone. The problem is above 20 kHz - a very different situation. There is not one single study where test subjects have perceived in any way frequencies above their normal hearing range (which for the absolute best ears is perhaps in the 23 kHz range) that cannot be attributed to intermodulation distortion or other distortion artifacts in the playback equipment (e.g. the Oohashi study). If you've got evidence to the contrary, I'd love to see it - including your claim that a different brainwave signature appears when listening to high-resolution music (with the exact same mastering, matched levels at normal listening volume, not elevated, correct noise-shaped dithering, and no hardware-related artifacts differing between the two sample rates and bit depths).
The problem isn't just automatically attributing audible differences to placebo - it's that these sorts of differences completely disappear with properly conducted blind testing. It's trivial to measure a difference between all cables, DACs, amplifiers, bit depths/sampling rates, etc. with proper test equipment. Despite that, when such devices are compared in blind testing, it has been found that in many cases people can't hear any difference at all when they don't know what they're listening to. By finding the limits of distortion, changes in frequency response, noise, etc. that can in fact be distinguished in blind testing, it is possible to infer that those limits can conservatively be applied to general situations - e.g. extrapolating that a given cable with appropriate RLC properties will almost certainly not be distinguishable from a counterpart in blind testing, despite what a listener may say after sighted listening, without actually performing the blind test in every single case. This is the basis upon which science is founded - to dismiss such thinking in general is to dismiss the scientific process.
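For what it's worth, the statistics behind a blind ABX test fit in a few lines of code. Here's a minimal Python sketch (the function name and trial counts are mine, purely illustrative) showing how likely a given score is under pure guessing:

```python
import math

def abx_p_value(correct: int, trials: int) -> float:
    """Chance of scoring at least `correct` out of `trials` ABX trials
    by pure guessing (one-sided binomial test, p = 0.5 per trial)."""
    return sum(math.comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Example: a listener gets 12 of 16 trials right.
print(f"p = {abx_p_value(12, 16):.4f}")  # ~0.038 - unlikely to be pure guessing
```

If the listener really hears a difference, the score climbs well clear of chance; if not, it hovers around 50% no matter how obvious the difference seemed in sighted listening.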
Similarly, your examples of two things being perceived as different though they measure the same objectively - flat out, either there is a difference and you're not measuring it, or you're pulling out magic as the explanation. The same song with a high noise level and a low noise level? If you can distinguish between the two, we can measure the difference. It's that simple - and measuring noise in a recording is trivial. A mother of identical twins? Is that really a serious example? Yes, they are genetically identical. No, they are not physically the same. Scars, marks, brain and body development, etc. differ between twins - again, this is trivial to measure. Frozen versus fresh-squeezed orange juice - again, not something that can't be done. Send the samples to a flavor science lab and have them analyze it - they'll be able to tell you what the difference is.
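To show just how trivial: the noise floor of a recording is nothing more than the RMS level of a noise-only passage. A rough Python sketch, assuming float samples normalized to ±1.0 (the function name is mine):

```python
import numpy as np

def noise_floor_dbfs(samples: np.ndarray) -> float:
    """RMS level of a noise-only passage, in dB relative to digital full scale."""
    rms = np.sqrt(np.mean(samples.astype(np.float64) ** 2))
    return 20.0 * np.log10(rms) if rms > 0 else -np.inf

# Example: synthetic hiss at roughly the floor of a 16-bit master.
hiss = np.random.normal(0.0, 10 ** (-90 / 20), 48000)
print(f"{noise_floor_dbfs(hiss):.1f} dBFS")  # ~ -90.0
```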
Sorry, but there's not a single thing that we can perceive about the world around us that we can't measure and quantify better than our senses can. No, measurement cannot yet simulate exactly how we perceive things in every case - but it sure can detect changes of any sort far more sensitively than our own senses can.
Example, to go with the music theme? How about a recording of a symphony orchestra, so complex and full of many different sounds. Trained listeners can identify the frequency ranges which are most prominent, what sort of instruments are playing, how many of each (with more than a few of a given instrument playing each part, this would be a rough estimate), perhaps which soloist is playing, and maybe even what particular brand/model of tympani or gong or whatever is being played. Every one of those differences could be measured, and with the right software (and samples serving as the equivalent of a listener's memory) interpreting it, you could quantify that as well. The tympani or the soloist? Their harmonics or characteristic style could be picked out and analyzed to determine what/who is playing. Etc., etc.
But what if you added some noise at -130 dB to your 24-bit recording? 130 dB down? You could measure it easily. At normal listening volumes, you could not perceive it at all. Similarly, change one of the three third-trumpet players' instruments from a Bach to a Yamaha. Let's say the difference between the two is down at -110 dB, and that's the only difference in the recording at all. No listener could ever hope to hear that difference, but it's trivial to do a null test between the two to find the exact difference in the recording. Could you nail down exactly what the difference is? If you had recordings of only the two different trumpets, almost certainly.
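Here's what that null test looks like in practice - a sketch assuming the two recordings are already sample-aligned and level-matched (real-world null tests spend most of their effort on exactly that alignment):

```python
import numpy as np

def null_residual_db(a: np.ndarray, b: np.ndarray) -> float:
    """Level of the difference between two sample-aligned recordings,
    in dB relative to the RMS of the first. Everything the two share
    cancels; only the actual difference - say, a trumpet swap buried
    at -110 dB - remains."""
    n = min(len(a), len(b))
    diff = a[:n].astype(np.float64) - b[:n].astype(np.float64)
    ref = np.sqrt(np.mean(a[:n].astype(np.float64) ** 2))
    rms = np.sqrt(np.mean(diff ** 2))
    return 20.0 * np.log10(rms / ref) if rms > 0 else -np.inf
```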
Abstract concepts? Not at all. They're merely words describing human perception of physical phenomena that can be measured and quantified.
House sound? Merely the typical frequency response, distortion characteristics, and driver positioning of a given headphone company. Extend appropriately to other equipment manufacturers.
Sound stage? At least this one's a little difficult. There are tons of specific effects that are known to have an effect on soundstage and can be measured. The big one is reverb, which could be called the sum total of the soundstage - it's the sum of all the reflections of a sound in a space, everything but the direct sound traveling through the air (and floor/walls). How rooms behave is measured and modeled every single day as part of modern acoustic design. Another effect is that of our ears' pinnae and other physical structures on the perception of sound, including the time delay between the ears, frequency response (particularly ear canal resonance), phase response, etc. No, we can't identify every single aspect, and certainly not at will with ease, but we can measure the exact differences between recordings (or between two different headphones on a dummy head, etc.) and use null testing to show us the sum total of differences between the two. Are you familiar with the Smyth SVS Realizer? You should look it up - you'd be amazed at what modern acoustic modeling can do in a commercially available product.
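One of those cues, the interaural time delay, is easy to pull straight out of a recording. A rough sketch (the function name and the click test are mine) that estimates the delay between two channels by cross-correlation:

```python
import numpy as np

def interaural_delay_ms(left: np.ndarray, right: np.ndarray, fs: int) -> float:
    """Lag (in ms) that best aligns the two channels; positive means
    the left channel arrives later than the right."""
    corr = np.correlate(left.astype(np.float64), right.astype(np.float64), mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)
    return 1000.0 * lag / fs

# Example: a click reaching the left ear 14 samples (~0.29 ms) late,
# i.e. a source off to the right.
click = np.zeros(1024); click[100] = 1.0
late = np.zeros(1024); late[114] = 1.0
print(f"{interaural_delay_ms(late, click, 48000):.2f} ms")  # ~0.29
```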
Body? What does that term even mean? If it means a given characteristic "sound", that sound can be quantified relative to what a given listener says it is. You might say instruments of a given type sound "warm" compared to instruments of another type playing the same note. It's trivial to measure them (and other examples), find the differences in their harmonics, and identify what attributes of the harmonic balance cause that sound. But if listeners can't consistently identify what "body" or "warm" or "bright" sounds like - and among untrained listeners that's a sure thing - you can't assign an objective sound attribute to the word. That's not a flaw in the measuring; it's a flaw in the labeling.
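To make "find the differences in harmonics" concrete, here's a simplified sketch that reports each harmonic's level relative to the fundamental - nearest-bin lookup rather than proper peak interpolation, so treat it as an illustration, not an analyzer:

```python
import numpy as np

def harmonic_levels_db(signal: np.ndarray, fs: int, f0: float, n: int = 5) -> list:
    """Levels of harmonics 1..n of `f0` relative to the fundamental, in dB.
    A 'warmer' tone typically shows stronger low-order harmonics."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    mags = [spectrum[np.argmin(np.abs(freqs - k * f0))] for k in range(1, n + 1)]
    return [20.0 * np.log10(m / mags[0]) for m in mags]

# Example: a 220 Hz tone whose 2nd harmonic sits 12 dB below the fundamental.
fs, f0 = 48000, 220.0
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * f0 * t) + 0.25 * np.sin(2 * np.pi * 2 * f0 * t)
print(harmonic_levels_db(tone, fs, f0, n=2))  # ~[0.0, -12.0]
```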
Headroom? A trivial concept that actually has nothing to do with what we hear; it denotes that an amplifier has a certain maximum voltage/current output curve, which can be conveniently quantified for real-world use as the dB margin by which the amplifier can drive a given load (peak and RMS, separately) over the level the transducer is playing at. Say you're listening to headphone A at 75 dB RMS with peaks at 95 dB, and say the amplifier is capable of maintaining 110 dB RMS and 115 dB peak with that load. You have 35 dB of RMS headroom and 20 dB of peak headroom. Another way headroom is defined is the peak level the amplifier is capable of minus the RMS level in decibels the amplifier is capable of. That is, a power amplifier capable of putting out 100 W RMS into 8 ohms but capable of 200 W peak would have 3 dB of headroom.
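The arithmetic, spelled out - both definitions in a few lines of Python, using the values from the examples above:

```python
import math

def headroom_db(capable_db: float, program_db: float) -> float:
    """Headroom as the dB margin between what the amp can deliver
    into the load and what the program material demands."""
    return capable_db - program_db

print(headroom_db(110, 75))  # 35 dB of RMS headroom
print(headroom_db(115, 95))  # 20 dB of peak headroom

def power_headroom_db(peak_watts: float, rms_watts: float) -> float:
    """Alternative definition: peak vs. continuous power capability."""
    return 10.0 * math.log10(peak_watts / rms_watts)

print(f"{power_headroom_db(200, 100):.1f} dB")  # ~3.0 dB
```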