gregorio
Headphoneus Supremus
If we add pre-ringing to that recorded bell sound, we will alter its original envelope, which begins rather sharply, into one that ramps up more gradually.
I personally don't know how much that would affect how our brain interprets the location of the "real bell" - but maybe we need to find out.
Therefore, it's not unreasonable to wonder if, while your brain is identifying the pitch you're hearing by using your ear like a spectrum analyzer...
another section has already tentatively identified the approximate location of the sound based on the beginning edges of the sound envelopes.
(Which might suggest that altering the shape of those envelope leading edges might affect that part of the result.)
For example, we could ask people to locate a bunch of objects in the sound field, and rate their accuracy ("point to where it sounds like the violin is coming from").
We could then ask them to repeat the test with test samples recorded at various sample rates and see if their accuracy is the same for each - or not.
An interesting and not at all unreasonable point. However, it's an invalid point, and there are several reasons for saying so:
1. I agree that pre-ringing effectively changes the envelope, but for that pre-ringing to have any effect, it must be audible/detectable. Let's say, hypothetically, that it isn't consciously audible but is detectable in terms of the brain's interpretation of location. If the location of the bell is not where I want it to be, I (as a mix engineer) can simply change the location; if it's not where I expect it to be, I would typically investigate why. Never have I found pre-ringing to be the cause of a bell (or any other sound) not being where I expect it to be. (A sketch after this list shows what this pre-ringing looks like in isolation.)
2. Rather ironically, the 1997 Theiss study I mentioned previously ("Phantom source perception in 24 bit @ 96 kHz digital audio") set out to test exactly what you are suggesting. As I mentioned, there was a supplemental test on general perceived sound quality, performed under less formal circumstances, and it's this test which is frequently quoted by those who have a hi-res agenda. The main experiments, however, were formal DBTs designed specifically to test localisation, and they resulted in the conclusion that: "Analyses of the data showed that the hypothesis that localization accuracy improves with higher sampling rates above the professional 48kHz standard has to be rejected". (I linked to the paper above so you can read the details for yourself.)
3. As is frequently the case, the actual behaviour of sound and the practical realities of recording it are ignored. Very rarely (and pretty much never for a commercial music release) would a single violin be recorded with a single microphone placed a few inches from the instrument. However, the transients and frequency content of instruments are typically (if not always) measured and quoted this way. What an instrument actually sounds like from such close proximity is different, often vastly different, from what is expected and what would be heard by the audience. We've got absorption and reflections to consider, which result in very significantly different transients, frequency content and dynamic range from what we would measure just a few inches away. This is with actual live acoustic sound; if in addition we factor in mic response, timing differences between mics and more than one violin, we've got transients smeared all over the place - and by "all over the place" I'm talking tens of milliseconds up to seconds, not the few microseconds which can be detected with a test signal! And this is assuming any transients even still exist from a listening position in the audience; in many/most cases they won't! All this applies to any instrument/sound, even a snare drum rimshot, although with a rimshot there would typically still be a transient, just a very time-smeared and different transient from the one originally created. (Some back-of-envelope numbers after this list put these time scales side by side.)
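On point 1, the pre-ringing under discussion is easy to demonstrate in isolation. Below is a minimal Python sketch (my own illustration, not anything from the studies cited); the filter length, cutoff and -80 dB detection threshold are arbitrary assumptions. It passes an idealised one-sample click through a linear-phase low-pass FIR, whose symmetric impulse response is what produces pre-ringing, and measures how far ahead of the onset the output becomes non-negligible:

```python
import numpy as np
from scipy.signal import firwin

fs = 44100                      # sample rate in Hz (assumed)
h = firwin(511, 20000, fs=fs)   # linear-phase low-pass, cutoff near Nyquist

click = np.zeros(4096)
click[2048] = 1.0               # idealised one-sample transient (a "bell onset")

out = np.convolve(click, h)
peak = np.argmax(np.abs(out))           # the (delayed) onset in the output
first = np.argmax(np.abs(out) > 1e-4)   # first sample above roughly -80 dBFS
print(f"pre-ringing spans ~{(peak - first) / fs * 1000:.2f} ms before the onset")
```

Run as-is, it reports a pre-ring of a few milliseconds for this deliberately long, sharp filter; gentler filters pre-ring correspondingly less.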
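And to put rough numbers on the time scales in point 3, a trivial back-of-envelope sketch (all distances are invented for illustration, not measurements from any real session):

```python
# Compare the time smear from real-world mic spacing and room reflections
# with the microsecond scale of filter/sample-rate effects.
c = 343.0  # approximate speed of sound in air, m/s

# A spot mic 0.3 m from the violin vs a main pair 3 m away (assumed geometry):
print(f"spot vs main mic arrival gap: {(3.0 - 0.3) / c * 1000:.1f} ms")  # ~7.9

# A first reflection whose path is 10 m longer than the direct sound (assumed):
print(f"early reflection delay:       {10.0 / c * 1000:.1f} ms")         # ~29.2

# One sample period at 48 kHz, the scale probed by test signals:
print(f"one sample @ 48 kHz:          {1e6 / 48000:.1f} microseconds")   # ~20.8
```

The geometry alone produces offsets hundreds of times larger than a single sample period, before any mic response or multiple instruments are factored in.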
One of the difficulties facing us as hobbyists is the available information. The research and knowledge which covers most of our hobby is typically not led by independent scientific research, it's led by industry, and as such is often proprietary and not available. For example, the start of digital audio is arguably the Nyquist theorem in 1924, which actually belonged to AT&T, although they allowed it to be published as a scientific paper. However, most of the testing, data and research is not published science, and even when there is independent scientific research, it often lags many years behind industry research and sometimes lacks crucial factors.
Then there are people like me, who actually use the results of that industry research day in and day out. For example, I studied, critically compared, then bought and was using greater than 16 bit technology every day, a good 8 years before >16 bit even became available to consumers. Another example: the K-weighted filter used in loudness normalisation was the result of a lot of rigorous testing (perceptual DBTs) by the ITU's members, such as the BBC, ORTF and many, many others, but none of that research is published anywhere as far as I'm aware, and the ITU specifications which resulted from it have since been modified, after people like me used them every day and discovered the deficiencies/loopholes.
On top of this, while some front-line companies are effectively unbiased, some industry organisations represent a membership which includes powerful manufacturers and distributors, and in practice they are not always entirely unbiased, the AES being an example. And finally, as you have mentioned, there is often great financial incentive to fund and publish research which demonstrates a positive result (for example, that hi-res provides a tangible benefit) but relatively little or none at all to demonstrate a negative. All of this results in a knowledge landscape which is often extremely difficult to navigate and therefore relatively easy to abuse!
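As it happens, the K-weighting filter itself is one of the few pieces that did end up public: the biquad coefficients are printed in the ITU-R BS.1770 specification even though the perceptual testing behind them isn't. A minimal sketch of the ungated measurement for mono 48 kHz material (the full spec adds channel weights and block gating, omitted here):

```python
import numpy as np
from scipy.signal import lfilter

# Stage 1: high-frequency shelf (48 kHz coefficients from ITU-R BS.1770)
shelf_b = [1.53512485958697, -2.69169618940638, 1.19839281085285]
shelf_a = [1.0, -1.69065929318241, 0.73248077421585]
# Stage 2: RLB high-pass (48 kHz coefficients from ITU-R BS.1770)
hp_b = [1.0, -2.0, 1.0]
hp_a = [1.0, -1.99004745483398, 0.99007225036621]

def k_weighted_loudness(x):
    """Ungated loudness in LKFS of a mono 48 kHz signal x."""
    y = lfilter(hp_b, hp_a, lfilter(shelf_b, shelf_a, x))
    return -0.691 + 10 * np.log10(np.mean(y ** 2))

# Sanity check: a 997 Hz tone with -18 dBFS RMS should read close to -18 LKFS
t = np.arange(48000) / 48000
tone = np.sqrt(2) * 10 ** (-18 / 20) * np.sin(2 * np.pi * 997 * t)
print(f"{k_weighted_loudness(tone):.1f} LKFS")
```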
G