Willx
trying to make sense of the charts...
Originally Posted by soundeffect
That is an interesting file, especially the one that sounds like crickets chirping. I guess I'm not made to hear really detailed stuff or high frequencies, as I can barely, if at all, tell the difference on the one with some tunes. Then again, my max is at 16 kHz.
Sound localization works by detecting the interaural time difference in the arrival of a common source at the two ears (in the pons, not the auditory cortex). Humans can discern sources that arrive as little as 10-20 µs apart (Hafter ER, Dye RH, Gilkey RH, 1979). So to preserve all the spatial information of a musical performance, the brain would require time resolution equivalent to at least 50 kHz. This is not 'heard' as pitch, but as location.
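For what it's worth, the 50 kHz figure is just the reciprocal of the discrimination threshold. A quick sketch (plain Python, my own illustration, not from the cited paper):

    # Back-of-the-envelope check (plain Python, my own illustration,
    # not from the cited paper): the 50-100 kHz figures are just the
    # reciprocals of the 10-20 µs discrimination thresholds.
    for dt in (20e-6, 10e-6):
        print(f"{dt * 1e6:.0f} µs -> 1/dt = {1 / dt / 1e3:.0f} kHz")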
Originally Posted by eucariote
Humans can discern sources that arrive as little as 10-20 µs apart (Hafter ER, Dye RH, Gilkey RH, 1979). So to preserve all the spatial information of a musical performance, the brain would require time resolution equivalent to at least 50 kHz. This is not 'heard' as pitch, but as location.
Probably. But just to clarify, that doesn't necessarily mean to me that we can hear, or need information at, frequencies at or above 50 kHz or 100 kHz. According to this paper, Hafter ER, Dye RH, Gilkey RH (1979) seem to be dealing with interaural phase differences (IPD) in audible-frequency signals of > 150 ms duration.
I'm not an expert in interaural cues, but this paper describes some of the concepts. It mentions that the just-noticeable difference (JND) for interaural time difference (ITD) does indeed seem to be 10-20 µs. However, the same paper mentions that ITD is estimated "from the phase of the ratio of the complex transfer functions for the right and left ears," and that "ITDs are generally assumed to be relatively unimportant above 2kHz," which is obviously well below 50 kHz...
As with all technical journals, I would need to re-read it a few times to digest the content (especially since this is all new to me). However, I wouldn't be too quick to assume that just because we may be able to discern sources 10-20 µs apart, it is because we can hear 50 kHz (20 µs) or 100 kHz (10 µs)... especially given that there is plenty of evidence that we humans can't.
Given the above, I don't think Hafter ER, Dye RH, Gilkey RH (1979) necessarily claim in their "Lateralization of Tonal Signals which have neither onsets nor offsets" that we need 50 kHz or 100 kHz of music signal bandwidth (requiring a sampling rate above 200 kHz) to be able to discern sources that arrive as little as 10-20 µs apart.
It seems to me that 20 kHz of pristine audio signal bandwidth is up to the task, and not contradictory to the findings of your 1979 source.
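To make that concrete, here is a minimal sketch (my own illustration, assuming NumPy and the 44.1 kHz / 20 kHz numbers from this discussion): a band-limited signal is delayed by 10 µs, far less than the ~22.7 µs sample period, and the delay is recovered from the sampled data via the cross-spectrum phase slope.

    # A sketch of the claim above (my own illustration, assuming NumPy
    # and 44.1 kHz sampling): a signal band-limited to < 20 kHz carries
    # a 10 µs interchannel delay even though 10 µs is much shorter than
    # the ~22.7 µs sample period. The delay is applied as a pure phase
    # shift and recovered from the cross-spectrum phase slope.
    import numpy as np

    fs = 44100.0                  # sample rate (Hz)
    true_delay = 10e-6            # 10 µs, well below one sample period
    n = 8192
    rng = np.random.default_rng(0)

    # Band-limited "left ear" signal: random spectrum, 100 Hz - 20 kHz.
    freqs = np.fft.rfftfreq(n, 1 / fs)
    band = (freqs > 100) & (freqs < 20000)
    spec = np.zeros(n // 2 + 1, dtype=complex)
    spec[band] = (rng.standard_normal(band.sum())
                  + 1j * rng.standard_normal(band.sum()))
    left = np.fft.irfft(spec, n)

    # "Right ear": the same signal delayed by 10 µs (pure phase shift).
    shift = np.exp(-2j * np.pi * freqs * true_delay)
    right = np.fft.irfft(np.fft.rfft(left) * shift, n)

    # Estimate the delay back from the sampled data: the cross-spectrum
    # phase grows linearly with frequency, d(phase)/d(freq) = 2*pi*delay.
    cross = np.fft.rfft(left) * np.conj(np.fft.rfft(right))
    phase = np.unwrap(np.angle(cross[band]))
    slope = np.polyfit(freqs[band], phase, 1)[0]
    print(f"recovered delay: {slope / (2 * np.pi) * 1e6:.3f} µs")  # ~10

The delay lives in the phase of every in-band frequency component, not in the spacing of the sample grid, which is why it survives 44.1 kHz sampling.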
Sample rate does not affect the "resolution" of delays that can be applied to a signal. It really only limits the maximum frequency that can be encoded.
You can see an impulse delayed by 1/3 and 2/3 sample below:
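A quick way to generate those sample values yourself (a sketch assuming NumPy; a bandlimited impulse delayed by a fraction of a sample is just a shifted sinc function sampled at the integer instants):

    # A rough stand-in for those plots (my own sketch, assuming NumPy):
    # a bandlimited impulse delayed by d samples is just the sinc
    # function sampled at the integer sample instants, offset by d.
    import numpy as np

    def bandlimited_impulse(n_samples, d):
        """Sample values of an ideal impulse delayed by d samples."""
        n = np.arange(n_samples) - n_samples // 2
        return np.sinc(n - d)     # np.sinc(x) = sin(pi*x) / (pi*x)

    for d in (1 / 3, 2 / 3):
        h = bandlimited_impulse(17, d)
        print(f"delay {d:.2f} samples:", np.round(h[7:11], 3))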
True, the interaural time difference is calculated from the phase difference of same-frequency signals arriving at the two ears. This activity comes from neurons in corresponding tonotopic areas of the cochlea, which are only sensitive to frequencies not much higher than 20 kHz.
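As a small illustration of that phase/ITD relationship (plain Python, my own numbers): at the frequencies where ITD actually matters, a 10 µs delay is a small but perfectly encodable phase offset, delta_phi = 2*pi*f*ITD.

    # A small illustration (plain Python, my own numbers) of the
    # phase/ITD relationship at frequencies where ITD matters:
    # a 10 µs delay is a small but nonzero phase offset, 2*pi*f*ITD.
    import math

    itd = 10e-6                   # 10 µs interaural time difference
    for f in (500.0, 1000.0, 2000.0):
        dphi = 2 * math.pi * f * itd
        print(f"{f:6.0f} Hz: {math.degrees(dphi):.2f} deg of phase")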
Originally Posted by eucariote
But as noted before, the audible difference in phase can be as little as 10-20 microseconds. So ~2x more detailed information about the arrival time of the sound is important.
I'm a little unclear on your example. The phase of the oscillation is indeed the important information, but in your example 4 samples are used to encode the different phase shifts of the signal, while the smaller-amplitude frequencies appear to use the minimum Nyquist rate of 2 samples per oscillation. How is phase finer than the sample interval encoded in this signal? Here the sample rate does seem to affect the resolution of delays.
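One way to probe that question is to shift a near-Nyquist tone by a sub-sample amount and read the shift back from its DFT phase (a sketch assuming NumPy; the 19 kHz / 5 µs numbers are arbitrary illustrations):

    # One way to probe the question above (my own sketch, assuming
    # NumPy; the 19 kHz / 5 µs numbers are arbitrary): a tone sampled
    # at only ~2.3 points per cycle still keeps its sub-sample timing,
    # readable from the phase of its DFT bin.
    import numpy as np

    fs, f0, shift = 44100.0, 19000.0, 5e-6   # Hz, Hz, seconds
    n = 44100                     # one second, so f0 lands on a DFT bin
    t = np.arange(n) / fs

    a = np.sin(2 * np.pi * f0 * t)
    b = np.sin(2 * np.pi * f0 * (t - shift))  # same grid, shifted tone

    k = round(f0 * n / fs)        # DFT bin holding the 19 kHz tone
    dphi = (np.angle(np.fft.rfft(b)[k])
            - np.angle(np.fft.rfft(a)[k]))
    print(f"recovered shift: {-dphi / (2 * np.pi * f0) * 1e6:.3f} µs")

The 5 µs shift is only ~0.22 of a sample period, yet it comes back exactly, because the phase of the sampled tone is continuous-valued even when the sample grid is coarse.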
Originally Posted by ultrabike
stv014's continuous-time bandlimited impulse plots seem to have slightly different amplitudes and oscillation behavior, probably because one needs to sample above the Nyquist rate to get the same results regardless of sampling phase offset.
I'm a little unclear on your example. The phase of the oscillation is indeed the important information, but in your example 4 samples are used to encode the different phase shifts of the signal, while the smaller-amplitude frequencies appear to use the minimum Nyquist rate of 2 samples per oscillation. How is phase finer than the sample interval encoded in this signal? Here the sample rate does seem to affect the resolution of delays.
Originally Posted by jcx
one crude way to visualize the phase resolution of digital audio is to consider how many distinct lines you can draw between adjacent samples - just "connecting the dots" - with 16-bit samples we can draw 32768 distinctly sloped lines across a half-full-scale step (not exactly a legal signal, but it gives an outside number for "phase resolution") - that would be ~23 µs / 32768 ≈ 700 ps (pico = 10^-12)
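Checking that arithmetic (plain Python, assuming the Red Book figures jcx implies: 44.1 kHz sampling and 16-bit samples):

    # Checking jcx's arithmetic (plain Python, assuming Red Book audio:
    # 44.1 kHz sampling, 16-bit samples, so 2^15 half-scale steps).
    period = 1 / 44100            # ~22.7 µs between adjacent samples
    steps = 2 ** 15               # 32768 half-full-scale amplitude steps
    print(f"{period / steps * 1e12:.0f} ps")  # ~692 ps, i.e. ~700 ps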