gregorio
Headphoneus Supremus
- Joined
- Feb 14, 2008
- Posts
- 6,910
- Likes
- 4,138
[1] I think there is a fundamental misunderstanding here. A headphone matched to your HRTF and without phase lag/lead artefacts will not project a soundstage of its own. It'll take on the soundstage of the recorded content. If the content has a map relating to space as your brain deciphers it, you'll be able to perceive it. In my opinion, that is what I would call "accuracy".
[2] I'd like to be placed at the position of the mic.
[3] Comparing space with stereo recordings is futile, not because you can't perceive space but because there are multiple ways of creating space, and all but one will be inaccurate to the recording ambience.
I think we're broadly in agreement here, although there are a couple of points worth clarifying:
1. The content effectively does (with a binaural recording) have "a map relating to space as" a brain deciphers, the danger though is asserting "your brain" or "you'll be able to perceive", because different people have significantly different brains/perceptions. Some people report that certain standard stereo recordings on headphones sound "just like being there" (even with height information), for others a relatively simple crossfeed is enough, a generic HRTF is enough for others, while a more comprehensive/personalized HRTF would be required by others and the addition of head-tracking would encompass even more people. By the time we get to the last one, we've probably accounted for the vast majority of people but probably still not everyone.
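To make the "relatively simple crossfeed" concrete, here is a minimal sketch of the basic idea: mix a slightly delayed, attenuated, low-pass-filtered copy of each channel into the opposite channel, crudely mimicking the interaural delay and head shadowing a listener gets from loudspeakers. The function name and all parameter values (300 µs delay, -6 dB attenuation, 700 Hz cutoff) are illustrative assumptions, not taken from any specific crossfeed design discussed in the thread.

```python
import numpy as np

def simple_crossfeed(left, right, sr=44100, delay_us=300,
                     atten_db=-6.0, cutoff_hz=700):
    """Crude crossfeed sketch: each output channel is the original channel
    plus a delayed, low-pass-filtered, attenuated copy of the other channel.
    All parameter defaults are illustrative, not a published standard."""
    delay = max(1, int(round(sr * delay_us * 1e-6)))  # interaural delay, in samples
    gain = 10 ** (atten_db / 20)                      # dB -> linear attenuation

    # One-pole low-pass: head shadowing rolls off the highs of the crossed signal
    alpha = 1 - np.exp(-2 * np.pi * cutoff_hz / sr)

    def lowpass(x):
        y = np.empty_like(x)
        acc = 0.0
        for i, v in enumerate(x):
            acc += alpha * (v - acc)
            y[i] = acc
        return y

    def delayed(x):
        return np.concatenate([np.zeros(delay), x[:-delay]])

    out_l = left + gain * lowpass(delayed(right))
    out_r = right + gain * lowpass(delayed(left))
    return out_l, out_r
```

Feeding an impulse into the left channel only, the right output stays silent until the crossfeed delay elapses and then carries a quieter, duller copy — which is roughly what the brain expects from a real source to one side.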
2. Mmmm, possibly but if so, that's your personal preference and it might not even be true for you. Many audiophiles have asserted the same/similar but it's not applicable generally and often not applicable even to those making the assertion (although I obviously don't know if that's the case with you personally). Because:
3. With the exception of binaural recordings, virtually no recordings are ever accurate to the recording space, intentionally. You cannot be "placed at the position of the mic" for the vast majority of orchestral recordings for example, because the mic position is actually 30 or more different positions many meters apart, and this is true of both 2 channel stereo and surround recordings. And with the vast majority of popular music recordings, there never was a "recording ambience" but an artificial conglomeration of different ambiences, some/many of them generated by processors.
[1] I went through that video long ago and unfortunately it didn't answer my specific question.
[2] Regarding the visualizations using instruments, they can be deceptive.
[2a] I am looking for the pure math that deals with this. Like the link I posted above.
[3] And I don't see phase plots in any of his visualizations, and didn't see signals that can have phase deviations between different frequency components.
1. Then I must have misunderstood your specific question, I thought you were disputing the ability of PCM to represent timing differences less than the sample period?
2. They can be deceptive but not in this case, the output to the oscilloscope is conclusive, and I don't see how the example you can try for yourself could be deceptive.
2a. It's your choice to only accept "pure math" rather than objective measurements or practical experiments, but that's a very specialist area. You'd probably need to look into the literature for certain DSP programmers/developers. Hydrogen Audio is about as far down that path as I personally have ventured and you can find several threads there relating to this topic, which include some math and MATLAB examples/explanations. Just found this one but I recall seeing several over the years.
3. Again, starting around 20:50 on the video it's not just a visualisation but a visualisation plus the proof at the output (with an oscilloscope) that temporal resolution exceeds the sample period.
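The oscilloscope demonstration can also be reproduced numerically. The sketch below (my own illustration, not taken from the video) samples a 1 kHz tone twice, once delayed by 1 µs — roughly 1/21 of a sample period at 48 kHz — and then recovers that delay from the samples alone, by measuring each tone's phase against quadrature references. The point is that the sub-sample delay is encoded in the sample *values*, not in the sample grid, so ordinary PCM preserves it.

```python
import numpy as np

sr = 48000                     # sample rate (Hz); sample period ~20.8 us
f = 1000.0                     # test-tone frequency (Hz)
true_delay = 1e-6              # 1 microsecond, far below one sample period
t = np.arange(4800) / sr       # 0.1 s of samples (exactly 100 tone periods)

ref = np.sin(2 * np.pi * f * t)                  # undelayed tone, sampled
dly = np.sin(2 * np.pi * f * (t - true_delay))   # delayed tone, sampled

def phase(x):
    """Recover a tone's phase by correlating with quadrature references."""
    i = np.dot(x, np.cos(2 * np.pi * f * t))
    q = np.dot(x, np.sin(2 * np.pi * f * t))
    return np.arctan2(i, q)

# Convert the measured phase difference back into a time delay
recovered = (phase(ref) - phase(dly)) / (2 * np.pi * f)
print(recovered)  # recovers ~1e-06 s from samples taken ~20.8 us apart
```

This is the same principle the oscilloscope shows physically: after reconstruction, the analogue output of the DAC carries the original sub-sample timing, so PCM temporal resolution is limited by bandwidth and noise, not by the sample period.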
G