Quote:
Originally Posted by markmaxx
Question: the speaker vs. headphone "natural presentation" argument.
I would have to ask isn't the point of stereo to help with imaging/sound-staging?
You hear, for example from headphones or speakers, more volume from let's say a guitar on the right speaker than from the vocalist (same volume from both speakers), while on the left most of the volume might come from a violin on the left speaker.
When we listen through headphones or speakers we assume the guitar is to the right of the vocalist, who is center stage, and the violin is to the vocalist's left. No?
To take that further, the vocalist may walk from the right side of the stage to the left, so the volume of the singer's voice from the right speaker starts out higher (sure, some bleeds to the left, but that's only natural, right?). Then the vocal is transferred by volume: higher and higher levels appear from the left speaker and less and less from the right. Our brain interprets this and we get the impression that the vocalist has just walked across the stage, yes?
This is why I was thinking of trying the K 1000 ear speakers.
This seems like a basically correct analysis of directionality to me. The K1000s, or the Stax Sigmas for that matter, don't do anything different in regard to the lateral localizing of sounds. What they do is help get the sound away from the head, and in this sense they are more speaker-like.
However, the amplitude differences between the ears are not the whole story. There are profound differences between the directional cues used in most commercial stereophony and those in normal hearing: recorded stereo often relies more on amplitude differences to create images, while normal directional hearing is based largely on time differences.
Amplitude differences between the ears are probably less important than time differences in a real-life listening situation. This is in part because the head needs to "shadow" the sound before you get much amplitude difference between the ears. There are not going to be large amplitude differences for most sounds coming from the areas ahead of or behind the head; only sound sources well off to the side will show much shadowing.
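To give a rough feel for the time-difference cue, here is a small sketch using Woodworth's classic spherical-head approximation. The head-radius figure is a commonly cited average, my assumption rather than anything from this thread:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 °C
HEAD_RADIUS = 0.0875     # m; a commonly cited average (my assumption)

def woodworth_itd(azimuth_deg):
    """Interaural time difference (seconds) for a distant source at the
    given azimuth (0° = straight ahead, 90° = directly to one side),
    using Woodworth's spherical-head approximation:
        ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:3d} deg: ITD ~ {woodworth_itd(az) * 1e6:5.0f} microseconds")
```

The point to notice is how small these numbers are (well under a millisecond even for a source hard to one side), and that the cue vanishes entirely for sources dead ahead or behind, consistent with the shadowing argument above.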
As well, there is less shadowing at longer wavelengths. This is one of the reasons why it is often said that bass sounds are non-directional, and it is the basis of the claim you sometimes see that a subwoofer can be placed anywhere in a room.
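The wavelength argument is easy to put numbers on: shadowing only becomes significant once the wavelength approaches the size of the obstacle. A minimal sketch, with the head-diameter figure being my own rough assumption:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C
HEAD_DIAMETER = 0.175   # m; a rough average (my assumption)

def wavelength(freq_hz):
    """Acoustic wavelength in metres at the given frequency."""
    return SPEED_OF_SOUND / freq_hz

for f in (100, 500, 1500, 4000):
    lam = wavelength(f)
    print(f"{f:5d} Hz: wavelength = {lam:5.2f} m "
          f"({lam / HEAD_DIAMETER:5.1f}x head diameter)")
```

At 100 Hz the wavelength is several metres, vastly larger than the head, so the wave washes around it like the ocean waves around the small rocks in the story below; only in the low kilohertz range does the wavelength shrink to head size and shadowing take hold.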
I don't know the exact frequencies where bass directionality is lost but I have a fond memory of having this phenomenon explained on a larger scale by my old boss, who was a sonar engineer, from his balcony overlooking the ocean. Waves could be seen washing uninterrupted over the smaller rocks and miniature islands, but for the larger islands the waves were blocked and there was a "shadow" behind the island.
HOWEVER, recordings often rely more on amplitude differences than time differences to create a sense of spatial location, because amplitude is much easier to manipulate than time delay. If you have one mic on one instrument, you can increase its volume more in one channel than the other and move it in apparent space using a simple slider on your control panel. I am sure that with modern digital recording you can also manipulate time delays to modify stereo imaging, but I have not heard of anyone doing this.
There may still be interchannel time delays in stereo recordings, especially those using minimalist miking techniques. But when you use an array of mics, these delays may be messed up because you have several sets of different delays for the same sounds; and if you do multitrack recording with essentially monaural recordings of individual voices or instruments, the time delays are non-existent. So you fall back on amplitude differences between the channels to create the auditory spatial image.
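For the minimalist case, the interchannel delay from a spaced pair of omni mics is easy to estimate with a far-field plane-wave approximation. A sketch (the spacing value is just an illustrative choice of mine):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def spaced_pair_delay(spacing_m, azimuth_deg):
    """Interchannel arrival-time difference (seconds) for a distant
    source at the given azimuth, captured by two omni mics spaced
    'spacing_m' apart (far-field plane-wave approximation):
        delta_t = d * sin(theta) / c."""
    return spacing_m * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

# e.g. an AB pair 0.6 m apart, source 30 degrees off axis:
dt = spaced_pair_delay(0.6, 30)
print(f"delay ~ {dt * 1e6:.0f} microseconds")
```

Note that a 0.6 m pair produces delays larger than anything the head itself ever generates, which is one reason spaced-pair recordings can sound spatially different from both amplitude-panned mixes and real life.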
I have also seen stereo microphone placements where the mics are virtually in the same position but facing in different directions, minimizing time delays as an effective cue while still allowing amplitude differences to be recorded.
There you have another interesting headphones-vs-speakers comparison. Speaker reproduction will not allow good localization of low frequencies because of the lack of head shadowing. Headphones, however, because they isolate the sound at the two ears, will give an amplitude difference even at low frequencies.
Stereo recording is a psycho-acoustic mish-mash, but it works because the brain is able to make some sense of incomplete or even contradictory information. You need only think of the various proprietary sound-enhancing techniques often used on boom boxes and the like to expand the spatial image. Anything that creates a timing, phase, or amplitude difference between the two channels may end up sounding like a spatial dimension.
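One simple trick of that kind is mid/side width manipulation: scale the difference between the channels and the image narrows or widens. A minimal sketch of the idea (operating on one sample pair; a real processor would run this over whole buffers):

```python
def widen(left, right, width):
    """Adjust apparent stereo width via mid/side processing.
    width 0.0 collapses the pair to mono, 1.0 leaves it unchanged,
    and values above 1.0 exaggerate the interchannel difference."""
    mid = 0.5 * (left + right)           # what the channels share
    side = 0.5 * (left - right) * width  # what distinguishes them
    return mid + side, mid - side

print(widen(1.0, 0.5, 1.0))  # unchanged
print(widen(1.0, 0.5, 0.0))  # mono: both channels get the mid signal
print(widen(1.0, 0.5, 2.0))  # exaggerated difference
```

This is purely an amplitude-domain manipulation, yet it audibly changes the spatial impression, which is exactly the point: the brain reads almost any interchannel difference as space.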
I regard headphone listening as a more purist/minimalist way of hearing the spatial information in a two-channel recording. Loudspeakers compromise this because each ear hears both channels, and because of the lack of low-frequency directionality discussed above.
Of course, speakers do get the sound out of your head, and you get to hear it in your own room with its own acoustics. That can be nice, although it is sometimes in conflict with the acoustics in the recording.