Great post. You should put it on some wiki page or something.
This is an awesome thread.
Soundstage feels like what it is: soundstage. For the sake of simplicity we can even call it "surround feel".
But what it really is, I think, is the result of shaping frequencies into "groups" or "separation", in combination with volume and panning. The reason you feel the guitar is "right up front" of your head is that it's panned mostly right, with a particular frequency range that convinces your brain to interpret where it's placed. You can look up the virtual barber shop stereo test on YouTube to try this out.
So I think a good soundstage is defined by a broad frequency range, good reproduction of that range, and how the listener interprets it (we each have our own unique ABC, analog-to-brain converter), along with the track signature (which itself contains many other blueprints from the producer's mixing process), and so on. This is why soundstage can be subjective, and experiences will differ across listeners, tracks, and sources; e.g. the headphone signature may not suit the track, ruining the soundstage. So soundstage itself is not so much a standard headphone measurement as it is a RESULT of different elements in the chain (though some headphones are generally soundstage-rich).
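The panning part of this is easy to make concrete. Here is a minimal sketch (the function name and pan-law choice are my own, not from any specific DAW) of a constant-power pan law, the common way a mixing engineer places a mono source like that guitar "mostly right" in the stereo field:

```python
import numpy as np

# Hypothetical sketch of a constant-power pan law. A mono source is
# split between channels; pan = -1.0 is hard left, 0.0 is center,
# +1.0 is hard right.
def pan_mono(signal, pan):
    theta = (pan + 1.0) * np.pi / 4.0   # map [-1, 1] -> [0, pi/2]
    left = signal * np.cos(theta)
    right = signal * np.sin(theta)
    return left, right

# A guitar "panned mostly right" keeps most of its energy in the right
# channel, while total power stays constant (cos^2 + sin^2 = 1).
tone = np.sin(2 * np.pi * 440 * np.arange(0, 0.1, 1 / 44100))
left, right = pan_mono(tone, 0.6)  # mostly right
```

The cos/sin split is why the instrument does not get louder or quieter as it moves across the image, only its left/right balance changes.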
That is one way of seeing it. Sure.
My problem with it is that it gives no quantifiable sonic signature; it is rather vague, general, and emotional.
How would you describe this sonically? What specific traits would showcase this?
How is that guitar "panned out", and how can a guitar be "panned mostly right, with a particular frequency range that convinces your brain to interpret where it's placed"? Are you not mixing sonic traits here?
Why would anything count as soundstage just because it sounds harmonic to you? What would not matter for soundstage then? Where are its limits? Where does it start, and where does the term end?
To me, this is useless as a definition of the term, as it simply does not describe the term in any useful, nor quantifiable, manner. Actually, it seems to represent more a description of the total sonic experience than this particular trait.
"for the sake of simplicity we can even call it 'surround feel'." Yep. A feeling. As in emotion. But not analytic and reasonable. So as a reasonable analytic definition, this simply does not work.
But, hey. You might feel this way. I am not arguing that.
This is because I think soundstage isn't real; it's an illusion made from all these elements, an experience. And considering our auditory system is a sense, we are talking about feelings here. What's useful here is to think of it more broadly and see how many things influence what makes a soundstage, not just a 1-10 rating per headphone.
Yeah. I just do not get which elements you think make up this illusion of soundstage. Because, I think, when you analyse it a bit, you will end up with every sonic trait imaginable. It seems more like you are speaking of the sensation of being at a concert, with the speakers reproducing that in front of you.
But breaking it into something limited and reasonable is then nearly impossible. The term is close to borderless.
And no, it is perfectly possible to break down this sonic illusion into parts, reasonable parts. The parts just need to be quantifiable and reasonable. They must also be useful. Actually, I did just that a few posts ago.
One way to start looking at the problem is to consider the aspects of the sound arriving at our ears from speakers that change as the speakers move apart. frodeni already mentioned the differences in time delay and intensity that can happen. These are what crossfeed plugins try to compensate for. Another thing that changes is the frequency/spectral content of sound reaching our eardrum, since moving the speakers changes how the wavefront interacts with our body (ears, head, torso).
Consider this graph, which is a smoothed frequency plot of data from one of the public HRTF databases. The colors represent different positions of speakers relative to the listener's head (0° is straight ahead, 90° is full left). The dB values use 1 kHz as the 0 dB reference. What this shows is that as a set of speakers is rotated around the head, the peaks and valleys of the frequency response of an impulse change in location.
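For anyone who wants to reproduce that kind of plot from raw HRTF data, the normalization step is just this (the magnitude values below are made up for illustration; real data would come from the database):

```python
import numpy as np

# Express a magnitude response in dB relative to its value at 1 kHz,
# as in the graph described above. Data here is invented for the sketch.
freqs = np.array([250.0, 500.0, 1000.0, 2000.0, 4000.0, 8000.0])
mags = np.array([0.9, 1.0, 1.0, 1.4, 0.6, 0.8])  # linear magnitudes

ref = mags[np.argmin(np.abs(freqs - 1000.0))]    # value at the 1 kHz bin
db_rel = 20 * np.log10(mags / ref)               # 0 dB at 1 kHz by construction
```

The point of referencing 1 kHz is that the absolute level of each measurement drops out, leaving only the angle-dependent shape of the peaks and valleys.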
Individual responses deviate all over the place from the average. Still, we can make a hypothesis that certain aspects of what we call soundstage/headstage are related to how the design philosophy of a headphone matches up with aspects of these curves. Individual deviations within the range affected by the ears (>2kHz) might therefore account for occasional disagreement from what is considered the norm.
Just some things to think about. Like you said, Lettuce, it would be interesting to try and find those things that correlate with the perceived soundstage; here is one possibility.
There is no consensus as to what soundstage is. Just read this thread.
"Just some things to think about. Like you said, Lettuce, it would be interesting to try and find those things that correlate with the perceived soundstage; here is one possibility."
Just read my post.
I did read your post, RRod. People do not see the need for a term to describe the spatial scope of the sonic reproduction. They do not analyze the sound.
They hear the term soundstage and start babbling about what it should mean, not realizing it is the only term in use that covers depth and width. They end up talking about feelings, opinions, and, in all honesty, almost every sonic trait at once, baked into this soundstage of theirs.
If so, we have no term for width and depth. The sonic scope in space.
I hope I am wrong, but your post did not seem to reflect mine, nor a lot of the posts in this thread of late; it did not reflect the same understanding of soundstage. Thus, no consensus. Thus your statement is based on yet another definition.
I have proposed a definition of the term, and argued for it. I have quantified how it materializes. I have described the terms necessary for most people to apply the definition. Just not in this thread, because the needed structure is not in here. It is all a complete mess.
Nice to finally hear people speaking of hypotheses, by the way. If you like that way of arguing, we really need to try to pull in that direction together, instead of this petty argument going on here between us.
My main concern is that we need to break things down into manageable pieces of sonic traits, and those need to be quantifiable, and thus describable. They need to be well defined, and limited by nature. Not overlapping like nuts. And they need to offer us a language with which we can describe the sound with accuracy and reason. A language for analytic reason, not this "the sound is intimate, almost too close and personal" craze.
Most sonic traits are not objectively observable, not to my knowledge, but some are. To be objectively observable, they must be sharply defined anyway.
But as I have said, the necessary structure for that is not in here, as this is not a thread for such tight reasoning to begin with.
I do agree that fuzzy language gets science nowhere (though it seems to get headphone makers quite the $$$). But how is frequency response something that isn't quantifiable or describable?
To my knowledge, there is no test for soundstage. Not as it is most commonly used, as in width and depth. Unless high-res sampling, or new computing power, has offered something radically new.
Frequency response is quantifiable, but translating it into a soundstage measurement, well, that has proven hard for quite some time now. Like at least 20 years. Stereo interference is one issue in that regard.
When things are understood, and when the tech is there, I am pretty confident that most subjective data and objective data will melt together. That has been the story so far. I simply have no issue with either.
Just my $0.02:
Sound stage is the virtual recreation of the recording space in width, depth and height by our brain processing the ambient information captured in the recording and translated back into sound by the transducers, either headphones or speakers. Consequently this cannot be directly measured. Forget about that approach.
About the claim that 3D reproduction is impossible with just 2 speakers ... that is complete BS in my book. We have only two ears and are perfectly capable of locating sources of sound in the real world, i.e. in 3D space. If in a recording a certain instrument with the instruction "from far away" is playing from a back room of the concert hall, or from a 3rd-tier position in the hall, this will clearly be audible as not coming from the same sound stage plane (as in width and depth) as the rest of the orchestra. If it's not, it's time to upgrade ... poor wallet
Sit in front of two speakers that are right next to each other, then have friends move them apart for you. You'll hear a widening of the soundstage. This will be due to the change in how the wavefront interacts with the body. It's reasonable to ask what aspects of headphone design might lead to similar effects, though exact quantification might never be obtainable.
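One of the cues that changes in that experiment can be put in numbers. A common back-of-envelope model for the interaural time difference (ITD) of a source at a given azimuth is the Woodworth spherical-head approximation; the head radius and speed of sound below are typical assumed values, not measurements:

```python
import math

# Woodworth approximation of the interaural time difference (ITD) for a
# source at a given azimuth. head_radius ~8.75 cm and c ~343 m/s are
# conventional assumed values.
def itd_seconds(azimuth_deg, head_radius=0.0875, c=343.0):
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (theta + math.sin(theta))

# Speakers dead ahead give zero ITD; spreading them to +-30 degrees
# introduces a delay of a few hundred microseconds between the ears.
print(itd_seconds(0))    # 0.0
print(itd_seconds(30))   # roughly 2.6e-4 seconds
```

So as your friends move the speakers apart, the inter-ear delay grows from zero toward a few hundred microseconds, and that (together with level and spectral changes) is what reads as "widening".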
Totally agree about the second point. In fact, the technique of speaker cross-talk cancellation has exactly this task in mind.
There is no height in stereo. Stereo is 2D, not 3D. How do you claim a 3D signal is modulated to produce this 3D effect?
What happens is that some speakers have the deeper-sounding elements at the bottom, making some sounds appear to come from a lower place. That effect ruins the reproduction of the guitar, for instance. For the same reason, a lot of speakers have their elements placed in vertical alignment.
And what you describe as soundstage is actually imaging, not soundstage. Given what you wrote above, what would imaging be?
We need something defining the scope of possible imaging, as in width and depth. If that is not soundstage, which it is in most places outside Head-Fi, then what describes width and depth? Then we need a term to describe the accuracy with which sounds are reproduced by height and width. That is usually imaging. If not, we need a term for it. What would that be? And so on.
People need to realize that what we need to describe is sonic traits, not what the words should mean by their face names. Going by face names, you cannot analyse a sonic signature, as you have no language to do so with.
Soundstage as a reproduction of a stage with playing musicians would be made up of a ton of sonic traits. The same will probably happen in a thread of this kind about imaging. Or body. Not to mention clarity. In the end, nothing is clear and distinct, and everything overlaps like nuts.
No wonder people struggle to both analyze and describe a sonic character.
I think there is a difference in soundstage with headphones. There is an easy test that anyone can do for this. You can easily affect the soundstage of open headphones by covering the earcups with your hands. You will hear a noticeable difference in the width and depth of where the music is coming from. You can still image where the notes are coming from, but they are just closer than before.