What is soundstage?
Jun 23, 2007 at 2:19 AM Thread Starter Post #1 of 45

edb
100+ Head-Fier · Joined Apr 14, 2006 · Posts: 125 · Likes: 34
Many people say the AKG K701 has a wide soundstage. But what does soundstage mean? Is it just stereo separation?
 
Jun 23, 2007 at 2:29 AM Post #3 of 45
Quote:

Originally Posted by nikongod
The darth nut review of the Stax Omega 2 will clear things up.

That review describes soundstage pretty well.



Do you happen to have the link?
Thanks.
 
Jun 23, 2007 at 3:08 AM Post #7 of 45
Sometimes there's a subtle bell, whistle, or ring in a song. Confused, you look up to see if the ring came from the telephone across the room. That's soundstage.

Basically it's akin to having a 3D surround system in a pair of phones.
 
Jun 23, 2007 at 3:09 AM Post #8 of 45
Quote:

Originally Posted by dissembled
Sometimes there's a subtle bell, whistle, or ring in a song. Confused, you look up to see if the ring came from the telephone across the room.

Basically it's akin to having a 3D surround system in a pair of phones.



The ability to perceive depth and location through two speakers...

A wide soundstage means the sounds are placed over a larger area and are easier to separate than in a narrow soundstage; the same goes for the depth of the soundstage.
 
Jun 23, 2007 at 3:11 AM Post #9 of 45
Quote:

Originally Posted by nikongod
The darth nut review of the Stax Omega 2 will clear things up.

That review describes soundstage pretty well.



That review is definitive. He articulates the difference between headstage and soundstage so eloquently. I love his analogy of taking a picture of a mountain landscape: the actual size of the photograph is the headstage, while the illusion of the size and shape of the landscape itself is the soundstage.
 
Jun 23, 2007 at 4:40 AM Post #10 of 45
Should have done a quick wiki search:
Quote:

Soundstage also refers to the depth and richness of an audio recording (usually referring to the playback process). According to audiophiles, the quality of the playback depends very much on how well one can pick out different instruments, voices, vocal parts, etc., and locate them exactly on an imaginary 2D or 3D field. This can enhance not only the listener's involvement in the recording but also their overall perception of the stage.


 
Jun 23, 2007 at 6:40 AM Post #11 of 45
Quote:

Originally Posted by nikongod
The darth nut review of the Stax Omega 2 will clear things up.

That review describes soundstage pretty well.



Maybe, but it's extraordinarily verbose.

I started a thread a while back giving my opinion that soundstage is more a function of the source than of the headphones per se.

http://www.head-fi.org/forums/showthread.php?t=207710


My introduction was:

"Let me start with how I would define soundstage: "A perception of source instruments/voices etc. in a specific spatial location; either a realistic spatial setting (such as a concert hall) or an unrealistic but pleasing artificial spatial environment ( such as with a good multi-miked/mixed studio recording where the spatial acoustic has no real relationship to a specific physical setting.)

To my mind, "soundstage" is primarily about stereophony. Thus if you listen with headphones to a stereo recording with a "good" soundstage and collapse the stereo image by switching the amp/pre-amp to a mono setting, voila! No left-right separation and no soundstage, since all the sounds are now located in the center of the head. This soundstage collapse will happen with any headphone, no matter its cost, amplification or associated equipment. And surely no one would want to talk about a monaural signal giving a "soundstage." Certainly that would be pretty meaningless to me.

Soundstage is primarily determined by factors which keep the two stereo signals separate and intact, and this mainly implicates characteristics of the supporting equipment that contribute to channel separation, such as interchannel distortions, electrical crossfeed, phase anomalies and the like."


There were some good responses to that thread. Over time, I have come to see that certain qualities of phones interact with the source material to enhance soundstage. Thus if you have a reduced midbass frequency response, you will lose a lot of ambience cues and will not experience a convincing ambient soundfield. However, I still maintain that soundstage is primarily a function of the source and is only secondarily influenced by the phones.
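
To make that mono-collapse test concrete, here's a minimal NumPy sketch (the function name and the (samples, 2) buffer layout are my own illustration, nothing from the linked thread). Averaging the two channels erases every interchannel difference, which is exactly why the whole image falls into the center of the head:

Code:

import numpy as np

def collapse_to_mono(stereo):
    """Sum a (num_samples, 2) float buffer to mono, as an amp's mono
    switch does. Averaging erases every interchannel amplitude, time
    and phase difference, i.e. the cues the soundstage is built from."""
    mono = stereo.mean(axis=1)
    return np.column_stack([mono, mono])  # identical signal to both ears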
 
Jun 23, 2007 at 7:30 AM Post #12 of 45
The source's/amp's involvement in the soundstage is mostly limited to preserving the separation between stereo channels and instruments. It's the headphone's job to simulate a particular HRTF, and that is where most of the perception of size and location comes from. The soundstage depends on how well the channels have been separated, the type of chamber used in the headphone, and how the reverberation of the chamber interacts with the FR of the headphone, but most important of all is how well all of that matches the HRTF of the individual listening. Diffuse-field equalised headphones like the DT-990 Pro, for example, are designed to simulate the hearing of an average ear/head in a closed space.

In theory, if you knew your HRTF and the FR of your headphones/earphones/IEMs, you could create the ultimate soundstage for yourself by processing the signal with the required EQ/crossfeed/delay. It wouldn't be perfect, but it would be better than any headphone can do without processing. I know there are some systems that can do this but I haven't tried any myself.
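
As a rough illustration of the kind of EQ/crossfeed/delay processing meant here, the sketch below feeds a low-passed, attenuated, delayed copy of each channel to the opposite ear. Every parameter value is a placeholder guess, not a measured HRTF, and a (samples, 2) NumPy float buffer is assumed:

Code:

import numpy as np
from scipy.signal import butter, lfilter

def crude_crossfeed(stereo, fs, level_db=-8.0, itd_us=300.0, cutoff_hz=700.0):
    """Mix a delayed, attenuated, low-passed copy of each channel into
    the other, mimicking the cross-talk a real head would produce."""
    b, a = butter(1, cutoff_hz / (fs / 2))  # head shadow: highs don't wrap around
    gain = 10 ** (level_db / 20)            # interaural level difference
    delay = int(round(fs * itd_us / 1e6))   # interaural time difference, in samples

    def cross(x):
        shadowed = lfilter(b, a, x) * gain
        return np.concatenate([np.zeros(delay), shadowed])[:len(x)]

    left, right = stereo[:, 0], stereo[:, 1]
    return np.column_stack([left + cross(right), right + cross(left)])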
 
Jun 25, 2007 at 4:54 AM Post #13 of 45
Quote:

Originally Posted by b0dhi
It's the headphone's job to simulate a particular HRTF, and that is where most of the perception of size and location comes from. The soundstage depends on how well the channels have been separated, the type of chamber used in the headphone, and how the reverberation of the chamber interacts with the FR of the headphone, but most important of all is how well all of that matches the HRTF of the individual listening. Diffuse-field equalised headphones like the DT-990 Pro, for example, are designed to simulate the hearing of an average ear/head in a closed space.

In theory, if you knew your HRTF and the FR of your headphones/earphones/IEMs, you could create the ultimate soundstage for yourself by processing the signal with the required EQ/crossfeed/delay. It wouldn't be perfect, but it would be better than any headphone can do without processing. I know there are some systems that can do this but I haven't tried any myself.





Stereo auditory localization is defined by three main parameters: time differences between the ears, amplitude differences between the ears, and phase differences. Of these, amplitude differences are the key cue in ordinary stereo; the other cues are poorly preserved in most recordings unless they are "dummy head" binaural recordings, and there aren't many of those.
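
A toy panner built from just the first two of those cues might look like the sketch below. The ITD formula is the standard Woodworth approximation; the 6 dB level-difference ceiling is made up for illustration:

Code:

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.0875    # m, a typical average, used only for illustration

def pan_binaural(mono, fs, azimuth_deg):
    """Place a mono source using an interaural time difference and a
    crude interaural level difference; phase handling is ignored."""
    az = np.radians(abs(azimuth_deg))
    itd_s = HEAD_RADIUS / SPEED_OF_SOUND * (az + np.sin(az))  # Woodworth
    delay = int(round(itd_s * fs))
    far_gain = 10 ** (-6.0 * np.sin(az) / 20)  # invented head-shadow law
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[:len(mono)] * far_gain
    # Positive azimuth = source on the right, so the right ear is "near".
    left, right = (far, near) if azimuth_deg > 0 else (near, far)
    return np.column_stack([left, right])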

HRTF is a concept prone to misuse when it is presented as a general technique for analysing auditory localization. Its primary purpose is to analyse the effect of the pinna and structures external to the ear drum on sound characteristics. It is fair to say that with enough HRTF analysis you could probably describe the stereo image, but the above analysis of interaural differences is far simpler conceptually and easier to work with.

http://en.wikipedia.org/wiki/Head-re...nsfer_function

HRTF deals with the effect of the pinna and other physical structures on sound quality, and probably on localization as well. However, consider that you can have a substantial stereo image without any pinna involvement at all; think of IEMs. I personally think such systems sound somewhat unrealistic because of the lack of pinna involvement, but they will nevertheless give accurate directional localization and a "soundstage." It's just somewhat different than with a circumaural phone.

The extent of HRTF "simulation" by headphones seems pretty limited. It would be nice if they did something like this, but other than by allowing the pinna to interact with the sound from the drivers in a realistic fashion (which a few large phones allow), I don't see how the phones actually "simulate" anything.

Crossfeed is a technique to reduce channel separation and thus essentially monauralize a stereo signal. (There are also a variety of proprietary crossfeed circuits which are all over the map in terms of what they do to signals, often changing the frequency response as well.)

At maximum crossfeed you will of course have a monaural signal, and no soundstage at all! Crossfeed seems to be favored by those who really prefer listening to speakers and want their phones to have the same lack of separation and the same tendency toward monauralization that speakers have. In addition, speakers produce phantom channels, i.e. the left signal feeding the right ear and vice versa, which are complete artifacts and essentially distortion. Headphones will at least give you a clean left and right signal, not messed up in this way.
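
The "maximum crossfeed equals mono" point is easy to verify with a two-line blend (my own toy formulation, not any particular crossfeed circuit):

Code:

import numpy as np

def crossfeed_blend(stereo, amount):
    """amount = 0.0 leaves the channels untouched; amount = 0.5 makes
    both outputs (L + R) / 2, the fully monaural, zero-soundstage case."""
    l, r = stereo[:, 0], stereo[:, 1]
    return np.column_stack([(1 - amount) * l + amount * r,
                            (1 - amount) * r + amount * l])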
 
Jun 25, 2007 at 5:58 AM Post #15 of 45
Quote:

Originally Posted by edstrelow
HRTF deals with the effect of the pinna and other physical structures on sound quality, and probably on localization as well. However, consider that you can have a substantial stereo image without any pinna involvement at all; think of IEMs. I personally think such systems sound somewhat unrealistic because of the lack of pinna involvement, but they will nevertheless give accurate directional localization and a "soundstage." It's just somewhat different than with a circumaural phone.

The extent of HRTF "simulation" by headphones seems pretty limited. It would be nice if they did something like this, but other than by allowing the pinna to interact with the sound from the drivers in a realistic fashion (which a few large phones allow), I don't see how the phones actually "simulate" anything.



You're right that the outer ear isn't even necessary for a localised stereo image, and that HRTF emulation and the other factors I mentioned have limited utility in headphones, but they are still a great deal of what differentiates soundstages between HPs.

The FR and chamber design of a headphone are the main reasons some have large, deep soundstages and others have narrow ones. Of course, the interaural time difference is the primary mechanism used in localisation, but that is generally the responsibility of the source material, not the headphones, and a soundstage is more than localisation.

Quote:

... when sound is echoed by large structures in the environment (such as walls and ceiling). Such echoes provide reasonable cues to the distance of a sound source, in particular because the strength of echoes does not depend on the distance of the source, while the strength of the sound that arrives directly from the sound source becomes weaker with distance. As a result, the ratio of direct-to-echo strength alters the quality of the sound in a way to which humans are sensitive. In this way consistent, although not very accurate, distance judgments are possible. This method generally fails outdoors, due to a lack of echoes. Still, there are a number of outdoor environments that also generate strong, discrete echoes, such as mountains. On the other hand, distance evaluation outdoors is largely based on the received timbre of sound: short soundwaves (high-pitched sounds) die out sooner, due to their relatively smaller kinetic energy, and thus distant sounds appear duller than normal (lacking in treble).


http://en.wikipedia.org/wiki/Sound_l...#Distance_cues

While that article mentions outdoor cues, the mechanism is essentially the same indoors, except with reverb in place of echoes. Thus a HP with those timbre qualities and corresponding reverb within the chamber, such as the HD650, tends to be perceived as having a larger, more diffuse soundstage. Compare it to a HP with very little chamber reverberation and no high-frequency roll-off, such as the Prestige line of Grados, which advertise their non-resonant chambers and are known for more aggressive highs: there the HRTF resembles free-field equalisation more than diffuse-field, no illusion of diffusion and distance is created, and the soundstage appears more collapsed and very close to the head.

Of course, these tricks are only approximations of real-world hearing, and they also depend on how well they match one's own hearing, so they work better for some people than for others.
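
Those two cues, a duller timbre and a falling direct-to-reverb ratio, are easy to fake in software. In the rough sketch below, the cutoff law and mix level are invented for illustration, and reverb_ir is whatever impulse response you supply:

Code:

import numpy as np
from scipy.signal import butter, lfilter, fftconvolve

def push_source_back(dry, fs, distance_m, reverb_ir):
    """Simulate distance: the direct sound loses level and treble with
    distance while the reverberant level stays roughly constant, so the
    direct-to-reverb ratio falls."""
    d = max(distance_m, 1.0)
    cutoff_hz = min(max(500.0, 16000.0 / d), fs / 2 - 1)  # duller with distance
    b, a = butter(2, cutoff_hz / (fs / 2))
    direct = lfilter(b, a, dry) / d               # direct level falls with distance
    wet = fftconvolve(dry, reverb_ir)[:len(dry)]  # echo level roughly constant
    return direct + 0.2 * wet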

Quote:

Originally Posted by edstrelow
Crossfeed is a technique to reduce channel separation and thus essentially monauralize a stereo signal. (There are also a variety of proprietary crossfeed circuits which are all over the map in terms of what they do to signals, often changing the frequency response as well.)


The frequency response changes you're mentioning here are usually modelled on the HRTF, but considering the head as a whole rather than the ear itself, because the head absorbs some of the sound crossing it.
 
