Is sound stage size an artifact or on the recording?
Jun 20, 2022 at 5:55 PM Thread Starter Post #1 of 34

emlin

New Head-Fier
Joined
Nov 11, 2019
Posts
44
Likes
32
Location
Devon, UK
I'm a newbie here but have been looking around, so sorry if this is a stupid question.

When iems/headphones are described as having a large soundstage, does that mean that the iem/headphone is imparting said soundstage, or does it mean that the soundstage of the recording is being accurately rendered?

Thanks,
Pete
 
Jun 21, 2022 at 1:21 AM Post #2 of 34
Is sound stage size an artifact or on the recording?

It depends on where the mic is relative to the instrument (or the amplifier and speaker through which an electronic instrument produces sound), as well as how the sources are panned across the channels when the track gets mastered.

You put mics on either side of the drum set and you get a drum roll that pans from one side to the other, and back again if Mike Portnoy decides to backtrack and then head back to his right (listener left).

You get a similar thing if you're recording a grand piano properly, whereas a keyboard spits sound out of one speaker. That's why classical piano recordings are panned the other way around vs a drum set: chances are the grand piano was recorded to play back the way you would hear it while playing it.* So play back a recording of a classical pianist at some concert where everybody's in a tux or gown, and you hear it as if you're trying to learn it; by contrast, play a Kamelot or Nightwish CD, and Oliver Palotai synthesizing a piano will be to the left of Casey Grillo, or over where Jukka Nevalainen will be later in the track, then around the same spot and maybe a little behind the drummers when synthesizing a string section for the louder parts.

In the same manner you can have a lead guitar plucking on one side, even in a solo, while a rhythm guitar playing a very different riff tends to play across both channels: you can hear the distortion-pedal rhythm guitar from left to right while the lead on a wah pedal sits dead center or slightly off center. Or guitar 1 can play the louder distorted track while guitar 2 gets plucked, usually with a cleaner, softer signal, etc.

*ie drums = snare to the right channel, tom-toms to the left, so you're effectively facing the kit from the middle; piano = low notes to the left channel, higher notes to the right, because nobody listens from behind a piano, so you might as well hear it the way the pianist does.



When iems/headphones are described as having a large soundstage, does that mean that the iem/headphone is imparting said soundstage, or does it mean that the soundstage of the recording is being accurately rendered?

From an extremely technical standpoint, IEMs and headphones can't really recreate soundstage unless the recording is specifically meant for such playback. Most music is mastered with in-room listening in mind, where both ears hear both channels, ie, speakers. Then it's a matter of whether the speakers have wide enough dispersion, and whether the walls have severe reflections that either directly mar the soundstage or create/exacerbate response spikes that do. Think of a car system: unless you drive a McLaren F1 you sit closer to one tweeter, which tends to be too far from the midrange/midwoofer while also being too close to glass that you can't line with acoustic foam (at least not if you need the car to still be a functioning car), etc.

What soundstage you hear on personal audio, with the exception of binaural recordings (ie recordings mastered with one-ear-per-channel listening in mind, with the sound sources panned across both channels to compensate), will at best be helped along by crossfeed. Crossfeed pans sound above a selected frequency across both channels at varying gain, kind of like applying a high-pass crossover with balance control in extremely simplified terms, to minimize how harsh the direct sound is, since personal audio tends to be aimed literally straight into your ears. The exception is headphones that position the drivers ahead of the ear canals at an angle similar to speaker angles.
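As a rough illustration (not any particular product's algorithm), crossfeed can be sketched as mixing an attenuated, slightly delayed copy of each channel into the opposite one. Real crossfeed filters are frequency-dependent (low frequencies bleed across more than highs), which this toy version deliberately omits:

```python
# Toy crossfeed sketch: feed an attenuated, delayed copy of each channel
# into the opposite one, roughly imitating how both ears hear both
# speakers in a room. Illustrative only -- real implementations are
# frequency-dependent; the gain and delay values here are assumptions.

def crossfeed(left, right, gain=0.3, delay=3):
    """Mix each channel into the other at reduced gain and a small
    sample delay. `left` and `right` are equal-length sample lists."""
    out_l, out_r = [], []
    for i in range(len(left)):
        # opposite-channel sample from `delay` samples in the past
        bleed_r = right[i - delay] if i >= delay else 0.0
        bleed_l = left[i - delay] if i >= delay else 0.0
        out_l.append(left[i] + gain * bleed_r)
        out_r.append(right[i] + gain * bleed_l)
    return out_l, out_r
```

At a 44.1 kHz sample rate a 3-sample delay is roughly 70 microseconds, in the ballpark of the interaural time differences the bleed is meant to mimic.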

That said, you still have to discuss soundstage in the sense of "how severely does this particular headphone screw it up?" A Grado will put everything just outside either ear or between your eyes, while some Sennheisers, especially those with thicker, angled pads, will have everything more filled out between those three spots but occupying a narrower area. That's why some people will go "Grado Prestige Series have a wider soundstage than Sennheisers" even when the Grado renders a drum roll going from extreme right to extreme left, very loud cymbals included, with gaps between the location of each strike, as if the drummer has insanely long arms spanning the entire stage, while the Sennheiser that doesn't stretch the drummer that way "sucks for soundstage."
 
Jun 21, 2022 at 3:05 AM Post #3 of 34
I'm a newbie here but have been looking around, so sorry if this is a stupid question.

When iems/headphones are described as having a large soundstage, does that mean that the iem/headphone is imparting said soundstage, or does it mean that the soundstage of the recording is being accurately rendered?

Thanks,
Pete

This is actually a very good question.
In my experience both effects exist.

An IEM like the SONY EX1000 (open 16mm dynamic) simply sounds very open: it creates the feeling that the music is coming from outside your head and far out.
But this kind of "soundstage" is very diffuse; the locations of instruments or sound sources are blurred. And this soundstage is wide (R/L) but not deep or layered.

The other type of soundstage you could more precisely describe as imaging (the accuracy and focus of the positions of sound sources), depth and layering.
This kind of soundstage is influenced by the recording, any mixing and processing in the studio, the DAC, and the IEM's/headphone's ability to accurately reproduce the stage. This has nothing to do with frequency response, but with the timing and phase of the sounds, which our brain uses to process and reconstruct the sound image.

You are in the UK - try to find a high-end audio store which carries CHORD DACs and go listen to some well recorded live music like:
https://tidal.com/browse/track/15666682
https://tidal.com/browse/track/11648130
https://tidal.com/browse/track/4542031
Compare with your source...

Cheers!
 
Jun 21, 2022 at 4:22 AM Post #4 of 34
I think it's also impossible to say whether any headphone or IEM is accurately rendering the soundstage, since one headphone or IEM will very likely create different staging for different sets of ears, due to anatomical differences.

But people do tend to agree in general on what kind of 'shape' and 'size' soundstage any particular headphone/IEM produces.
In the end, it might not even be about being technically correct or incorrect, as electronic music can also sound 'big' with a large-soundstage headphone/IEM.

If you want to get even more technical, it could be argued that most studio recordings don't have accurate staging since they are not binaural or live recordings.
You could say the staging is synthetic since it is created during the mastering process.
With that kind of thinking it might become something less worth worrying about, but it's still an interesting topic.
 
Jun 21, 2022 at 6:51 AM Post #5 of 34
already answered perfectly well:

”a little from column a), a little from column b)”
mastering sets what it should sound like, good engineers can make some incredible soundstage work happen (alan parsons got me into this hobby!!)
great headphones can recreate, accurately, good soundspace…

it isn’t just width, but front to back depth that generally helps flesh out a ‘real space’.

(and crossfeed, as mentioned, makes headphones perform closer to speakers, as well as headphone tech like ultrasones’ “s-logic”, but also ‘angled drivers’ and generally ‘open designs’)

all components in YOUR setup contribute; eg sennheiser, when building the hdvd800 headphone amp, designed it to correct the oval soundfield that their flagship hd800s played back, the pairing of sennheiser hdvd800 (amp) and hd800s (open back headphones) gets back to a circular soundfield, great for gaming positional audio :wink:
Some amps (and DACs/preamps, ie all source kit 'in line') can contribute to the soundfield, but headphones are the lion's share of the playback chain and contribute the most obvious sense of space. But, getting back to the first reply: garbage in equals garbage out. If the mastering (/recording) stage doesn't capture/create a sense of soundstage, then it isn't there to play back… at which point digital processing that creates artificial echo can fool our brains into hearing the music in a larger space (akin to Yamaha digital soundfield processing in surround amplifiers for home hifi playback). This ISN'T ideal, and will never equal a "better recording". (and not many on head-fi would touch this notion of artificially boosting the soundstage of a recording; we are a 'pure' bunch mostly, and such methods are 'fun', but not the way…)

some albums are mastered for headphones, and will sound amazing when played back on head-fi kit.
I use certain recordings I am familiar with to check sound stage (width and depth) when testing headgear.

again, “great question”, kudos

an example track I use for stage depth is T J Eckleberg "two inches of darkness"; in the opening few seconds a drummer should present themselves 'well back' in the soundfield. how far back they present is how I rate equipment on this metric.
enigma's "principles of lust" has some great panning
pink floyd's "signs of life" (opening track on a momentary lapse of reason), or just about anything by alan parsons, but "I, Robot" if you want some soundscape tracks that will certainly deliver…
for setting up speakers I usually go with live recorded stage events… it helps to have reference to the original music and how it should sound… (I have enjoyed operas for the halls they play in almost as much as the performances on the stage (a bit tongue in cheek with that comment)).
 
Jun 21, 2022 at 3:57 PM Post #6 of 34
I'm a newbie here but have been looking around, so sorry if this is a stupid question.

When iems/headphones are described as having a large soundstage, does that mean that the iem/headphone is imparting said soundstage, or does it mean that the soundstage of the recording is being accurately rendered?

Thanks,
Pete
First, the fun part. Soundstage is used to describe anything and everything around here but it shouldn’t be so.
IMO, soundstage is strictly information you get about the room from the delayed sounds that reach you after bouncing off the walls and everything else. To me it is not about where the instruments are in your mind, that would be imaging.
But even if you stick to that definition, which many will disagree with, you still have the question of which room we're talking about. Is it the room where the band recorded? Is it the room and speakers the guy mixing the track was in? Or a "room" made almost from scratch by the sound engineer? Or maybe the room you're in when listening to speakers? Or whatever feeling you end up with while using headphones? Pick any one option and some people will disagree. :sweat_smile:

Whatever you're asking about, the brain constructs what it thinks is the most plausible space based on various cues and your own body, put together into a pudding of experience. It's easy for nearly anything audible, visible, etc. (sensed) or expected (psychological bias) to alter your impression of space and sound-source localization. So of course a transducer (headphone, IEM, speaker), being the least accurate and most inconsistent part of a playback chain, will surely have some impact on your experience in general, including soundstage and imaging.

About accuracy, you can probably forget about that when it comes to spatial cues on headphones and IEMs. Speakers will also create something fake and fairly unnatural, but almost every album ever made was finalized on speakers, so it's the "fake" that usually comes closest to the final product.

In conclusion, someone talking about a large soundstage says nothing about actual reproduction accuracy. Larger can seem better given how silly small the soundstage and imaging feel on headphones; I don't believe there is more to it. But larger doesn't mean it isn't incredibly wrong in other respects. Also, you can find 10 guys mentioning a larger, better soundstage while describing significantly different impressions on various headphones.
 
Jun 22, 2022 at 8:31 PM Post #7 of 34
Thank you all for all your replies.

What I really want to know is whether a large soundstage delivered by headphones or IEMs really reflects that of the original recording, or whether it is an artifact of the phones. Ie, do they just give a big soundstage to everything you listen to with them, or do they give a more accurate portrayal of the recording?

The reason that I ask is that I have weird hearing when listening to phones in that I get a really huge soundstage that can really fill the room I'm in, and sometimes beyond. Nothing between my ears (no jokes please). I have a long history of listening to phones that dates back to the Sennheiser 414s when they were a thing, and everything was in my head. But about 3 years ago, that somehow changed.

I can't be the only one like this, surely.

Anyway, I'm thinking of getting some iems, specifically the fiio hd5, because of the good reviews and the large soundstage that reviewers generally report.

But if they are imparting the soundstage by artificial means, that will not necessarily be a good idea for me.

But if they do it by accurate reproduction of the source then I'm all in.

Hence my original question.

I'm hoping that you all can help me here. I can't hear before I buy, so any input would be much appreciated.

Thanks again,
Pete
 
Jun 22, 2022 at 9:59 PM Post #8 of 34
It's not really an either/or question.

Recordings create an illusion of sounds in 3D space, and then headphones take various approaches to reproducing whatever illusion the musicians/producers/engineers built via the two stereo channels.

Try this video with your current headphones and see how they reproduce the sounds in "space" -- front, back sides, overhead. (For best fidelity download the lossless version and try to sync it up with the video.)
https://www.head-fi.org/threads/abyss-ultimate-headphone-test-video.949705/

As others have noted, reviews are all over the place in how well they assess or describe soundstage. You might look for words like accurate, reference or neutral if you're looking for 'phones that don't try to add their own kind of pizazz to the music.
 
Jun 23, 2022 at 1:42 AM Post #9 of 34
What I really want to know is whether a large soundstage delivered by headphones or iems is really reflecting that of the original recording or whether it is an artifact of the phones...

OK...let me condense what I wrote up there.

Technically, and strictly, speaking, they can't reproduce any soundstage: unless the recording is binaural, the playback the mastering intended doesn't match the playback system you're actually using, because the speakers these mixes are made for and the headphones you're wearing are fundamentally different.

Less strictly, it's not that headphones absolutely can't come up with any semblance of a soundstage. It's just that by default what you get isn't even an "artifact" like noise; it simply doesn't work the same way. It's kind of like a less severe version of taking a stereo recording and playing it through a gramophone (ever wonder why some older vinyl have "in stereo!" stickers on them? stereo wasn't even the default, kind of like how some laserdiscs and even DVDs have a "Surround" sticker on them). Headphones can still provide spatial cues, but much of your confusion comes from framing it as either a reproduction or an "artifact." It's closer to the latter, but not quite the same, and it doesn't "preserve"* the soundstage either. It's more a matter of "by how much does a particular headphone screw up the soundstage."

*Let's be honest here… even speakers can't do that. If you play a jazz lounge recording then sure, a good system can pull it off, but what if you play rock or metal that tends to be performed on a bigger, wider stage? Your home system isn't that wide, much less deep enough, for the soundstage to even have cohesion (ie you can't set the speakers 6m apart in an 8m-wide room and hear the instruments properly placed if you're sitting only 3m from the midpoint between them).
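To put numbers on that footnote (using its hypothetical 6m spacing and 3m listening distance), a quick geometry check shows how far that layout strays from a conventional stereo triangle:

```python
import math

# Hypothetical numbers from the footnote above: listener 3 m from the
# midpoint of two speakers placed 6 m apart.
spacing = 6.0   # metres between speakers
distance = 3.0  # metres from listener to the midpoint

# Angle each speaker sits off the centre line, and the total angle
# the stereo pair subtends at the listening position.
half_angle = math.degrees(math.atan2(spacing / 2, distance))
total_angle = 2 * half_angle  # comes to 90 degrees here
```

That 90-degree spread is far wider than the roughly 60 degrees of the usual equilateral stereo triangle, which is one concrete way the phantom imaging falls apart.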

...Ie, do they just give an big soundstage to everything that you listen to with them, or do they give a more accurate portrayal of the recording.

The reason that I ask is that I have weird hearing when listening to phones in that I get a really huge soundstage that can really fill the room I'm in, and sometimes beyond. Nothing between my ears (no jokes please). I have a long history of listening to phones that dates back to the Sennheiser 414s when they were a thing, and everything was in my head. But about 3 years ago, that somehow changed.

OK..first off, I seriously doubt they can project the sound so far out it's like the whole room is filled. Even as a hyperbole. It can all sound reasonably out of your head and be a scaled down reproduction at best, like on a K701 or HD800, but it won't fill the room. Not even a K1000 will do that. Hell not even my desktop speakers can do that.

Imaging next to nothing between your ears isn't an artifact. It's an attempt to sound like a speaker. The K701, for example, has a wide dispersion pattern (even with speakers, if dispersion is too narrow no amount of toe-in or toe-out will make for a big soundstage, or at least a coherent one rather than simply filling the room with sound), and its drivers are mounted so as to mimic toe-in on speakers instead of firing directly into your ears.

You can try to recreate a similar problem with speakers. Instead of putting them in front, have them flank you, pointed at each other, and sit in the middle. Even the best speakers will not produce a good soundstage like this (just ask everybody who blows hundreds of dollars on Audiocontrol or Alpine processors and either blows thousands on custom installation or has a car with no door panels on weekdays for weeks); then headphones add to that by having each ear hear only one driver.


I can't be the only one like this, surely.

Of course not. Others think Grados have a very wide soundstage even when it doesn't make sense. Sure, it seems great that when guitars are panned to one side they're really panned to one side, but then the cymbals are also panned to one side along with the tom and snare because… Reed Richards plays drums,* I think. Or maybe Dr Otto Octavius. Or Shuma-Gorath. Whatever; the point is you need very long arms for the cymbals to be out on the flanks where the guitars are.

Then there are people who can't get past how some Sennheisers have a congested soundstage, so they can't appreciate how everything sits in a spot relative to everything else that doesn't make you wonder whether Marvel Studios is a recording company for superheroes and supervillains to live out their rockstar dreams.


*This guy, not the actual drummer
[attached image: 1655961934576.png]



Anyway, I'm thinking of getting some iems, specifically the fiio hd5, because of the good reviews and the large soundstage that reviewers generally report.

But if they are imparting the soundstage by artificial means, that will not necessarily be a good idea for me.

But if they do it by accurate reproduction of the source then I'm all in.

Well...what is the method for accurate reproduction, and what exactly is artificial means? (Hence my explanations on how this all works)

Accurate reproduction means both of your ears hear both channels (without excessive reflections and spikes), so you might as well give up on Head-Fi then? Unless you can find all your music as binaural recordings where they've mastered it so that the sound pans across both channels taking into account that each ear can only hear one of them.

Accurate reproduction for purists also means "no DSP," which means "no Crossfeed" (I explained this in my prior reply). So yeah...if you're gonna really follow this, then it's binaural recordings or bust.

The mere fact that each ear can't hear both channels is, technically, already an artificial manner of listening. And while I tend to see it that way, the headaches of damping my room's acoustics, and the fact that I don't always have my car with its DSP nearby (I can't bring it with me past the parking lot; I don't have a C-130 to fly my car around, nor a Gulfstream with a good sound system, etc.), are why I'm mostly using headphones and IEMs now.

If by "artificial means" you mean "they're doing some technical mumbo jumbo," then even that isn't clear-cut. Are they doing it with electronics, like hooking up a Creative Sound Blaster or Asus Xonar product so you can hear whether the footsteps to your left and slightly ahead are closing in from 20m away or retreating from 10m away, so you can decide whether to crouch in that corner or give chase depending on whether you have a SCAR-H or an M249? Heck no, there are no electronics in there. Is it like a K701 positioning wide-dispersion drivers ahead of your ear canals, but in IEM form? It's more like "don't have some weird peaks that affect the soundstage, so let's tune where the drivers sit relative to each other and shape the chamber and tube a certain way", kind of like how Borla gives you an option between H-pipes and X-pipes, not just the muffler itself, for sound and a little bonus torque (or a lot of bonus torque if you're going to force more air into the intake side too).
 
Jun 23, 2022 at 3:42 PM Post #10 of 34
OK..first off, I seriously doubt they can project the sound so far out it's like the whole room is filled. Even as a hyperbole. It can all sound reasonably out of your head and be a scaled down reproduction at best, like on a K701 or HD800, but it won't fill the room. Not even a K1000 will do that. Hell not even my desktop speakers can do that.
Doubt away, but I'm telling you that even the Samsung buds2 that I am listening to now fill the room for me. No hyperbole. But perhaps you misunderstood what I said before ...
 
Jun 23, 2022 at 11:00 PM Post #11 of 34
perhaps (!)

smiles; this doesn't have to be 'rocket science', and many of the purely-'science' peeps get caught up not seeing the forest for the trees.
True, much of what has been said might be relevant (to another question / in a different forum), but let's try to stay focused on the question as it was intended:

Subtle echos (reflections) that can be captured in a recording (or artificially added (at a mixing/engineering stage OR using post processing, such as DSP in a preamp)) will allow a listener to INTERPRET a sense of soundspace.
Humans require about 5 milliseconds of delay between the direct sound and early reflections to accurately detect a sense of 'difference' between the 'two sources of sound' (the reflected sound lets our brain do the math and interpret how big a space we are in / the size of the 'recording' we are listening to).
A simple rule then becomes 'have speakers approx 85cm from the rear wall' (MINIMUM), so the reflected wave travels roughly 170cm further (5 milliseconds at the speed of sound) and arrives with enough time delay to be discriminated by our brain… (less time just means an inability to separate the two as distinct sound sources)
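The arithmetic behind that rule of thumb is easy to check, assuming sound travels at roughly 343 m/s in room-temperature air:

```python
# Sanity-check the 5 ms / rear-wall rule of thumb quoted above.
SPEED_OF_SOUND = 343.0  # m/s, approximate, room-temperature air

def extra_path_m(delay_s):
    """Extra path length a reflection must travel to arrive delay_s
    after the direct sound."""
    return SPEED_OF_SOUND * delay_s

extra = extra_path_m(0.005)  # ~1.7 m of extra travel for 5 ms
rear_wall_gap = extra / 2    # the reflection goes to the wall and back,
                             # giving the ~85 cm minimum quoted above
```

So 5 ms at ~343 m/s works out to about 1.7 m of extra path, i.e. roughly 85 cm of clearance to the rear wall, matching the rule of thumb.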

Regarding speakers (I will talk about headphones in a moment), there is more to this equation/setup required to create a great listening space, which factors in the rate of decay and decay time/level in ways that are pleasing and help make as true a reproduction as any engineer might hope an end user to have… (assuming the programme material is being mastered for speakers, which, in the modern world, MUCH IS NOT; there are tonnes of recordings that are made for headphones and rely on the lack of 'crossfeed' (the left channel making it to the right ear, the right channel making sound that reaches the left ear)).
Unkle (band), "Never, Never, Land" (album), says in the opening few seconds 'play under headphones' (preferably in a dark room). This is NOT orchestral/classical!! (you are forewarned)
Some albums are mixed in Pro Logic, and contain the necessary 'steering cues' for a surround decoder to play back with full surround panning via a home theatre system… generally nice enough to put a 'dolby surround' badge on the disc when this is the case…
Just trying to point out that ANY engineer can have an intent, and THAT BEING THEIR TARGET, is usually what they will achieve… Given just how many people master/mix AND CREATE under headphones, A LOT of music is engineered to sound ‘right’ via headphones..

The band ‘Yello’ were masterful at the mixing/engineering stage and many of their tracks can show up equipment/systems to be amazing..
The talent of the engineer can make just about anything happen..
Many engineers ENJOY playing with the soundfield in ways that can alter one's perception (eg Nine Inch Nails and the aforementioned (earlier post) Alan Parsons). Alan Parsons is famous for some of the work that went into a 'well known' album by Pink Floyd (Dark Side of the Moon), and there is a reason so many of its sound bites are locatable in 3D space; this wasn't by accident, and A LOT OF EFFORT went into creating these effects…

Regarding headphones then: assuming we have recordings with echo and 'reflections' encoded in the mix that can pass on a 'sense of space', headphones can play back beyond the walls of their physical boundaries.
Psychoacoustics is an incredible science: as far back as the mid nineties, high-end televisions had 'surround' processing chips that, using some very scientific understanding of the human head, could make two speakers simulate sound going 'all around us'.
There is some need to control variables of the environment for this to work best, which is why technologies like Q-Sound came to market (the most famous song at the time was Paula Abdul's 'Opposites Attract', featuring MC Skat Kat; MC Skat Kat's 'spin off' album was a Q-Sound engineered work, as were arcade games like Street Fighter II: Champion Edition).
Q-Sound (and the TV surround tech) worked best when they knew the distance between the speakers and the distance to the listener (can you see how this would be PERFECT for headphones as a technology?). Obviously arcade cabinets knew where the player was standing and the location of their speakers, so a 'Hadouken' (fireball thrown from Ryu's hands) could sound appropriately 'in your face'.
The math is the same as what goes into binaural recordings… the pinna of our ear, the 'flesh around the ear canal', filters high frequency information from a sound source. It is how we humans, with only 'two ears', can hear in surround (we have a sense of front to back with regard to sound sources).
Some technology can take this 'to the next level', such as the Audeze Mobius headphones (which also feature Waves NX technology that tracks the physical direction we are facing and can move the recorded stage around to 'stay still' when we turn our head).
Much of the way humans listen acutely is to 'turn the head' a little, to try to determine the exact source of a sound. Waves NX is amazing, but it is the combination of that with the advanced DSP that applies HRTFs (head related transfer functions), like Q-Sound and binaural technologies, using the 'math' of what our pinnae do to sounds that are 'behind us' (they lose some high frequency strength, and are therefore placeable behind us). The technology is amazing when done in realtime on headwear like the Audeze Mobius; there is a reason most reviewers think they have left the 'home stereo' on when they start to play music through them… (no other headphone I have EVER heard can place sound beyond the walls of a listening space (in our 'mind's eye') as WELL as the Mobius can).
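For a feel of the 'math' being referred to, here is a toy version of the two headline interaural cues an HRTF encodes: time difference via the Woodworth spherical-head approximation, and a crude broadband level difference. Real HRTFs are measured, frequency-dependent, and include pinna effects; the head radius and 6 dB shadow cap below are illustrative assumptions, not measured values.

```python
import math

# Assumed constants for the sketch (not measured HRTF data).
HEAD_RADIUS = 0.0875    # m, a commonly assumed average head radius
SPEED_OF_SOUND = 343.0  # m/s

def interaural_cues(azimuth_deg):
    """Return (itd_seconds, near_gain, far_gain) for a source at
    azimuth_deg: 0 = straight ahead, 90 = fully to one side.
    ITD uses the Woodworth spherical-head formula; ILD is a crude
    broadband head-shadow capped at about 6 dB at the far ear."""
    a = math.radians(azimuth_deg)
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (a + math.sin(a))
    shadow_db = 6.0 * abs(math.sin(a))   # far ear loses level
    far_gain = 10.0 ** (-shadow_db / 20.0)
    return itd, 1.0, far_gain
```

At 90 degrees this gives an ITD of roughly 0.65 ms, in line with commonly cited maximum values for human heads, which is why even this crude model can shift a sound convincingly to one side.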

As someone who spends 7-plus hours adjusting 'toe in' on my stereo setup to make the soundfield recreate 'beyond my walls': ignore anyone who tells you this isn't possible. It is THE AIM of a good setup to be neutral and allow recordings to shine in the ways that they were recorded. An unthought-out setup has many limitations regarding playback in a physical space vs a well made setup for HOME LISTENING (very different to the needs of a studio engineer). Sound can, especially using tricks like TVs had built in (ie 30 year old embedded chips could do this 'on the fly'), give us a sense of a plane flying overhead and a helicopter coming from behind.
The truth for much of this trickery lies in having a solid sound pan… it is hard to simulate something behind us if our brain hasn't heard it 'in front of us' first. The absence of some high frequencies alone would simply make the sound source seem dull, whereas hearing the sound 'intact' (including the high frequency information) gives us a sense of what it sounds like; then that same sound (voice/helicopter etc) can have its properties altered and 'psychoacoustics' does the rest… (we interpret the sound as 'moving/moved' behind us).

So, regarding headphones?
Sure, some designs do a better job of giving us a sense of soundstage (not just left to right, but also front to back)… I will call this 'soundfield', as this thread seems to agree the terms deserve different definitions… (language SHOULD BE PRECISE, so we can convey concepts, yes?)
Generally open back designs aid a sense of air, and I have had the most luck with HRTF encoded sources (eg Dolby Headphone playing from a PC soundcard) using large angled-driver headphones (like the Definitive Technology Symphony 1, as they are fairly neutral in sound profile (a requirement for HRTFs), but there are a tonne of 'good for HRTFs' headphones on the market).
The trick of achieving a soundstage (/soundfield) with 'in ears' is super easy on any with DSP processing (think 'like the Audeze Mobius does'), but other companies try to come up with methods to angle drivers and 'give air' (Bowers and Wilkins' first in-ear used a mini ball-bearing composite that allowed sound to pass out the back)…

Some do soundspace better than others, and it is usually a marriage of 'high quality' components (needed for the microdetails our brain wants to hear: the echoes/early reflections etc); some Sennheiser IE80s have an incredible sense of 'space'.
Do some headphones take it too far, or 'not match up' to the intent of the recorded material? Yes, of course… no one knows exactly what the recorded material was made for / mastered to achieve…
An engineer mastering on headphones without much sensitivity to low level echo information might over-emphasize that information (making everyone else feel like they are in a tight echo chamber), or the engineer's headphones might have a 'very flat' soundfield (not much front to back), so they emphasize details that recreate MORE front to back space.
Played back on headwear that has an exceptional sense of front to back, this might then become overwhelming or ‘very unnatural’.
Many headphone reviewers try to explain what the soundfield IS on any given set.. ie 'not much front to back and a very wide left to right'.. any combination can be had.
The easy example in my head is that the Sennheiser HD800 has an 'oval' soundfield (not a perfect circle, which would be better for gaming HRTFs, though our brain acclimatises quickly to these things), and the matching amp, the HDVD800, was built to return it to 'a perfect circle'.. these are not my thoughts, just what I have read others say regarding those flagship Sennheiser open-backs, often considered some of the best sets for gaming sound (as are the AKG K701 headphones etc)..

Just like some movies pan beyond the width of your TV screen and some do not (the on-screen action matching your speakers), engineers do NOT KNOW our listening environment (Dolby Atmos and DTS:X are a step towards resolving those differences), and so 'every recording is different'.
Headphones, being two point sources just beyond our ears, go a long way towards being predictable; yet they all have different sound profiles, and the equipment chain can alter the tone and location of sounds for a range of reasons (intended AND unintended), so I can see where this can become a confusing topic.

The takeaway is: 'some headphones have incredible soundfield capability, and others less so', and some recordings can do wonders with soundfield.

Given we use both headphones AND recordings when enjoying a music track, it simply becomes "a little from column A, and a little from column B" (they both contribute).
And modern DSP can do absolutely anything with regard to perceived sound placement, so 'ignore this post' and 'enjoy'. You ARE experiencing sound 'beyond' the speaker drivers, and it is subjective to you: everyone's ears are subtly different, and we can all fool our brain, either consciously ('with intent') or subconsciously. This is 'a thing' and is most certainly happening.

Whilst this was a long post, I hope it simplifies the concepts involved.. my aim was not to mystify or keep this knowledge occult; head-fiers who live and breathe this stuff daily can forget that some of the basic assumptions we hold are actually really advanced science, and the evolution of much thought and process..

I recommend anyone who wants to experience sound beyond the venerable HD800s (sans Dolby Headphone/HRTF-encoded source) or K701s: try the Audeze Mobius.. with WavesNX engaged AND surround, the positioning of sounds will make you 'check the front door'/wonder what is going on 'rooms away'.. (music can be 'a wee bit funny' in these modes (they are defeatable), but the experience is 'something else')
 
Last edited:
Jun 24, 2022 at 12:33 AM Post #12 of 34
Doubt away, but I'm telling you that even the Samsung buds2 that I am listening to now fill the room for me. No hyperbole. But perhaps you misunderstood what I said before ...

So you can hear sounds as if they're coming from 2m away to the front, and even farther out to the flanks, ie like a room-filling home speaker system? Because I outlined that there's a difference between the sound not being literally inside your head (or merely projected just outside it) and it actually filling a room that isn't an Asian big-city bare-minimum studio apartment (or a capsule hotel).
 
Last edited:
Jun 24, 2022 at 1:02 AM Post #13 of 34
Erm, 'yes'
A 90-year-old, deaf in one ear (with frequency-limited hearing in the one remaining 'good' ear), could pick up the sonic cues at the start of "Two Inches of Darkness" (from Superhydrated by T J Eckleberg), because those cues are in the recording... (yes, even in 'mono')..
Be it under headphones (head'phone' (mono!)?) or using speakers: too many assumptions get made regarding setups...
How do we know 'any given room size'? That ninety-year-old might be in a coffin (under headphones, I'd imagine) or on an operating table (a hospital: a horrible acoustic space)..
Some cues are recorded that way...
Our brain doesn't switch off a lifetime of learning audio cues because someone THINKS that their rules for head-fi are the OBJECTIVE REALITY that all exist under.
Engineers USE audio cues to create interesting mixes.
I never knew one of my favorite bands was Electronica (I'd have thought them 'rock'),.. but when I widened my recording collection with a few of their earlier and later releases I quickly realised why I liked what I heard..
They put very clever effort into the studio work: great panning tricks and psychoacoustics for placement beyond the soundfield that typical music plays back in.
War of the Worlds, something I listened to as a child, from vinyl using nice closed back headphones.. amazed me.. the sonic cues transported me to places...
We have (mostly) all heard a radio play where they use sound effects to simulate spaces..
The physical space limitations of our playback (whether we be in a 2 metre x 1 metre box or a 'very large space') are only a part of the equation, and having owned some 'world class' setups (and a lot of setups less incredible), I use certain tracks to qualify 'what tier of kit I am listening to'.

That aforementioned T J Eckleberg track ("Two Inches of Darkness" from the album Superhydrated) is a quick check of stage depth/performance..
On a home theatre rig it often resolves at only 20 feet back (yes, 'beyond the wall'); a setup that plays it like that I write off as consumer-fi junk and stop listening about there (typically two or three other tracks to confirm just how 'bad'/deep that rabbit hole goes).. on a decent system it should be 'at least thirty feet back'.. good kit resolves about fifty feet back.. and when my mate installed a nice R-2R DAC, it blew my mind, resolving the drummer at the start of the track as further away than I had ever heard before..

Now, as to what the artist intended the drummer be placed, distance wise, from the front stage... I don't know.

A TDK test CD that came out in the late nineties had drummer Jim Keltner playing at a range of depths on a stage (a narrator would call out how many feet back he was).. the idea was that on a well set-up system, we should be able to resolve the audio cues that allow our mind to interpret 'sound beyond our physical walls/speakers'.
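The level-based part of those depth cues is easy to quantify: direct sound falls off by roughly 6 dB per doubling of distance (the inverse-square law), while the reverberant field in a room stays roughly constant, so the direct-to-reverb ratio drops too. A sketch of the textbook relationship; the function name and example numbers are mine, not anything from the TDK disc:

```python
import math

def direct_level_db(ref_level_db: float, ref_dist_m: float, dist_m: float) -> float:
    """Direct-sound level at dist_m, given a reference level
    measured at ref_dist_m, under the free-field inverse-square
    law (-20*log10 of the distance ratio, i.e. about -6 dB per
    doubling). Rooms, absorption, and reflections all complicate
    this in practice; this is the idealised textbook cue only.
    """
    return ref_level_db - 20.0 * math.log10(dist_m / ref_dist_m)
```

So a drummer mixed to sit '50 feet back' instead of '20 feet back' carries roughly an 8 dB quieter direct signal (20*log10(50/20)), plus proportionally more reverb and duller highs, and it is those combined cues that a resolving system lets your brain decode.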

Playing with toe-in (and speaker placement) is a huge part of this.. as is controlling the reflections in any given space.. and there is 'a lot of math involved'.
As a person who has been behind the audio desk and helped in a few stage locations, the math involved in the pre-setup of world-class audio rooms is phenomenal.
For my own hobby interests, having been doing this for many decades, I generally do 'alright' in my head, plus a few measurements using my feet while 'scouting out a room'..
Of course, having a second person helps make the task 'doable' in a reasonable timeframe.. (and I don't even bother with subwoofer tuning sans 'second person')...
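The basic 'triangle' geometry behind toe-in is simple trigonometry, even if real rooms then demand ears-on adjustment for reflections. A sketch (my own hypothetical helper, purely illustrative of the starting-point math):

```python
import math

def toe_in_angle_deg(spacing_m: float, listen_dist_m: float) -> float:
    """Toe-in angle (degrees from straight ahead) that aims each
    speaker directly at the listening position, given the
    distance between the two speakers and the distance from the
    speaker plane to the listener. Pure geometry: half the
    spacing over the listening distance, through arctangent.
    """
    return math.degrees(math.atan((spacing_m / 2.0) / listen_dist_m))
```

For the classic equilateral triangle (e.g. speakers 2 m apart, listener ~1.73 m back from the speaker plane) this comes out at 30 degrees; many setups then back the toe-in off from that starting point by ear.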

I think we must be arguing 'different things'.. (and no one's subjective reality trumps anyone else's; we all employ information filters: most engineers I have met swear by their THX amps as the best things on the planet and simply don't believe they can be bettered; to each their own). I feel we need to focus on the thread question being asked here:
when reviewers talk about soundfield size regarding headphones, are they talking about the recording or the headwear?
Reviewers are talking about the headwear they are testing.
They use familiar test tracks for this task, so they have a baseline reference.
Of course, we are trying to inject that recordings can ultimately alter/create a soundfield beyond any physical soundfield inherent to a room or speaker placement (including headphones)..
So I keep trying to state that soundfields, as we hear them, are 'a little from column A (setup), and a little from column B (recording)'; of course more factors CAN be involved, such as 'a little from column C' (DSP/HRTF encoding; and yes, I know those effects can be recorded into the album.. but in the present/modern world, many earbuds and phones are doing this 'on the fly' to all the music they play, albeit defeatably).

Short answer: yes, I definitely hear sounds beyond my listening space. If I didn't, I would junk the stereo system (because it is junk-fi) and buy one that could correctly resolve the recordings that I know present themselves as being in 'large spaces'.
 
Jun 24, 2022 at 1:12 AM Post #14 of 34
Oops: specific to the last question asked, yes, 'beyond the triangle' of speakers to listener!

If I play Deep Forest's 'White Whisper' (or 'Sweet Lullaby') and don't get panning four metres beyond my speakers' position (in the present room), then I'd scrap the setup and 'start again'.
I actually like finding orchestral versions of albums I like, such as symphonic Deep Forest (Apple Music), symphonic Pink Floyd (London Philharmonic), symphonic Tubular Bells, or even ENZSO (the New Zealand Symphony Orchestra playing Split Enz songs); why?
Because I don't want to play only classical genres when testing out systems/setting up kit.. (I like variety)
As many have come to figure out, symphonic orchestras, with their 'rows of musicians' and large recorded environments, are some of the hardest recordings, genre-wise, to recreate.
(I also like Nine Inch Nails for the signal-to-noise ratio and engineering quality: nothing like having a man whisper over the top of a rock band at rock-out volumes..)
 
Jun 24, 2022 at 10:06 PM Post #15 of 34
edit: @ProtegeManiac
My last post wasn't meant to be arrogant/mean; it just frustrates me when head-fiers who registered aeons ago and have huge post counts get in the public space and shout mistruths..
By this logic I must frustrate many head-fiers with my silliness re: cables/DACs and amps contributing to sound in ways that many do not feel is true, and I likely insult many people's setups, as I know the kit they are using isn't good enough to resolve basic audio truths: like those using entry-level (price point) surround amps who then state that 'all DACs sound the same' (on their system, they possibly do; but they also argue that all amps sound the same, hence as long as they have enough wattage their 'amp is fine').

I'm not trying to sell cables, or, well, 'anything'..
The aforementioned system wouldn't necessarily need to be 'junked', but some room placement consideration (and control) might need to be employed/entertained.
And to make my answer clear: 'yes', to the left of my speaker (or the right), beyond the 'triangle', I get plenty of placement of sounds.

The time that best demonstrated 'correct setup' to me: I was looking at buying a second-hand turntable, and the fellow had done an incredible job with his main speaker placement... I heard sounds on the far walls (or beyond them) and behind me, and I asked, "have you got other speakers engaged?"; with a big grin he said "nope!"
He knew his setup was perfectly tuned; he had had recording-engineer friends (professionals in the audio industry for decades) literally jump when hearing it. It was simply perfect, and the room dimensions and layout did not look like they would perform the way they did.

That week I went back to the drawing board and redid my speaker layouts, to ‘great effect/affect’ (pun intended)

with some effort will come (some) reward.
there are a lot of guides on the internet re: speaker placement..
I suggest reading a few and finding one(s) that work (for you).

It is true some DACs (the circuit, not 'the chip') can make sound 'up front'/in your face, or resolve it much further away, and most audiopeepz seem not to recognise how important a great preamp can be (with regard to stereo imaging width/'prowess')..

When I use a Burson Conductor, well regarded as a 'decent preamp', I find it quite poor at this task vs a Proceed AVP2 (also considered a decent preamp); they are different classes of kit..
The truth is they are both excellent preamps, but it becomes easy, when directly comparing them, to see the effects that 'good kit' can bring to recording playback.
(The Burson shrinks the stage, sure, but is leaps and bounds better than what sits 'beneath it'.)

It is why long-time professional reviewers know to compare kit based on the tiers it belongs to..
My present setup would be mostly tier C kit (or below). I have owned tier A kit, but generally 20 years after it came to market (champagne taste on a beer budget), at which point newer kit in tier A would out-resolve what I have, and hence I'd slip mine into tier B to label it correctly.

Most consumers start with tier E kit (and the world keeps pushing new price points and 'cheaper sound'), so tier F kit is 'a thing' and is what you get with no effort towards the audio chain...
Sadly, many reviewers rate DACs skirting tier D sound quality using tier E kit, and hence say 'said part sounds the same to me' as all other (tier E) DACs..

Of all the friends I presently know with sound setups reaching into five-figure price points, none have 'tier C' quality setups. Tier D can be had for a sizeable step up over entry level (generally each part of the chain costing 3x more money than the audiophile entry-level part). My friends will have a mix of tier D and tier C pieces, and the ones closest to tier C sound get there by buying antiquated gear second hand 'wherever possible'... (like I do)

As an easy example, I have a pile of flagship receivers that cost north of $6000 (AUD) when new, which might cost me $200-600 second hand. They might not have the 'latest surround formats', but used as offboard power amps for 'modern processors' I can get great sound, vastly better than spending many, many thousands on new kit (you still need a good/modern processor though).. used that way, these ancient 'flagship' surround amps can make a decent 'modern' surround system.
They'd be roughly equal to tier D 'stereo' amps though, and ideally I wouldn't use them for two-channel setups.. I split my two-channel and surround setups, and my recordings to suit each: ie compressed-for-radio/mainstream crap goes on the surround setup (in 2-channel mode), and the nice 'high fidelity' recordings (think: well mastered for high-fidelity setups) live on the two-channel/'stereo' setup.

Case in point: a 'budget' NAD 3020, famous 40 years ago for sounding alright for an 'entry level amplifier', will flog/'seriously outclass' a lot of 'new' amps.
The last NAD 3020 I bought was 20 years ago, along with a Rotel RB850 power amp, for $200. No modern $200 amp could touch either of those for stereo sound quality, irrespective of the associated 'spec sheets' and the modern kit 'looking good on paper'.

Most people who haven't heard good two-channel will argue that their kit is 'all that'.. it usually 'is not'.
Having great equipment allows testing variables; too many forum warriors have substandard kit and argue they have 'tried it all' (often thinking a $1000 DAC sounds the same as/equal to a $150 DAC, etc.).
A lot of arguments (such as whether 'cables alter sound') can be easily put to the test on well set-up rigs, but most people do not have a) the equipment, b) a correctly set-up system, or c) the ear training, or an understanding of which select parts of certain recordings reveal obvious differences when making certain changes.

(I figure if I am going to seem arrogant, I might as well go 'all in'.) (I'm not actually an audio snob, and I love building budget setups that can hold their own against much more costly projects. Hint: I read second-hand trading posts.)
A well set-up two-channel rig should resolve sound beyond the speakers. If it doesn't, the culprit can be the source (clock chips and transport quality), the DAC (the circuit, not 'the chips'), preamp, amp, speakers, room setup... and of course, recordings that push boundaries.
E.g. Vanessa-Mae's violin lovers concerto is recorded to sound 'well back'; it doesn't extend much beyond my left/right speakers, but does throw a nice distance beyond the rear wall.
Setup and recording; and 'yes', 'beyond boundaries'.
 
Last edited:
