Soundstage Width and Cross-feed: Some Observations
Dec 12, 2016 at 8:23 PM Post #61 of 241
yup. once the sound starts changing with head movements, the brain stops questioning whether it really comes from speakers. in fact every little thing counts, it's a game of reverse bias ^_^. for example I feel the sound is much more convincing when I'm listening to headphones in front of my actual speakers, and from time to time I end up wondering if the speakers are ON. that almost never happens to me when I don't have the pair of speakers in front of me.
head tracking is that one little extra thing that may let your brain relax and think everything is fine. the game isn't to get the "real sound", whatever that is, but to stop feeding the brain too many contradictory cues.
 
I have the NX head tracker but it doesn't work with Win7 because it uses Bluetooth LE, so it's pretty useless to me (BTW FU Windows, my computer was laggy and 30 degrees hotter because I refused to install one unclear update package!!! please force me to install the telemetry thingy in an even more obvious way...). *deep breath*
so I can only use it on my portable android stuff, with the app that still enjoys crashing from time to time. it's not a globally routed sound path like it is in the basic computer software (the studio mix thing is a VST, not a virtual device), so no movies with head tracking on the train for me :'(.
anyway, if you have a decent webcam and a lot of light, you can demo the NX software on your computer with the head tracking done by the webcam, and see how you feel about it.
the settings with the Waves NX stuff are limited to entering the size of your head and the distance between your ears going around the back. that's used to scale a default head model, which works well for me as a crossfeed setting. but while it's close enough, and maybe better than a few random default crossfeed settings, it's not based on our own HRTF.
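for a rough idea of what a head-size model buys you, here's a hedged sketch: the classic Woodworth approximation derives the inter-aural time difference (ITD) from head radius and source angle. it's a textbook formula, not Waves' actual (unpublished) model, and the 9 cm radius in the example is just illustrative:

import math

# Woodworth approximation: ITD from head radius and source azimuth.
# A bigger head means a longer path around it, hence a larger delay
# between the two ears for the same source angle.
def woodworth_itd(head_radius_m, azimuth_deg, c=343.0):
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# e.g. a 9 cm head radius, source at 30 degrees:
# (0.09 / 343) * (0.524 + 0.5) is roughly 0.27 ms of ITD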
on the other hand the Realiser (way more expensive and demanding if you want to set it up right) will actually measure the sound coming from your real speakers, and then from your headphones, with mics in your ears. so of course the result will be much closer to what it should be. the mics aren't positioned at your eardrums, but it's probably one of the closest things you can hope to get when it comes to individual customization and convincing panning.
or of course you can try plenty of crossfeed solutions with plenty of settings and find one that seems to work fine. I've spent years being very satisfied with xnor's crossfeed. not perfectly satisfied, but satisfied enough to almost never listen to headphones without it (I even convert some music in foobar with it applied, for my DAPs).
 
Dec 14, 2016 at 9:45 AM Post #62 of 241
Hello,
 
To avoid jumping too fast into HRTF experiments, perhaps you could try OutofYourHead, an HRTF-based software simulation of real speaker systems and rooms:
 
https://fongaudio.com
 
Claude
 
Dec 14, 2016 at 11:36 AM Post #63 of 241
A detailed article about cross-feed for headphones and future technologies (not yet available on the market) has just been published here:
http://www.dirac.com/dirac-blog/how-to-make-headphones-stereo-compatible
 
:) Flavio
 
Dec 14, 2016 at 1:46 PM Post #64 of 241
  Hello,
 
To avoid jumping too fast into HRTF experiments, perhaps you could try OutofYourHead, an HRTF-based software simulation of real speaker systems and rooms:
 
https://fongaudio.com
 
Claude

Thank you very much. This was just what I was looking for. It's a bit expensive, but it just might be worth it.
 
Dec 18, 2016 at 8:16 PM Post #65 of 241
Cross-feed is an interesting can of worms.  In very superficial and approximate terms it goes a bit like this.
 
Cross-feed (sending some of the right channel into the left channel and some of the left into the right) creates problems when done by itself. The idea is to recover some of the sound field qualities of speakers from a recording. It starts to work better when you add a little time delay to each of the cross-fed signals; after all, when listening to speakers it takes longer for the right channel's sound to reach the left ear than to reach the right ear, and the ear is very sensitive to that time delay when localizing certain parts of the audio spectrum.
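In code terms, a minimal sketch of that idea (Python/numpy; the -6 dB gain and 0.3 ms delay are illustrative ballpark values, not any published crossfeed design):

import numpy as np

# Minimal delay-based crossfeed sketch. `x` is a float stereo array of
# shape (n_samples, 2) at sample rate `fs`. Each ear receives its own
# channel plus a delayed, attenuated copy of the opposite channel,
# mimicking the extra path around the head.
def simple_crossfeed(x, fs, gain_db=-6.0, itd_ms=0.3):
    delay = int(round(itd_ms * 1e-3 * fs))       # inter-aural delay, samples
    g = 10.0 ** (gain_db / 20.0)                 # crossfeed gain, linear
    out = x.astype(float).copy()
    out[delay:, 0] += g * x[:len(x) - delay, 1]  # right, delayed, into left
    out[delay:, 1] += g * x[:len(x) - delay, 0]  # left, delayed, into right
    return out / (1.0 + g)                       # crude make-up normalization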
 
A second problem is equalization; this comes in two parts:
 
A - Your face is a filter. As the right channel's sound travels around your head to the left ear, its frequency response is altered; that is, it is filtered by the trip. Your mind is very sensitive to this, as all your life you have used this data to help locate the direction a sound is coming from.
 
B - The pinna (the part of the ear that sticks out the side of your head) is also an important filter, giving the mind cues about location.
 
Headphones bypass both the face and pinna filtering by sending sound directly into the ear canal. Physical direction cues are lost, so directionality is harder for the mind to determine, confusing it and inhibiting the illusion of space, time, and what we call soundscape.
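To illustrate point A, another hedged sketch: the head-shadow effect is often approximated by low-passing the cross-fed path, so the opposite channel arrives both later and duller. The 700 Hz corner frequency here is an illustrative ballpark, not a measured HRTF:

from scipy.signal import butter, lfilter

# Approximate head shadowing with a gentle first-order low-pass on the
# cross-fed signal before it is mixed into the opposite channel.
# The 700 Hz corner is an assumption for illustration only.
def head_shadow(mono, fs, cutoff_hz=700.0):
    b, a = butter(1, cutoff_hz / (fs / 2.0))  # first-order Butterworth low-pass
    return lfilter(b, a, mono)

In the simple_crossfeed sketch above, the cross-fed term g * x[...] would then become g * head_shadow(x[...], fs), combining the level, delay, and filtering differences between the two ears.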
 
One more big problem exists. To summarize it, one must remember that recordings are created by listening to speakers while mixing the various sound sources, so the soundscape of the recording is created in that context. Virtually all recordings are done with monaural audio information (one microphone per source: a voice, a guitar, or whatever), so there is no stereo audio in the original audio data. Stereo data is accidentally recorded when a second microphone picks up the signal, such as when one of the drum microphones picks up a guitar. Contemporary recording engineers don't know a lot about this, and their work tends to be very two-dimensional, or flat. When I was doing a lot of recording in the '70s we spent a lot of time doing things like double-mic'ing vocals and adding room-noise microphones to the audio stream to provide this information. We also avoided direct inputs to the mixer, but all this is beyond the scope of this post. It is sufficient to say that the art of recording has gone downhill a lot.
 
I have been working on a spatializer intended for headphones that deals with these and other issues, but I am still thwarted by the lack of good recordings. More on this some other time.
 
Barry
 
Dec 19, 2016 at 3:07 AM Post #66 of 241
Originally Posted by barryt
 
[1] Physical direction cues are lost, so directionality is harder for the mind to determine, confusing it and inhibiting the illusion of space, time, and what we call soundscape.
 
[2] Virtually all recordings are done with monaural audio information (one microphone per source: a voice, a guitar, or whatever), so there is no stereo audio in the original audio data.
[3] Stereo data is accidentally recorded when a second microphone picks up the signal, such as when one of the drum microphones picks up a guitar.
[4] Contemporary recording engineers don't know a lot about this, and their work tends to be very two-dimensional, or flat.
[5] When I was doing a lot of recording in the '70s we spent a lot of time doing things like double-mic'ing vocals and adding room-noise microphones to the audio stream to provide this information.
[6] It is sufficient to say that the art of recording has gone downhill a lot.
 
[7] I have been working on a spatializer intended for headphones that deals with these and other issues, but I am still thwarted by the lack of good recordings.

 
1. Is that really the audiophile term being used these days? That's confusing because in pro audio "soundscape" means something different.
 
2. That's not really true. In acoustic genres, such as classical music, stereo mic'ing is employed, although commonly not only stereo mic'ing. Even if we're talking about rock/popular music, standard procedure for recording the drum kit includes stereo overhead mics. It would be rare for there to be no stereo audio in the original audio data, but even if that were the case it wouldn't matter, because it's not the original audio data which gets distributed. What gets distributed is a "mix", typically a mastered mix, and the mix process involves adding stereo information (reverb, for example).
 
3. When one or more mics pick up an additional instrument, other than the one intended, it's called "spill" and can sometimes cause issues. However, spill does not create stereo data, accidental or otherwise! If, as you (incorrectly) state, everything is a mono source and recorded as a mono source, then any spill into one of those mono recordings is not going to magically turn it into stereo data; it's still going to be mono, albeit with some phase issues when mixed. The stereo data in a mix is deliberate, not accidental: it's either recorded as stereo, mixed as stereo or, typically, some mixture of both.
 
4. With the advent of cheap, accessible computer-based recording systems, it's certainly not uncommon to find people buying a mic, installing a cheap DAW and calling themselves a recording engineer (and/or producer) while knowing next to nothing about stereo mic'ing or indeed many other aspects of recording. However, these bedroom/home studio "engineers" are quite different from real professional recording engineers working in commercial studios, who most certainly do know "a lot about this" (stereo mic'ing)!
 
5. And that's still common practice today, although today we also have the option of creating that digitally/artificially. Again, it's often some combination of both.
 
6. No, it's most definitely not sufficient to say that! What you're stating can be the case, for instance in the example given in #4, but in the case of the bigger commercial studios the opposite is true: the art of recording has improved "a lot" since the '70s.
 
7. You are going to struggle if you don't really understand, or are misinformed about, the "issues" you're attempting to "deal with"!
 
G
 
Dec 19, 2016 at 6:10 AM Post #67 of 241
  Cross-feed is an interesting can of worms.  In very superficial and approximate terms it goes a bit like this.
 
Cross-feed (sending some of the right channel into the left channel and some of the left into the right) creates problems when done by itself. The idea is to recover some of the sound field qualities of speakers from a recording. It starts to work better when you add a little time delay to each of the cross-fed signals; after all, when listening to speakers it takes longer for the right channel's sound to reach the left ear than to reach the right ear, and the ear is very sensitive to that time delay when localizing certain parts of the audio spectrum.
The above-linked article spells out why your simplistic explanation doesn't work.
A second problem is equalization, this comes in two parts ;
 
A - Your face is a filter. As the right channel's sound travels around your head to the left ear, its frequency response is altered; that is, it is filtered by the trip. Your mind is very sensitive to this, as all your life you have used this data to help locate the direction a sound is coming from.
 
B - The pinna (the part of the ear that sticks out the side of your head) is also an important filter, giving the mind cues about location.
Hmmm....well, that's the start of it. The head, face, chest, and pinna are all filters, but they are directionally variable filters both in response and time delay, and not easily simulated.
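To give a hedged illustration of what "directionally variable" implies: binaural renderers convolve each source with a different pair of head-related impulse responses (HRIRs) per direction, so the whole filter shape changes with angle. The HRIR arrays below are placeholders standing in for measured data (e.g. from a public HRTF database), not any particular product's filters:

import numpy as np

# Direction-dependent filtering sketch: each source direction gets its
# own pair of measured head-related impulse responses (HRIRs), so the
# filtering varies with angle rather than being one fixed EQ and delay.
def render_binaural(mono, hrir_left, hrir_right):
    left = np.convolve(mono, hrir_left)    # left-ear response for this angle
    right = np.convolve(mono, hrir_right)  # right-ear response for this angle
    return np.stack([left, right], axis=-1)

The hard part, of course, is obtaining accurate HRIR pairs for every direction and every listener, which is exactly why it's "not easily simulated".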
Headphones bypass both the face and pinna filtering by sending sound directly into the ear canal. Physical direction cues are lost, so directionality is harder for the mind to determine, confusing it and inhibiting the illusion of space, time, and what we call soundscape.
Yes, but of course the recording also "bypassed" the HRTF, unless it was made binaurally. The analysis above would be of concern if the holy grail of all sound reproduction were two speakers in a room. But two-speaker stereo is so critically flawed, I'm not sure why anyone would actually want to simulate only that with any sort of cross-feed system.
One more big problem exists. To summarize it, one must remember that recordings are created by listening to speakers while mixing the various sound sources, so the soundscape of the recording is created in that context.
If your point is that recordings aren't mixed specifically for headphones, right. And that's because two-speaker stereo has been the traditional reference format for many decades. That stuff played on headphones may not be right, but it's acceptable to most people. Try playing a binaural recording on speakers, though... it's awful. They've made the right compromise.
Virtually all recordings are done with monaural audio information (one microphone per source: a voice, a guitar, or whatever), so there is no stereo audio in the original audio data.
No, wrong. There's plenty of real stereo being recorded; there has been since the beginning of stereo, and it continues quite well, thanks. And yes, there's a lot of stereo audio in the original audio data of many recordings.
Stereo data is accidentally recorded when a second microphone picks up the signal, such as when one of the drum microphones picks up a guitar. Contemporary recording engineers don't know a lot about this, and their work tends to be very two-dimensional, or flat.
As gregorio stated, you've just described "spill" or "bleed", and that doesn't provide any sort of stereophony.
When I was doing a lot of recording in the '70s we spent a lot of time doing things like double-mic'ing vocals and adding room-noise microphones to the audio stream to provide this information. We also avoided direct inputs to the mixer, but all this is beyond the scope of this post.
Still done today, more than you (apparently) know.
It is sufficient to say that the art of recording has gone downhill a lot.
Couldn't disagree more. The art of recording has been in continual refinement since its inception. There are some spectacular performances from the '70s, but the recordings are not technically anywhere close to what we are now capable of. You might be locked into some sort of genre-specific analysis here, but it doesn't apply to recording in general.
I have been working on a spatializer intended for headphones that deals with these and other issues, but I am still thwarted by the lack of good recordings. More on this some other time.
 
Barry

The problem I see with a spatializer is that it would have to be adjusted for each individual recording, if not each track, and the result would also have to be calibrated to the individual and his headphones. As far as calibration goes, IIRC, that's what the Smyth Realiser does, though it seems targeted at reproducing a surround channel plan, and I don't recall calibration for specific headphones. The fact that it's replicating a surround format largely spares them from needing to tweak per track.
 
My own experiments with cross-feed on both headphones and speakers led me to believe that you can make it work very well, but only in a very specific set of conditions, and you can optimize it for one person. I did a lot of work in the early 1980s with stereo image enhancement, making the soundstage (that's the word you're looking for, right?) bigger, and outside, above, and behind the speakers. I found you could place a virtual source anywhere in a room with two speakers, but only if the room was acoustically perfect and you didn't move your head out of the vise. Even if your goal is just to place sounds in an accurate 2D position, two-speaker stereo fails.
 
Dec 19, 2016 at 9:07 AM Post #68 of 241
^_^ I liked his post because it covered a few of the general concepts and was easy to read for the guy who has no idea about crossfeed and what it tries to do (or why it usually fails). of course a few points are arguable; any idea that things were better before, where technology is involved, tends to leave a strange taste in my mouth, for example. but all in all I liked the post ^_^.
 
 
about crossfeed and trying to replicate speakers: my personal view is that the industry isn't all that focused on capturing an album's sound as it was from seat C14. that's one big fantasy of the elite audiophile, but the pros don't seem to really prioritize that aspect of sound. the aim usually seems to be more about getting a clean sound and a subjectively good-sounding result. I see recording and mastering as much a technical process as a creative one. the art doesn't end with the band moving lips, fingers, and feet when an album is made. and as such, Neil Young's wet dream about the sound as the band played it (if possible without his hearing) feels like a waste of time to me, the average music user, as that's not what ends up on most albums I love anyway. what does end up there is the track the mastering engineer played on his speakers. so from a fidelity perspective, I feel that this is the almost-achievable target I should aim for: getting to hear what that guy heard when he finalized the job.
I won't make a replica of each different room in each studio where an album I like was recorded, because I'm broke, and because it's madness. so I'm fairly happy with ok-ish speakers in whatever room I can afford. it's still an idea of fidelity, but with lowered expectations and an increased tolerance margin ^_^. a big nice tolerance margin.
 
now with headphones, I strive for the same sound. maybe I'm wrong, but that's what I enjoy and hope to get.
also, without any DSP I'm "visualizing" most singers on top of my forehead, and it's not a lot of fun. I know that not everybody experiences that (lucky you), but that's me on most albums with most IEMs, headphones, and EQs I've tried. so to me default panning with headphones is lame and often annoying. I really can't share most people's satisfaction in that respect. if I had nothing else, sure. but when I can switch between speakers and headphones, headphones never win (is it ok to say that on a headphone forum? ^_^). with some crossfeed that works for me, and some EQ, I enjoy headphones a good deal more. as I mentioned above, head tracking is yet another little thing that I enjoy. it all points toward imitating speakers.
also I tend to find a more grouped panning less fatiguing. I'm very fine with instruments staying within my imaginary field of vision. maybe I'm paranoid, and having a guy behind me, even virtually, makes me think he'll pull a knife out of his guitar like in some spy movie and stab me in the neck, or any other perfectly rational reason to be stressed by a non-existent character. ^_^ my assumptions about the cause are a little vague, but I do tend to find crossfeed less tiring.
 
so to me crossfeed is love.
 
now I'm not a huge fan of binaural recordings. I'm too much of a noob to know why, and that too annoys me a little. maybe because other recording methods give a more intimate sound; maybe because whatever people do at the mastering level when they have a shiitload of tracks isn't only ruining the sound ;). maybe other mics just subjectively sound nicer? maybe I have a giant head full of water and the distance between the mics suits a more average head? maybe I like warm signatures too much on headphones when I should use some overly bright stuff for binaural? maybe all of the above; I really have no idea. it just doesn't work on me the way I expect it to.
 
first world problem, but I don't really have any other type of problem these days.
 
Dec 19, 2016 at 10:10 AM Post #69 of 241
Well, it looks like I stuck a stick into an anthill. Very good.
 
My post was intended to be as simple as possible, given the readers I find here, and to motivate the thousands who have invested in the most minimal and cheapest recording equipment and techniques to upgrade, learn, and improve.
 
Thank you both for your corrections and opinions; it is good to see that there is still some intellect present in an otherwise barren world of audio illusion.
 
As this relates mostly to recording, it is my hope that such passionate explanations encourage the many amateurs in the profession to consider improving their skills and understanding of the art and its applications.
 
As this relates to the mechanics of headphone and speaker reproduction, it provides insight and consideration for further work.
 
Thank you, Barry
 
Dec 19, 2016 at 10:33 AM Post #70 of 241
One thing I've noticed recently is that, other than on extreme hard-panned tracks (e.g. early Beatles stereo recordings), lacking crossfeed doesn't bother me when I'm walking around. But as soon as I'm stationary, crossfeed becomes basically necessary. I *think* this is because when I'm walking the dog around the block with IEMs, my brain doesn't even bother trying to fit the music into the "real world"; it just sort of accepts the artificiality. But when I'm sitting down and stationary, it starts to want a real, continuous soundstage/soundfield.
 
Dec 21, 2016 at 8:29 AM Post #72 of 241
Originally Posted by castleofargh
 
[1] ... my personal view is that the industry isn't all that focused on capturing an album's sound as it was from seat C14. that's one big fantasy of the elite audiophile, but the pros don't seem to really prioritize that aspect of sound. the aim usually seems to be more about getting a clean sound and a subjectively good-sounding result.
 
[2] ... Neil Young's wet dream about the sound as the band played it ...

 
1. There are different explanations here, depending on what sort/genre of music you're experiencing in seat C14.
If we're talking about classical music, say a symphony, then we have to define what you mean by "sound as it was from seat C14". If, for example, you mean "sound as it would appear to an audience member sitting in seat C14", then there is no different focus: a clean and subjectively good-sounding result and what an audience member would perceive in C14 are (in intention) exactly the same thing. If, on the other hand, you mean "the actual sound waves which would hit an audience member sitting in seat C14", then yes, you're correct, that's commonly not been the primary focus of producers from about the '80s onwards.
If we're talking about most pop/rock concerts, then the situation is far more bizarre, because there's hardly any shred of reality to start with, and the actual sound waves hitting our audience member in C14 are a highly compromised re-creation of that artificial/manufactured reality! It's highly compromised for two main reasons: A. Venue size. The necessary positioning of speaker stacks, and the audience's positions relative to those stacks, mean that the stereo image/illusion would completely fall apart for all but a very small minority of the audience. A musician hard panned to the right would sound very noticeably too quiet, or would be completely inaudible, to the left side of the audience. The solution is not to use the stereo image: essentially the audience gets a mono mix, notwithstanding the odd stereo effect (say a delay and/or stereo reverb) and any nasty slapback echoes or other common acoustic issues of live pop/rock venues. B. It's true of most art forms that new genres evolve in response to technological innovation. This is certainly also true of popular music. From the early/mid '60s, music genres evolved to take advantage of new studio technology. By the 1980s much/most popular music had evolved to the point that it was reliant on studio technology, and the '90s saw genres evolve which consisted ONLY of studio technology. In other words, some, most or sometimes even all of the components of the music can only be created in a studio; they never existed as "a live performance" and cannot be performed live.
If the actual sound is so poor, why do people pay to attend these gigs? Often, just playing the music loud in a large space overcomes the SQ issues as far as most are concerned, but on top of that there's probably some very impressive eye candy (video walls, amazing lighting rigs/effects, choreographed dancers, fireworks, etc.), there are the carefully chosen warm-up act/s and the mass audience experience (expectation and excitement), and of course we get to see celebrities strut their stuff (or at least perform a pretence of strutting their stuff!). Audiences are paying for the experience of the show!
 
All of this makes rather a nonsense of what many audiophiles talk about and/or demand. It's a bit like the actor who receives hate mail aimed at an evil character they played: on the one hand it's very flattering, because it demonstrates that the performance/show was obviously so well done that (at least some of) the audience was completely suckered into the attempted illusion/storytelling. On the other hand, it's also quite worrying that some take the illusion way too seriously and that, even long after it's all over, they cannot separate the fictional/illusory reality from actual reality. Notwithstanding the potential security concerns, it's actually quite nice to be recognised for one's skill in having created such a convincing illusion, which obviously isn't going to happen if the audience thinks it's real and never realises that it was a deliberately created illusion. I know some people who still think The Blair Witch Project is a documentary, stitched together from actual "found footage"!
 
2. Marketing. Or maybe he's been doing it for so long that he's actually come to believe his own hype? In all fairness though, his music is effectively a mid-1960s or even earlier genre, and most of it (but not all) can be performed live. Although, as his band is obviously not a purely live, acoustic ensemble, "how the band played it" is only true if one includes the sound engineer in the band. And, considering the recording quality of most of his stuff, he's got a bit of a nerve talking about Hi-Res. A Big Mac meal in a cardboard box doesn't suddenly become a cordon bleu meal if you serve it on a hand-painted bone china plate!
 
G
 
Dec 21, 2016 at 11:59 AM Post #73 of 241
This brings up a couple of thoughts.
 
Listening to non-live music is all about an illusion.
 
When I did recording it was usually related to my work. I was the US researcher and developer for a company called Foster Electric; you know them as Fostex. Our primary product was linear electric motors; audio people saw them as planar microphones, headphones, and loudspeakers, although the big business was linear positioning, from record and playback heads in hard drives all the way through maglev trains, etc.
 
I recorded to learn about my equipment. Before I recorded a group I asked the question, "Where is the listener supposed to think he is when he listens to this work?" I asked the writer, if available, the same question. This thing you created, made of rhythm, melody, ensemble, and effects, is supposed to do something to a person's brain, to put them in a place and time when it happens; where and what is that circumstance? Where and when is this piece of art, this human artifact in the abstract, to exist, and why? Most had never given much if any thought to that, so the question opened many a door. How could the rendering of this piece of music in an imaginary space-time be expanded, and thus embolden the experience the listener has?
 
Reproduced music is a very special artifact of mankind. It goes beyond simple messaging or communication because mankind thinks beyond simple messaging and communication. We "feel" (whatever the hell that means; it is so personal), we dream, we remember, we relate, we empathize, and we have epiphanies. It is not coldly calculated but warmly savored.
 
It is all about the illusion, and each of us perceives illusions differently. Life itself is an illusion, if for no other reason than that we run about half a second behind real time on the output of our senses; and with that we could tackle all the philosophers from the ancient Greeks on, which is not why we are here now reading this.
 
In their essence, illusions are good and cannot be judged; they simply are.
 
Barry
 
Dec 22, 2016 at 6:58 AM Post #74 of 241
 
Listening to non-live music is all about an illusion.

 
In terms of POV/audio perspective/soundstage this statement is true; however, it's also true of virtually all commercial audio content, to a greater or lesser extent. In the case of most popular music genres it's doubly true, because not only are we creating an illusion, we're creating an illusion of some abstract place, a place which doesn't and actually couldn't exist. It would not be uncommon on a rock track to have a fairly big, medium-hall type reverb on the snare drum, little or no reverb on the kick drum, a big stereo delay on the lead or rhythm guitars (implying a very large hall/arena), a plate or smaller room type reverb on the lead vocal, a bigger plate or room on the backing vocals, etc. The end result is hopefully something which sounds like a performance, even though it was probably recorded as a whole bunch of different, individual performances. But even if it does sound like a performance, where is this performance taking place? It's a deliberate mish-mash of different acoustic places/spaces which simply couldn't exist simultaneously in the real world. This sort of thing has been standard practice in rock/pop production for decades, and why it ends up working, rather than the brain just rejecting it as utter nonsense, is not, I believe, currently well understood.
 
Even if we're talking about live acoustic music, say (again) a symphony orchestra, standard practice would be to record a stereo pair (or an array) plus some spot mics. However, this causes an acoustic discrepancy: we've got the spot-mic'ed instrument/s sounding like they're only half a meter from the listening position rather than many meters away, as they appear in the main array (and as they would be in a real, live performance). In practice we'd have to move the spot mic recordings later in time, and maybe apply some EQ and reverb, to eliminate/reduce this acoustic discrepancy and manufacture the illusion of a single coherent acoustic space/listener position.
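As a rough illustration of that time-alignment step, a hedged sketch (the distances, function name, and speed of sound are illustrative assumptions; real sessions align by ear and by measured mic positions):

import numpy as np

# Align a spot mic to the main array by delaying the spot track by the
# extra acoustic travel time from the instrument to the array.
def align_spot_mic(spot, fs, dist_to_array_m, dist_to_spot_m, c=343.0):
    extra_time = (dist_to_array_m - dist_to_spot_m) / c  # seconds of lag
    n = max(0, int(round(extra_time * fs)))              # lag in samples
    return np.concatenate([np.zeros(n), spot])           # pad start to delay

For example, an instrument 0.5 m from its spot mic and 4.0 m from the main pair implies (4.0 - 0.5) / 343, roughly 10 ms, which is about 450 samples at 44.1 kHz.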
 
  It is all about the illusion, and each of us perceives illusions differently ...

 
This isn't strictly true. The reason it's not strictly true only becomes clear when we start looking in detail at what is illusion, at what is just a creation of perception rather than an actual physical property of sound. This is a rabbit hole, a much deeper rabbit hole than most even suspect; deeper than even many of those whose livelihood depends on these illusions are aware! Musicians are well aware that harmony (and harmonic progression) is an illusion/perception, a perception which is manipulated to create emotional responses and is a fundamental compositional tool. However, far fewer musicians realise that the fundamental building block of harmony, the musical "note", is itself just a perception. We may have different perceptions of a particular note, some for example might know or be able to deduce the pitch of the note, but that it is a "note" is a perception we all share, and we therefore take its existence for granted. It turns out that many of the things we take for granted don't actually exist; we just assume they exist because so many of us share those perceived illusions. Loudness would be another good example: we might have differing opinions/perceptions of how we quantify loudness, of what is too loud (or too quiet) compared to someone else's opinion, but the fact that loudness exists is not generally questioned, because we all share the same basic perception of the illusion of loudness.

Generally, in order to survive in society, we need to accept that these shared perceptions/illusions are in effect "real", even if we know them not to be. So generally, whether we know or even question what is real and what is a shared illusion/perception doesn't make much difference to how we live our lives, and many people therefore simply don't question; they're either just not interested and/or deliberately avoid such questions because they represent a threat to their understanding of the world and their place within it. This is the root of virtually all disputes with the extreme audiophile community: their absolute certainty that perception and reality are the same thing, and their treatment of anyone who questions that understanding as a threat, regardless of the amount of irrationality the defence of their position requires! Unfortunately for them, that leaves them open to all sorts of manipulation and abuse from those with snake oil products to sell, but on the fortunate side, that same understanding also means they'll probably never realise they've been fleeced!
 
Before I recorded a group I asked the question, "Where is the listener supposed to think he is when he listens to this work?"

 
That's not generally a pertinent question in music production, as it's generally just assumed that the listener is in an actual (or some idealised) position in the middle of, and some short distance back from, the real (or illusory) performance stage. More pertinent might be where we decide to position individual performers/instruments on that stage, although technicalities may dictate some of those positions, and convention some others. Your question is far more pertinent in film, though. Typically, listener perspective is the same position as the camera (the viewer's POV/perspective), but this isn't always the case; there are occasions where we position the listener closer than the position of the camera. Sometimes we deliberately employ a conflict, where certain elements of the sound mix are positioned closer than they should be relative to the listening position of the rest of the sound mix, and occasionally we take an entirely different perspective, say a character's POV. We may be seeing a character centre screen and some distance in front of us, but hearing from that character's point of view, as if we were that character. This can often present some interesting sound design opportunities, as we (the audience) are now hearing the soundscape as filtered by that character's consciousness. We almost always have some sound design opportunities, though, because the requirement is to create a believable soundscape, not a perfectly accurate/real one, and the difference between real and believable is the area where sound design operates in order to manipulate the audience.
 
G
 
Nov 29, 2017 at 2:02 AM Post #75 of 241
I always listen to music through headphones with crossfeed. 112dB Redline Monitor is the best, in my opinion. Isone Pro is also good, but it sounds too boomy.
 