Multi-Mic'ing and a Coherent Soundstage
Jun 20, 2018 at 12:17 AM Post #46 of 60
You have a good point. I don't find the insults and other extraneous back and forth entertaining--quite the opposite--I find them unpleasant. I just would like to know what the core point of the argument is, sans the extraneous stuff. If you've got a handle on that and could explain it to me I would be grateful.

Edit::)

So we are talking about mic placement and producing a coherent sound stage. A practitioner knows how to get it done in the real world. A theorist could be extremely instructive on a more abstract level, but he's not going to know that if you do A and B it sounds really nice in the studio recording. We have to get past the discussion of whether one type of knowledge is inherently superior to the other. It's not constructive. Either person could run circles around the other on his own playing field.

I am seeing a) a discussion about sound stage and sound localization, b) one person who knows how it is achieved in practice in a recording setting, c) one person with a theoretical engineering background on the subject, d) crossfeed coming into the dispute, and e) the idea of preserving artistic intentions.

a) is the over-arching fascinating and highly complex topic
b) includes real-world experience and is again a fascinating and complex subject--as one who really likes listening to music, learning how recordings are made is very engaging to me.
c) involves a theoretical background that may clash with real-world practitioners' practical experience or common practices--it appears that the human ability for sound localization is astonishing
d) crossfeed--because headphones will never be speakers, and headphones will never give you the same sound as speakers, I like a little, just to take the edge off of some of the exaggerated effects, and to me if someone else doesn't like it, or likes a different kind, I've got no issue with that, and
e) is a bit of a red herring if you are using headphones, and is an extremely loaded subject in general, even with speakers. Whether a consumer should be focused on preserving artistic intent or desires to preserve artistic intent is a minefield for argument. I'll take a pass. Getting to artistic intent is not going to happen on headphones, so perhaps it is better left out of the argument, or at least focused on data showing what people prefer, if that is available, or each person coming into the discussion and dispassionately stating their preference on the subject as it relates to headphones, and maybe it's like I like spaghetti and you prefer pizza, and there's no need to argue about which is better. Bad analogy, I know, but I am just throwing something out there. For speakers, how a recording can get there and how a consumer can get there gets to the core of the whole discussion, but we have to grant that consumers will be all over the place in what they think about it. Many will shrug their shoulders and say, hey, I like this music.

For speakers, again, preserving artistic intent on the consumer end is a topic where each person should perhaps state his or her personal preference and agree to disagree if that's where it comes down. Me, I'm kind of like, to a point, I do want to get close, but I am going to interject my sound preferences and some shortcuts as well, partly because I have no technical expertise. When I sit there and I say, wow, I like this, I am pretty much good to go. I do like to experiment with it out of curiosity too. I don't think most people are like that.

The heated subtext of the argument seems to be practical experience versus theoretical training and which is more authoritative. That goes on all the time in a lot of fields. Professors versus practitioners, etc. It's an argument waiting to happen, and it's often not very fruitful. If it could be left off to the side it would be helpful. Everyone has a lot to bring to the table. Both individuals have shown great ability and knowledge in their fields when they are on top of their game.

For the consumer, for speakers, I have one humble piece of advice if you want to get to artistic intent or back to the recording studio (figuratively)--get a nice subwoofer! And use it tastefully! I think mine was like $500, but a low E on a bass is an effortless LOW E ON A BASS (i.e., about 41 Hz, straight up, not inferred from harmonics), and the below-41 Hz content, if it's there, really adds to the visceral effect. I feel much more like I'm getting closer to the studio.

So that's what I've got.

Or maybe this was a lot of effort for little practical benefit. Well, actually I learned a ton. The professional tools are extraordinary and mind-blowing and the theory is extraordinary and mind-blowing. So thanks to everyone for that aspect of it. As for crossfeed, I seem to prefer the "H" topology, or the Meier, and when the theory is explained to me, the idea that the Meier is somewhat adaptive, won't even mess with a mono signal at all, is based on simulating a wider sound angle, and seems to have less of a tendency to color the sound strikes me as pretty sophisticated. I'm sort of drawn to what appears to be the more conservative approach. Maybe that represents a closer match to artistic intent. On the other hand, artistic intent on headphones is a tough nut, and maybe I am biased because Jan Meier was once very kind and patient with me and worked with me on making a three-setting crossfeed in an amp suited just to my preferences.

You have a hell of a lot more patience than I do. I think it's a waste of time because you have to listen and attempt to understand to learn, and I don't see anything remotely resembling that going on here. But if you find it entertaining, I guess it has a purpose after all.
 
Last edited:
Jun 20, 2018 at 6:00 AM Post #47 of 60
....
 
Last edited:
Jun 20, 2018 at 6:49 AM Post #48 of 60
Great post Steve999!

The equation for ITD is corrupted in your post:

ITD = r * (θ + sin θ) / c

r = radius of head (0.085 m)
c = speed of sound (345 m/s)
θ = angle of sound in radians

For example, for sound coming from a 45° angle (= π/4 rad): ITD = 0.085 * (π/4 + sin(π/4)) / 345 ≈ 368 µs.
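The worked example is easy to reproduce numerically. Here is a minimal Python sketch of the same formula (a Woodworth-style spherical-head model); the head radius and speed of sound are just the values quoted above, not measured constants:

```python
import math

def itd_seconds(theta_rad, head_radius_m=0.085, speed_of_sound_ms=345.0):
    """Woodworth-style ITD model: the extra path length around a
    spherical head, divided by the speed of sound."""
    return head_radius_m * (theta_rad + math.sin(theta_rad)) / speed_of_sound_ms

# Sound arriving 45° off-center (π/4 rad):
itd_us = itd_seconds(math.pi / 4) * 1e6
print(f"ITD = {itd_us:.0f} us")  # prints "ITD = 368 us", matching the worked example
```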
 
Jun 20, 2018 at 8:16 AM Post #49 of 60
d) crossfeed--because headphones will never be speakers, and headphones will never give you the same sound as speakers, I like a little, just to take the edge off of some of the exaggerated effects, and to me if someone else doesn't like it, or likes a different kind, I've got no issue with that, and
e) is a bit of a red herring if you are using headphones, and is an extremely loaded subject in general, even with speakers. Whether a consumer should be focused on preserving artistic intent or desires to preserve artistic intent is a minefield for argument. I'll take a pass. Getting to artistic intent is not going to happen on headphones, so perhaps it is better left out of the argument, or at least focused on data showing what people prefer, if that is available, or each person coming into the discussion and dispassionately stating their preference on the subject as it relates to headphones, and maybe it's like I like spaghetti and you prefer pizza, and there's no need to argue about which is better. Bad analogy, I know, but I am just throwing something out there. For speakers, how a recording can get there and how a consumer can get there gets to the core of the whole discussion, but we have to grant that consumers will be all over the place in what they think about it. Many will shrug their shoulders and say, hey, I like this music.
d) You acknowledge the benefits of some crossfeed and you feel it improves your listening experience. That's good. I think in general even those who love crossfeed are skeptical about stronger crossfeed, because they feel it narrows the sound too much. Often that is the case, because spatiality is reduced to something smaller than it is in reality. Proper crossfeed reduces excessive stereophony to natural stereophony; if you reduce spatiality further, you just take steps toward mono sound. Recordings have different amounts of excessive stereophony. Since you have a 3-level Meier crossfeeder, I encourage you to test (taking your time, because of excessive-stereophony deafness) which level suits each recording best. Crossfeed level matters, and when you get the level right for the recording the result is spatially rewarding. Acoustic crossfeed with speakers is much stronger than the typical crossfeed level, and probably stronger than the strongest option in your 3-level Meier crossfeed. With speakers, reflections and reverberation make the total sound a bit wider, which is why you usually want milder crossfeed with headphones to compensate for that. Playing with the crossfeed levels teaches one to hear when the level is near the proper one giving the best results. It's like learning to hear when the level of bass is correct.
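As a rough illustration of what adjusting the crossfeed level does, here is a level-only sketch in Python. This is a toy, not Meier's actual circuit (a real Meier-style crossfeeder also low-pass filters and slightly delays the fed-across signal), and the 0.3 gain is an arbitrary placeholder:

```python
def crossfeed(left, right, gain=0.3):
    """Level-only crossfeed sketch: feed an attenuated copy of each
    channel into the opposite ear. gain=0 leaves the signal untouched;
    higher gain narrows the stereo image toward mono."""
    out_l = [l + gain * r for l, r in zip(left, right)]
    out_r = [r + gain * l for l, r in zip(left, right)]
    return out_l, out_r

# A mono signal (identical channels) stays symmetrical; only its
# level changes, which echoes the point that crossfeed shouldn't
# "mess with" mono content beyond gain.
mono = [1.0, 0.5, -0.25]
out_l, out_r = crossfeed(mono, mono, gain=0.3)
```

Sweeping `gain` upward mimics stepping through the levels of a multi-level crossfeeder: too little leaves excessive stereophony, too much collapses toward mono.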

e) Well, I am not a sound engineer, but surely artistic intent is a collection of different sonic properties? I suppose headphones are worse than speakers at some of these properties (such as size of soundstage/depth of sound), but even better at some others. I'd say for many, headphones give a closer presentation of the studio acoustics (where the recording was mixed) than speakers in a typical reverberant living room.

As for crossfeed, I seem to prefer the "H" topology, or the Meier, and when the theory is explained to me, the idea that the Meier is somewhat adaptive and won't even mess with a mono signal at all and is based on simulating a wider sound angle and seems to have less of a tendency to color the sound seems pretty sophisticated. I'm sort of drawn to what appears to be the more conservative approach. Maybe that represents a closer match to artistic intent. On the other hand, artistic intent on headphones is a tough nut and maybe I am biased because once Jan Meier was once very kind to me and patient with me and worked with me on making a three-setting crossfeed in an amp suited just to my preferences.

"H"-topology crossfeeders tend to have an "impressive" sonic signature and many like Meier's crossfeeders. Nothing wrong with that. I find it "impressive" too. My smallest/simplest DIY crossfeeder has been one for a portable player:

[image: ipodcf.jpg]


This tiny crossfeeder has 2 crossfeed levels and is the simplest possible "H"-topology design apart from the 2-level property. Since I have used this one outdoors, its not-so-perfect performance isn't really an issue. On the contrary, the environmental sounds are incorporated into the crossfed music, which helps the hearing get an illusion of a real soundstage. The result can be amazing when listening to, for example, church music (e.g. J.S. Bach's cantatas). The environmental sounds make it all sound very big! But it works only with certain kinds of music.

"X"-topology crossfeeders are less impressive and kind of give the stage for the music to impress the listener. I like the calmness and naturality of that. "H"-topology is kind of a popcorn flick with stunning visual effects, while "X"-topology is a smaller budget art movie with high quality writing. Both have their place in my heart.
 
Jun 20, 2018 at 8:30 AM Post #50 of 60
[1] The heated subtext of the argument seems to be practical experience versus theoretical training and which is more authoritative. That goes on all the time in a lot of fields. Professors versus practitioners, etc. It's an argument waiting to happen, and it's often not very fruitful.
[2] I am seeing a) a discussion about sound stage and sound localization, b) one person who knows how it is achieved in practice in a recording setting, c) one person with a theoretical engineering background on the subject, d) crossfeed coming into the dispute, and e) the idea of preserving artistic intentions.

I agree that it does seem like that, "professors vs practitioners" and this is precisely a part of why I am getting angry with 71dB. He is deliberately misrepresenting the facts to make it seem like this! 71dB is not a professor, he simply learned some of the theory as a student. On the other hand I was actually a university professor (or more precisely, a senior lecturer) for a number of years and was responsible for teaching the theory to degree students AND, 71dB knows this from prior exchanges! This is just one example of many and not even the worst example.

Therefore:
(2b) You are seeing one person with many years of practical experience in a recording setting, a good understanding of the theory and engineering, and a good idea of the crossover between the two. In fact, this is true of most/all experienced professional sound and music engineers, we are after all called "engineers" and a good understanding of the theory is therefore a fundamental requirement!
(2c) Another person with some engineering background (in a related subject) but virtually no knowledge of the theory or engineering aspects of recording and mixing, little/no knowledge of the actual practical application either, little/no knowledge of the theory or practice of music construction/creation AND also therefore, little/no understanding of the relationships between all these things or how they impact each other. None of this is really in dispute, 71dB explicitly or tacitly admits all this! His argument is essentially: None of this is of any importance/value, the only thing of importance is the laws of how sound reacts/propagates in the real world and how we hear that "real" sound.

This argument fails in two respects:

Firstly, virtually no commercial music recordings adhere to those laws in the first place! Music recording/production started moving beyond that even as early as the 1950's, when it was discovered that consumers preferred enhanced recordings, recordings which more closely matched what would be perceived at a live acoustic music event, rather than what would actually be heard. Starting in the 1960's this concept was taken to a whole new level and modern (pop/rock and other) genres evolved where there is not even any concept of a "real" sound/acoustic space! The acoustic space is a complete fabrication, a mish-mash of various different real and artificial acoustic/spatial information all occurring simultaneously. A bit like a cubist painting in a sense but usually (depending on genre and intention) mixed and processed in such a way that it doesn't appear quite so obviously "Cubist". 71dB's response to all this is effectively: Artists/Engineers are uneducated and/or "blind" to the fact they are breaking the rules/laws of hearing and "spatiality", the recordings are not "proper" and that engineers/artists should be allowed to do whatever they want artistically as long as they don't break these rules/laws. The problem with that of course is that it effectively eliminates pretty much ALL popular music genres of the last 50 years or so!

Secondly, crossfeed doesn't fix the issues with HP presentation anyway! You would in theory need two things to fix it: 1. An accurate (for the individual) HRTF: Simply put, a HRTF is a fairly complex series of equations that combine to form a "transfer function" which attempts to account for the effects of the pinna, torso and head itself, in terms of timing/phase differences between the ears, level differences and a complex frequency response curve required for the absorption and reflection characteristics of the head, torso and pinna. 2. Even a theoretically perfect HRTF would still only get us part of the way there, in effect it would be like hearing a recording through speakers in a perfect anechoic chamber, which would sound pretty strange as recordings are not made in anechoic chambers or intended to be reproduced in them. So in addition to the HRTF we would need to apply a room reverb.
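The two-stage chain described here (HRTF filtering, then room reverb) amounts to a pair of convolutions per ear. A toy Python sketch follows; the three-tap "HRIR" and sparse "room" impulse response are made-up placeholders standing in for measured data, purely to show the structure:

```python
def convolve(signal, impulse_response):
    """Direct-form convolution (fine for the short toy IRs below)."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

def binauralize(signal, hrir, room_ir):
    """Sketch of the two-stage chain: HRIR first (head/pinna/torso
    filtering for one ear), then a room response so the result
    doesn't sound like an anechoic chamber."""
    return convolve(convolve(signal, hrir), room_ir)

# Hypothetical 3-tap "HRIR" and a sparse "room" IR (direct sound
# plus one echo). Real HRIRs and room responses are far longer.
hrir = [1.0, 0.4, 0.1]
room = [1.0, 0.0, 0.0, 0.3]
ear_signal = binauralize([1.0, 0.0, 0.0], hrir, room)
```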

Compared to the sophistication of what is actually required (HRTF + reverb), crossfeed is extremely crude, and 71dB's fixation on just one simple part (the interaural level difference, ILD) of just the HRTF does NOT agree even with just the theory of how we hear sound localisation! All this stuff about crossfeed making spatial information more "real", "natural", "proper", accurate or whatever is a red herring, complete nonsense in effect, not least because there is no "real" in the first place!! This doesn't mean there cannot be some perceived benefit to simple crossfeed, but it depends on the material you're trying to crossfeed, your personal perception and your personal preferences.

G
 
Jun 20, 2018 at 8:55 AM Post #51 of 60
Edit: Here's an academic PDF, "Introduction to HRTFs," that seems to go over some of the relevant subject matter in broad outline form. Some of it I get, and some of it is, well, math.:kissing_smiling_eyes:

http://legacydirs.umiacs.umd.edu/~ramani/cmsc828d_audio/HRTF_INTRO.pdf

Okay, I hate to drag this down to my level, but:

1) It seems to me that at subwoofer frequencies (I am ballparking 41 Hz and below based on modern setups, but I know 80 Hz is often the recommended crossover setting to mesh with modern home stereo technologies) our localization of the sound from speakers is greatly reduced, but a subwoofer can greatly add to realism or artistic intent for the consumer by directly reproducing what is there, rather than leaving it to be inferred from harmonic cues or left out altogether. With headphones, on the other hand, directionally placed lower frequencies can really mess with one's perceptions. Now this is based on practical experience as a consumer and some reading. Is it wrong in some respects? (Yes, I am inviting you guys to criticize me! Please be gentle.)

2) What, in layman's terms (not math), is ILD, and where does it fit into creating a coherent soundstage? I do not expect you guys to agree on this but I just plain don't get it.

3) Do the incredibly complex professional mixing tools automatically incorporate ILD effects, as a matter of course?

I am just throwing out what I am wondering about and not getting as a layperson.

4) I think I get HRTF more or less, to the extent I am going to. How the sound hits your body and your outer ears affects how it arrives at your eardrums and your brain, and is an important part of hearing perception. True? Is there some dispute here as to whether HRTF concerns swamp ILD effects?

That’s about where I am, I think. The best I can tell you about my relevant knowledge on the subject, other than being a hobbyist, is that the toughest course I ever took in college was engineering statistics—I did it to challenge myself because I did very well in and was fascinated by calculus, but engineering statistics was a step beyond for me, incorporating every single field of math I had ever come across in new ways. And I suppose I play, or did play, a couple of instruments very badly and was a music minor in college and played in group performance settings just a little. So that was my limit on that front. Anyone who makes it beyond that has my sincere respect. And I’ve heard a ton of great jazz in person (before many of the innovators started passing away) and a decent amount of live classical and to some lesser extent live major-league pop music (honestly the concerts are just plain too loud for me sometimes and I don’t want to put myself through that). I am always surprised at the raw talent of the big-league pop musicians that is evident when you see them in person in their prime. But that’s another subject.
 
Last edited:
Jun 20, 2018 at 11:37 AM Post #52 of 60
the basic principle of panning is simplified ILD set by ear.

I remember reading some paper where ITD was almost irrelevant when it came to subjects trying to find the lateral direction of a sound source. not that ITD alone can't help locate things, it sure does. simply that the accuracy from having only ILD cues was basically the same as having ILD+ITD.
which is something audio engineers must have known for a long time to decide that a pan knob per track was good enough to make stereo out of mono.

same paper suggested that head tracking was the real deal and the only solution really giving almost 100% accuracy in locating the right source(including front and back, the real tricky stuff when you don't have the signature change from your own ear).


all that obviously only matters if we start with the idea that the record has accurate directional information to begin with. which is really not a viable assumption in practice. so the other approach is to consider that stereo speakers have the right presentation and position cues, then we move from that to headphone presentation and see what changes. it's a different reference but at least we sort of know where we are. speakers are usually set at 60°, so with crossfeed or HRIR, we can somehow settle for only 2 fixed positions and just run the music through that compensation. but if head tracking is involved, and logically it should be for anything to feel natural, then we need a full set of compensations, and the closer to an actual complete HRTF, the better our experience.

usually the heart of the debate with @71dB is that he argues that headphone playback is plain wrong and anything compensating for some cues is an improvement. which is correct objectively. where a bunch of people disagree is with the assumption that partial cues necessarily work better than no cues subjectively. and that's a more delicate matter, as there are many known examples of stuff done wrong that feels more natural or maybe pleasing to people. when presented with some correct cues and others conflicting with that first information, is the result always going to be better than the brain fully knowing things are nonsense and just listening to the music without bothering too much with placement? the answer probably is "it depends". and when we ask people how they feel about crossfeed, that seems to be the answer too. in favor of 71dB, many people simply don't know how to properly set their crossfeed, or use one without the option for a proper setting. so with proper settings, it is likely that more people would call it an improvement. but not all, and certainly not on all music. in the end crossfeed is still just a partial approximation of localization cues. not less, but also not more.


edit: a bunch of my statements about the paper are from memory and don't include considerations about the frequency of the signal; for an accurate representation they should, but I went for the easy-to-get points. shoot me if you deem it necessary, I deserve it.
 
Last edited:
Jun 20, 2018 at 12:09 PM Post #53 of 60
Thanks. That's super-helpful.:beerchug:

I still want to know about the bass thing though, and the subwoofer thing, and how a subwoofer fills in those lower frequencies better, and how headphones mess up the general non-directionality of lower-mid-to-low bass frequencies that exists in open space. I honestly think that's an important piece of getting back to the intent of the recording and the artist. For me a subwoofer is a really quick and efficient way to help you to get there, including in terms of soundstage (assuming that what you have set up already is in reasonable shape).

 
Last edited:
Jun 20, 2018 at 12:48 PM Post #54 of 60
Signal processing is great. If you like it, use it. No one will tell you not to. But it's signal processing, it isn't signal restoration. The only way to hear it "the way the engineers intended it to sound" is to listen to it the way the engineers did. Anything else is a compromise. It may be a convenient compromise, and you may actually prefer the alteration of the sound better than the way it originally sounded. Try it. If you like it, keep doing it. Suggest other people try it too. All that is great. Just don't suggest that your altered sound is somehow closer to unaltered. That's what gets the circular arguments started. And ultimately, it doesn't even matter whether a signal is altered or not. Listeners can do whatever they want to the sound they put in their ears.

The real problem here isn't with the method of reproducing sound. It's with the method of communicating with others. There are too many bubbles here with people holding extended conversations with themselves and talking at other people. It's unnecessary clutter. What I do when someone keeps blathering on about a single topic and making the same points over and over again without listening is I tune those people out. If you don't want me to read your posts past the first line or two, posting twelve paragraphs of unorganized rehash is a great way to accomplish that. I think it's better to write succinct, organized, easy-to-read paragraphs without tearing the post I'm replying to into tiny shreds of disjointed quotes. I also try to avoid hyper-emotional attacks or attempts to elicit pity.

The points made on these three subjects that keep overpowering this group have been made a hundred times now. Everybody has a very good idea of how the whole thing could be summarized and disposed of so we can move on. But for some reason, folks are focused on dragging out their pet topic/dead horse over and over and whip up a flood of redundant words that buries all other conversations. It's really not a problem of philosophy or science. It's a problem with ego and people being too in love with their own words. I wish we could focus on ideas again.
 
Last edited:
Jun 20, 2018 at 2:01 PM Post #55 of 60
1) it seems to me that at subwoofer frequencies (I am ball parking 41 hertz and below based on modern setups, but I know 80 hz is often the recommended setting to mesh with modern home stereo technologies) on speakers at that point our localization of the sound is greatly reduced, but a subwoofer can greatly add to realism or artistic intent for the consumer by directly representing what is there rather than left to be inferred from harmonic cues or left out altogether.
[1a] With headphones on the other hand directionally placed lower frequencies can really mess with one’s perceptions.

1. That's essentially correct. There are a few points to consider though: A. Our hearing becomes progressively less sensitive in the low frequencies and we need high levels of LF to hear anything; with your example of say 41 Hz, the high levels are likely to be more physically felt (on the body) than aurally heard through the ears. In a big live pop/rock gig there are typically tens of thousands of watts worth of subs, but how much is very genre specific. For example EDM and electronic genres commonly make very specific use of subs while heavy metal isn't so specific but requires a fair amount of sub power for the "heavy" kick drum thump. Of course, there's nothing much headphones can do about this, with or without crossfeed or HRTF. That physical feeling can only be achieved by actually having a sub. B. With many music recordings there's little or nothing down below about 50Hz which is of any particular concern; it's mostly unwanted noise. C. It's relatively rare to have large amounts of wanted low frequency content panned/positioned very far from the central position. There's little to be gained by doing this with speakers, as we're so insensitive to the directionality of low frequencies from speakers (or in real life). For this reason, it doesn't really matter where we place the sub in a room (or at a gig) as far as directionality/localisation is concerned; positioning isn't at all critical as it is with the left and right speakers.

1a. Intrinsically, all music is specifically designed to "really mess with one's perceptions"; indeed, the very existence of music relies on this fact. So the question becomes: is it intentional that your perception is being messed with by the directionality of low freqs in HPs? There's no real way to know for sure, but if one has a good understanding of music production techniques one can usually make a pretty good guess.

2) What In layman’s terms (not math) is ILD and where does it fit into creating a coherent soundstage. I do not expect you guys to agree on this but I just plain don’t get it.

2. ILD - Interaural Level Difference: the difference in the level of a sound between the two ears. A sound to our right will hit our right ear before it hits our left ear, and at a higher level. The time difference is called ITD (Interaural Time Difference) and the level difference is called ILD. In addition, the sound that hits our left ear will have a different spectral content, due to "head shadow", the head masking/absorbing various freqs. In fact there are various effects that occur which the brain uses to determine sound location and, to complicate matters further, the relationship between all these factors varies with frequency. For example, for frequency content above about 1600 Hz, ILD plays a far greater role in sound localisation than ITD; then there's a transition zone roughly between 900 Hz and 1600 Hz, below which ITD plays the greater role. A "coherent soundstage" is a difficult thing to pin down; in reality virtually no commercial music recordings have a coherent soundstage, typically not even vaguely close to one, yet many recordings are perceived to be at least moderately close to one. Without going into a lot of detail about music production techniques and artistic intention, it's difficult to explain why this is so and how it's achieved.
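The frequency-dependent split described above (the classic "duplex theory") can be summarised in a tiny helper. The 900 Hz and 1600 Hz boundaries are simply the figures quoted in this post; published values vary and the real transition is gradual, not a hard switch:

```python
def dominant_localisation_cue(freq_hz):
    """Rough duplex-theory summary: ITD dominates lateral localisation
    at low frequencies, ILD at high frequencies, with a mixed
    transition zone in between. Boundaries follow the figures quoted
    in the post, not any particular study."""
    if freq_hz < 900:
        return "ITD"
    if freq_hz > 1600:
        return "ILD"
    return "transition (mixed ITD/ILD)"

# e.g. a 200 Hz bass note is localised mainly by timing differences,
# a 3 kHz consonant mainly by level differences.
```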

3) Do the incredibly complex professional mixing tools automatically incorporate ILD effects, as a matter of course?
4) I think I get HRTF more or less, to the extent I am going to. How the sound hits your body affects how it gets to your ears and to your brain, and is an important part of hearing perception. True?
[4a] Is there some dispute here as to whether HRTF concerns swamp ILD effects?

3. Actually, ILD is just about the most fundamental and basic of mixing tools. Creating level differences between left and right channels is what a "pan pot" does, and along with gain (faders), pan pots are the most fundamental of tools, found on even the simplest, oldest stereo mixing desks. Manipulating level differences between left and right is relatively easy mathematically/electronically, works fairly universally with both speakers and HPs, and is far more tolerant of imperfect positioning (of the speakers, and of the listener relative to the speakers) than timing differences are, although of course, while it works with HPs, it's represented differently. Starting in the mid-1960s or so, timing difference information was also employed, in addition to level differences. It was relatively rudimentary/limited to start with, but more options and sophistication were discovered/invented throughout the 1960s, different genres became more reliant on it, and it took another big jump in the 1970s when digital processing of timing information became available. Even today though, the pan pot is still the primary left/right positioning tool, although it's virtually always supported with some form of timing information.
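For anyone curious what a pan pot actually does to the signal, here's a hedged sketch of one common curve, a constant-power pan law with a -3dB centre. Real desks and DAWs use various laws (-3, -4.5 or -6dB at centre are all found), so this is an illustration, not any particular console's implementation:

```python
import math

def constant_power_pan(sample, pan):
    """Split a mono sample into left/right using a constant-power
    pan law. pan ranges from -1.0 (hard left) through 0.0 (centre)
    to +1.0 (hard right)."""
    angle = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    left = sample * math.cos(angle)
    right = sample * math.sin(angle)
    return left, right

# At centre, each channel gets ~0.707 (-3dB) of the signal, so the
# perceived loudness stays roughly constant as a source is swept.
print(constant_power_pan(1.0, 0.0))
```

Note that this creates only a level difference; as the post above says, modern mixes typically support the pan pot with some form of timing information as well.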

4. Essentially true, although the main part is not so much how it gets to the brain but more about what the brain does with the information it receives.
4a. Not really, ILD is part of a HRTF, so that's a bit like asking: does a car swamp the effects of its engine? The fundamental dispute is that 71dB believes (and has stated) that ILD is the ONLY important parameter of music recording creation and reproduction, and that unless it is within the boundaries of ILD levels which would occur naturally it is fundamentally wrong/improper and effectively due to the ignorance of the music engineers. The specific argument about HRTF and ILD is that 71dB misrepresented ILD measurements as HRTF measurements, in order to increase the importance of ILD and justify his obsession with it. That was a deliberate lie because he knows very well the difference between ILD and HRTF. Eventually, after I called him out on it several times, he still wouldn't openly admit he was lying but conceded that it was an "oversimplification"!

G
 
Jun 20, 2018 at 2:50 PM Post #56 of 60
That was awesome! Thank you so much for taking the time and effort, and it was a tremendous amount of learning for me in a very short time. I believe I understood every word of it. :)

 
Jun 20, 2018 at 6:58 PM Post #57 of 60
I see gregorio has been working hard here to discredit me on a comical level while I was watching Spielberg's The Post on Blu-ray. :laughing:

The fundamental dispute is that 71dB believes (and has stated) that ILD is the ONLY important parameter of music recording creation and reproduction and unless it is within the boundaries of ILD levels which would occur naturally then it is fundamentally wrong/improper and is effectively due to the ignorance of the music engineers.

G

I don't think I have ever said that, but someone like you may get that impression, because to me incorrect ILD is the most harmful part of HRTF in headphone listening. Not the only one, but the most problematic.
 
Jun 20, 2018 at 7:07 PM Post #58 of 60
No problem. I'm not here to take sides and I'm not here to argue. I'm here to have fun and to learn. :)

 
Jun 20, 2018 at 7:41 PM Post #59 of 60
Thanks for the informative posts, Gregorio.
 
Jun 20, 2018 at 9:35 PM Post #60 of 60
The basic principle of panning is simplified ILD, set by ear.

I remember reading some paper where ITD was almost irrelevant when it came to subjects trying to find the lateral direction of a sound source. Not that ITD alone can't help locate things, it sure does; simply that the accuracy from having only ILD cues was basically the same as having ILD+ITD.
Which is something audio engineers must have known for a long time, to decide that a pan knob per track was good enough to make stereo out of mono.

ITD dominates spatial hearing below 800 Hz and ILD above 1600 Hz. From about 4000 Hz up, the pinna also starts to create important ISD effects. It's important to notice that below 800 Hz, ILD serves as a proximity cue: higher ILD implies a close sound (while other spatial cues such as reverberation or context may imply a distant sound!). That's why we need to control ILD below 800 Hz too, and that's why excessive ILD below 800 Hz can be such a nuisance with headphones.
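The crossfeed idea discussed throughout the thread can be sketched in a few lines: feed a low-passed, attenuated copy of each channel into the opposite one, which reduces ILD mainly below the cutoff, loosely mimicking the acoustic crosstalk of speakers in a room. This is a minimal illustration only; the 700 Hz cutoff and -6dB crossfeed gain are arbitrary example values, not any particular commercial crossfeed design:

```python
import math

def one_pole_lowpass(signal, cutoff_hz, fs):
    """Very simple one-pole IIR low-pass filter over a list of samples."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / fs)
    y, out = 0.0, []
    for x in signal:
        y = (1.0 - a) * x + a * y
        out.append(y)
    return out

def crossfeed(left, right, fs=44100, cutoff_hz=700.0, gain_db=-6.0):
    """Mix a low-passed, attenuated copy of each channel into the
    other channel, reducing low-frequency ILD on headphones."""
    g = 10.0 ** (gain_db / 20.0)
    lp_l = one_pole_lowpass(left, cutoff_hz, fs)
    lp_r = one_pole_lowpass(right, cutoff_hz, fs)
    out_l = [l + g * r for l, r in zip(left, lp_r)]
    out_r = [r + g * l for r, l in zip(right, lp_l)]
    return out_l, out_r

# A hard-left low-frequency signal: after crossfeed the right channel
# receives an attenuated copy, so the extreme ILD is reduced.
out_l, out_r = crossfeed([1.0] * 2000, [0.0] * 2000)
```

High-frequency content is largely left alone by the low-pass, which matches the thread's point that it's mainly the excessive low-frequency ILD that is problematic on headphones.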
 
