To crossfeed or not to crossfeed? That is the question...
Jan 11, 2018 at 12:16 PM Post #571 of 2,146
The nice thing about messed up spatial timing info in a mix is that when you play it back on speakers in your home, the sound of your own room gets overlaid, creating a layer of unified spatial timing that can mask some of the confusion inherent in the mix. That's an element that you can't predict in the studio, but it can go a good distance to making a mix sound even better.
 
Jan 11, 2018 at 12:44 PM Post #572 of 2,146
Following on from what I've just stated: in the case of acoustic genres such as orchestral music, where an illusion/perception of reality is a serious concern, whether or not crossfeeding is beneficial will depend on these variables: the various mic placements used to make the recording in the first place, the "careful adjustments" made during mixing, the further adjustments made when checking the mix on HPs, and your personal perception/preference. From all this we can make certain statements/deductions:

1. One thing is certain, crossfeeding cannot and is NOT correcting/fixing "spatial distortion", it's there, baked into the recording and cannot be un-baked! All we're talking about therefore is just different presentations of that spatial distortion, not about a type of presentation which doesn't have spatial distortion.
2. While one may have a personal preference for crossfeed, it is likely to be contrary to the intent of the engineers/artists and to "fidelity", assuming the mix has been checked/adjusted on HPs. Unless of course that checking/adjusting was done with crossfeed but that would be exceptionally rare.
3. Anyone who believes/perceives that spatial distortion ceases to exist, disappears or is fixed by crossfeed is deluded and by definition NOT spatially enlightened but the exact opposite! Now, as it's all based on illusion/delusion in the first place, with any type of presentation, that's not as outrageously insulting as it appears. Nevertheless, in direct response to the quote: if you are ONLY able to "recognise and understand the benefits of crossfeed" but not able to recognise or understand its disadvantages, and not able to recognise and understand that you've still got spatial distortion, then you are CLEARLY NOT "spatially enlightened", you are (and must be) DELUDED!!! So again, enough with the "I'm spatially enlightened" BS, you're not, you're actually spatially deluded but just don't (and/or won't) realise it!

G
1. If spatial distortion is defined as excessive ILD / ITD information, then crossfeed is able to reduce/remove spatial distortion. If you reduce ILD at low frequencies by 3-5 dB and by about 10 dB at 1 kHz, so that the ILD makes sense to human spatial hearing, you don't have spatial distortion.

If spatial distortion is defined as also containing other spatial aspects, then depending on what those aspects are, it's possible crossfeed is unable to address them. For me excessive ILD / ITD is THE problem of headphone listening, and fixing that problem is what I am after with crossfeed. The other aspects, whatever they are, do not matter because they don't ruin my listening enjoyment.

2. Nowadays mixes are probably "checked" * on HPs, but hardly in the 70's, when people hard panned for speakers and that's it. Even today some mild crossfeed is beneficial to clean up the sound, because speakers are favored. Why should we have 100% trust in the intentions of all sound engineers? Are they gods or normal human beings? Engineers may have tremendous knowledge and skills in many areas, but my claim is that knowing how to mix for headphones isn't their strongest virtue in general.

* What is checked isn't so much spatiality, but for example the sonic balance between various instruments.

3. I define spatial distortion so that crossfeed removes it. In my book, spatial distortion is spatial confusion created by the brain due to excessive spatial information. To me it is not about the difference between the intentions in the studio and what I hear. I don't know what they intended. I only know what I hear, and my brain evaluates how much it makes sense. If the ILD at 200 Hz is 2 dB it makes sense; if it is 12 dB it doesn't make sense, no matter what the engineers intended in the studio. If it doesn't make sense I don't enjoy listening, and I either stop listening or activate crossfeed to make it make sense.
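As a purely illustrative sketch of the ILD scaling being described here, below is a minimal first-order crossfeed plus a band-limited ILD meter in Python/SciPy. The 700 Hz cutoff and -6 dB feed gain are assumptions chosen for the example, not the parameters of any particular crossfeed implementation:

```python
import numpy as np
from scipy.signal import butter, lfilter, sosfilt

def crossfeed(left, right, fs, cutoff=700.0, feed_db=-6.0):
    """Blend a low-passed, attenuated copy of each channel into the other.

    Below the cutoff the channels are mixed strongly (large ILD reduction);
    highs stay mostly separated, roughly mimicking head shadowing.
    """
    b, a = butter(1, cutoff, fs=fs)      # 1st-order low-pass
    g = 10.0 ** (feed_db / 20.0)         # linear feed gain
    return left + g * lfilter(b, a, right), right + g * lfilter(b, a, left)

def band_ild_db(left, right, fs, f_lo, f_hi):
    """RMS level difference (dB, left re right) within one frequency band."""
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    rms = lambda x: np.sqrt(np.mean(sosfilt(sos, x) ** 2)) + 1e-12
    return 20.0 * np.log10(rms(left) / rms(right))
```

With a 200 Hz tone panned 12 dB toward the left, this crossfeed brings the measured low-band ILD down to a few dB, while a 4 kHz tone keeps most of its original separation, i.e. exactly the frequency-dependent scaling described above.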

Audio technology isn't even close to ready to let us hear exactly what the engineers intended in the studio. We'd need VERY accurate soundfield synthesis around the listener's head. We don't have that. What we have are means to reproduce recordings to a certain level of accuracy, allowing a very enjoyable listening experience.

If you think listening without crossfeed is somehow what was intended, then which headphone model gives the correct version? All headphones render the spatiality differently.

I think I know what aspects of sound we can address technologically and what aspects are relevant, and you call me deluded. Life sucks and then you die. With crossfeed it sucks less. That's why I use it.
 
Jan 11, 2018 at 1:08 PM Post #573 of 2,146
I define spatial distortion so that crossfeed removes it.

I would like to define the word "debt" so that my bank account could remove it!
 
Jan 11, 2018 at 2:00 PM Post #574 of 2,146
I would like to define the word "debt" so that my bank account could remove it!

Let me guess: you are in debt because of your "better than headphones" loudspeakers? :beyersmile:

Headphone listening suffers from the problem of excessive spatial information. To solve this problem we need to scale the spatial information so that it is not excessive. Crossfeed does that. So, there's a problem and there is a solution for it. I just call this problem "spatial distortion", and that's why crossfeed is conveniently the solution to it. I also call the stuff that takes my hunger away after swallowing it "food". The thing that removes your debt is called "gigantic luck in the lottery/stock market."
 
Jan 11, 2018 at 3:48 PM Post #575 of 2,146
If spatial distortion is defined as excessive ILD / ITD information then crossfeed is able to reduce/remove spatial distortion.
For me excessive ILD / ITD is THE problem of headphone listening, and fixing that problem is what I am after with crossfeed. The other aspects, whatever they are, do not matter because they don't ruin my listening enjoyment.

Thank you for so excellently proving my point! As you've clearly stated, it's all about your listening enjoyment, your perception! What "ruins" YOUR PERCEPTION and what doesn't. ... If we do indeed define spatial distortion as excessive timing delay, then there are massive time delays between all the 30+ mics used to record an orchestra, and crossfeed cannot possibly remove/reduce these without knowing:

How many mics were used in the mix? What was the distance between them all? And what timing adjustments have the engineers already applied to all those mic inputs?

You do not know the answer to ANY of these questions, your crossfeed algorithm doesn't know the answer to ANY of these questions, and even if it knew the answers to all of them, it still couldn't deconstruct the recording and apply a correction/fix to all those mic inputs. So no, it's complete nonsense to state that crossfeed is able to reduce/remove the spatial distortion! In reality, your crossfeed isn't even attempting to address the excessive timing delays/spatial distortion; it's instead just trying to address the tiny timing delay between your ears!

What you've done is create your own definition of "spatial distortion" (which is what bigshot was referring to), a definition based SOLELY on YOUR PERSONAL PERCEPTION. Your definition appears to be that "spatial distortion" is whatever "ruins" YOUR PERCEPTION, and when YOUR PERCEPTION is not "ruined" there's no spatial distortion. In reality, as clearly explained, there is always tons of spatial distortion, but it doesn't seem to "ruin" your "listening enjoyment" because you can't hear it (are SPATIALLY IGNORANT of it)! Now, that's fine, that's just your inability to hear spatial distortion and therefore your preference. I've no objection to your preference, I just don't share it (because I can hear the spatial distortion). But I'm getting sick and tired of you making up nonsense facts and definitions, making up nonsense assertions about engineers when you obviously don't even know the first thing about engineering, and insulting everyone who doesn't share your spatial ignorance and preference!

G
 
Jan 11, 2018 at 4:13 PM Post #576 of 2,146
WELL! Isn't THAT spatial!
 
Jan 11, 2018 at 4:15 PM Post #577 of 2,146
Headphone listening suffers from the problem of excessive spatial information.
The qualification and evaluation of this as a "problem" is highly subjective, anything but absolute.
To solve this problem we need to scale the spatial information so that it is not excessive. Crossfeed does that. So, there's a problem and there is a solution for it.
Cross-feed is not the solution. It's a mitigation tool at best, the application of which is not universally accepted (example: your 98% preference, my preference is almost the inverse).
I just call this problem "spatial distortion" and that's why crossfeed is conveniently the solution to it.
Your definition is not universally accepted; cross-feed is not "the solution", and is not universally accepted either.
I also call the stuff that takes my hunger away after swallowing it "food". The thing that removes your debt is called "gigantic luck in the lottery/stock market."
You didn't make up "food"; it's a universally accepted, proven solution to the "problem" of hunger, and an essential element in the prevention of death. Its benefits have been proven.

There are other far more acceptable and effective solutions to "debt", but the concept of debt is also not made up by you, and is universally accepted.
 
Jan 11, 2018 at 5:49 PM Post #578 of 2,146
Thank you for so excellently proving my point! As you've clearly stated, it's all about your listening enjoyment, your perception! (...) In reality, your crossfeed isn't even attempting to address the excessive timing delays/spatial distortion, it's instead just trying to address the tiny timing delay between your ears! (...) I've no objection to your preference, I just don't share it (because I can hear the spatial distortion).

G
How does a pair of speakers know how to play 30+ mics? How do acoustic crossfeed and room acoustics know? They don't. So why is not knowing only a problem when it's crossfeed?
 
Jan 11, 2018 at 7:00 PM Post #579 of 2,146
I agree that the majority of the time we are trying to make it "better than real".

However, there are some music genres where this isn't the case, effectively where we're trying to make it better so that it does sound real.

Quite often in audiophile discussions the topic is brought around to the comparison of a live acoustic performance, such as orchestral music, with a recorded equivalent.

The problem here is quite different to the "better than [and not even directly concerned with] real" which is the case with the non-acoustic genres.

In the case of acoustic genres such as orchestral, I would re-word the part I've highlighted in bold to: "The result would often not appear to be entirely realistic or very exciting, because what we hear at an orchestral concert is not real in the first place!" - What actually enters our ears and what we perceive are two different things. Our brain will filter/reduce what it thinks is irrelevant, such as the constant noise floor of the audience for example, and increase the level of what it thinks is most important, such as what we are looking at (the instrument/s with the solo line for example).

This isn't "real" at all, although of course it feels entirely real. Clearly, even with a theoretically perfect capture system, all we're going to record is the real sound waves but when reproduced, the brain is generally not going to perceive those sound waves as it would in the live performance because the visual cues and other biases which informed that perception are entirely different.

So, the trend over the decades has been to create an orchestral music product which sounds realistic relative to human perception, rather than just accurately capturing the sound waves which would enter one's ears. To achieve this we use elaborate mic'ing setups which allow us to alter the relative levels of various parts of the orchestra in mixing (as our perception would in the live performance).

However, a consequence of this is messed-up timing, as sound wave arrival times are going to vary between all the different mics (which are necessarily in significantly different positions). This is an unavoidable trade-off: we're always going to get messed-up spatial information, but with careful adjustment during mixing we can hopefully end up with a mix which is not perceived to be too spatially messed-up (even though it still is).
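A back-of-the-envelope sketch shows the scale of this timing problem (the mic distances here are hypothetical, chosen only for illustration):

```python
SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at ~20 °C

def arrival_delay_ms(distance_m):
    """Propagation time, in milliseconds, over distance_m metres of air."""
    return distance_m / SPEED_OF_SOUND_M_S * 1000.0

# A spot mic 1 m from a soloist vs. a main pair 8 m away: the same note
# arrives at the two mics roughly 20 ms apart.
spot_to_main_ms = arrival_delay_ms(8.0) - arrival_delay_ms(1.0)

# By contrast, the largest interaural time difference a human head can
# produce (a sound path of roughly 0.23 m around the head) is well under
# a millisecond.
max_itd_ms = arrival_delay_ms(0.23)
```

So the inter-mic offsets baked into a multi-mic'ed mix are tens of milliseconds, one to two orders of magnitude larger than the sub-millisecond interaural delay that a crossfeed circuit models.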

This "careful adjustment" is done mainly on speakers but is typically checked on HPs and further adjustments may be made if the illusion/perception of not being spatially messed-up is considered to be too negatively affected by HP presentation.

This brings me back to what I stated previously, that pretty much whatever we listen to and however we're listening to it (speakers, HPs, HPs with crossfeed, etc.) we've always got messed-up timing, "spatial distortion" or whatever else you want to call it.

PS. I know you're probably aware of all this already bigshot.

(...)

G

Thought experiment:

Imagine that you record an orchestra with an eigenmic (32 capsules) placed at row A, seat 2, and that you convolve the highest possible number of virtual speakers with a high-density HRTF. At row A, seat 3, there is a listener who was born blind. At row A, seat 1, there is a viewer with normal eyesight. Finally, at row B, seat 2, there is a listener who recently acquired blindness. Full audience.

Questions:

Are you saying that the viewer with normal eyesight would only perceive, with headphone playback, a soundfield identical to the one he/she heard live if, and only if, he/she uses a “perfect” virtual reality headset displaying images from where he/she was seated?

Are you saying that blind listeners cannot precisely locate sounds at the live event, for instance, identify where the soloist is playing?

Are you saying that only blind listeners would perceive, with headphone playback, a soundfield identical to the one they heard live?

Are you saying that the accuracy of locating sounds (at least in the horizontal plane) differs between a blind listener and a blindfolded viewer who has normal eyesight?

Are you saying that blind listeners are not capable of sound selective attention (cocktail party effect)?

Do you think that the born blind listener and the listener that recently acquired blindness will achieve different sound location accuracy?

I agree that vision can in some circumstances override sound cues. I also agree that vision is normally the sense that allows you to train your brain to locate sound sources with your ears, and that you can retrain your brain if your vision does not match your sound cues.

But I don’t know if that is the only route to create a virtual soundfield map in your brain (or maybe is it a neural network physical simulacrum of a soundfield map?).

Someone who was born blind can walk to his mother when she is calling “my angel”. Some are capable of echolocation. Some play blind soccer.

But I don’t know if all psychoacoustic processing phenomena are caused by visual and sound cue ambiguities.

Are you sure you can claim that?
 
Jan 11, 2018 at 7:07 PM Post #580 of 2,146
How does a pair of speakers know how to play 30+ mics? How do acoustic crossfeed and room acoustics know? They don't. So why is not knowing only a problem when it's crossfeed?
Are you seriously asking this?

OK...I'll break it down simply:

1. Mixes are performed on speakers in rooms.
2. The engineer responds to what he hears by using the tools he has. His judgements are based on listening to speakers in a room.
3. There is no such thing as "acoustic cross-feed". You're making that one up.
4. The engineer hears both speakers in his room with both ears, and therefore knows what he hears, responds, and mixes accordingly. The monitoring environment is known by virtue of the fact that he's in it and using it to make mix decisions...yeah, of all 30 (or whatever) mics.
5. While the rooms used for mixing are mostly better acoustically than the typical home listening room, they are not all that dissimilar either.
6. Today's mixes are checked on headphones. If it's completely wrong, there will be a change. If it's acceptable, there won't be. Headphones comprise a very significant portion of the total listening environments.

On the other hand, when you apply headphone cross-feed you:

1. Have no idea what the intentions were when the recording was made
2. Have no idea how sources were mixed and placed, how many there were, or what the ITD or ILD is.
3. There are so many different ITDs and ILDs in use that there's no single set of ITD/ILD figures to work with
4. You are applying highly generalized cross-feed according to personal taste. That's not compensation or correction at all. It's preference.

And on the other hand, you have different fingers.
 
Jan 11, 2018 at 7:13 PM Post #581 of 2,146
Thought experiment:

Imagine that you record an orchestra with an eigenmic (32 capsules) placed at row A, seat 2, and that you convolve the highest possible number of virtual speakers with a high-density HRTF. (...)

Are you saying that blind listeners cannot precisely locate sounds at the live event, for instance, identify where the soloist is playing? (...)

But I don’t know if all psychoacoustic processing phenomena are caused by visual and sound cue ambiguities.

Are you sure you can claim that?
Perhaps we should be asking you the question: "Is gregorio actually saying any of that?" Or do you need a lesson in reading comprehension?

Please don't dumb down the discussion by putting words in people's mouths that they've never said or implied. I know it's tempting because the thread is so dumb already, but resist...resist....resist.
 
Jan 11, 2018 at 7:13 PM Post #582 of 2,146
The qualification and evaluation of this as a "problem" is highly subjective, anything but absolute.
Cross-feed is not the solution. It's a mitigation tool at best, the application of which is not universally accepted (example: your 98% preference, my preference is almost the inverse).
Your definition is not universally accepted, cross-feed is not "the solution", and not universally accepted either. You didn't make up "food"; it's a universally accepted, proven solution to the "problem" of hunger, and an essential element in the prevention of death. Its benefits have been proven. There are other far more acceptable and effective solutions to "debt", but the concept of debt is also not made up by you, and is universally accepted. Yada Yada Yada...

Your resistance to my posts is tiring. I happen to suffer from low self-esteem, and 90% of the time I read your responses I feel insecure about myself. That's why I don't fight back as much as I should, but now I feel it's time. Just because you have worked 188 years with Elton John doesn't mean you know/understand everything better than I do. I have studied these things quite a lot as a hobby. You, on the other hand, demonstrate a lack of understanding of spatial hearing every now and then, for example your disbelief that reducing channel separation can make the sound image wider. Any hard-panned ping-pong album from hell is an "artistic statement" to you, not to be questioned or corrected with crossfeed.

For me it doesn't matter how "universally" something is accepted. Masses are wrong all the time, and the spatiality of headphone listening is badly handled in the audio community, a totally ignored field that only a handful of people tackle, and we who try to tackle it face people like you.

Excessive stereo separation is overwhelmingly the biggest problem in headphone listening, and crossfeed does fix it. Think about your opinions for a minute, man. You refute my logically and scientifically sound claims just because you learned to listen to your music hard-panned in the 70's? Is that intellectually honest? I learned to listen to my music with headphones without crossfeed too. I listened without crossfeed for years because I was spatially ignorant, but then it suddenly occurred to me that it's wrong! It's crazy! It means excessive stereo separation. I found crossfeed and learned to listen to headphones the right way, and it revolutionized my music listening habits and my enjoyment of music. All it took was my education, my tendency to question things and an open mind. The only weird thing is that it took me so long to realize the existence of spatial distortion.
 
Jan 11, 2018 at 7:27 PM Post #583 of 2,146
Your resistance to my posts is tiring.
Apparently, not quite tiring enough.
I happen to suffer from low self-esteem, and 90% of the time I read your responses I feel insecure about myself. That's why I don't fight back as much as I should, but now I feel it's time. Just because you have worked 188 years with Elton John doesn't mean you know/understand everything better than I do. I have studied these things quite a lot as a hobby. You, on the other hand, demonstrate a lack of understanding of spatial hearing every now and then, for example your disbelief that reducing channel separation can make the sound image wider. Any hard-panned ping-pong album from hell is an "artistic statement" to you, not to be questioned or corrected with crossfeed.
My point is, and has always been, it's not a question of if it is artistic or not, the point is you don't know, but have taken it upon yourself to decide for the world.
For me it doesn't matter how "universally" something is accepted. Masses are wrong all the time, and the spatiality of headphone listening is badly handled in the audio community, a totally ignored field that only a handful of people tackle, and we who try to tackle it face people like you.
I give you push-back for one reason: you leave absolutely no room for preference, your decisions and determinations are final, anyone else is wrong. Get it?
Excessive stereo separation is overwhelmingly the biggest problem in headphone listening, and crossfeed does fix it.
With all due respect (which has frankly diminished a tad), I disagree. Cross-feed doesn't "fix" it; it mitigates it sometimes, not so much at other times. The overwhelmingly biggest problem with headphone listening is erratic frequency response.
Think about your opinions for a minute, man. You refute my logically and scientifically sound claims just because you learned to listen to your music hard-panned in the 70's? Is that intellectually honest?
That's just your opinion. I hardly listen to any of that music much now, and yet I still can't find many applications for cross-feed. That's my opinion.
I learned to listen to my music with headphones without crossfeed too. I listened without crossfeed for years because I was spatially ignorant, but then it suddenly occurred to me that it's wrong! It's crazy! It means excessive stereo separation. I found crossfeed and learned to listen to headphones the right way, and it revolutionized my music listening habits and my enjoyment of music. All it took was my education, my tendency to question things and an open mind. The only weird thing is that it took me so long to realize the existence of spatial distortion.
So, then, why is it that I've spent literally days attempting to listen to cross-feed of various types and intensities, on various recordings, and yet I just don't respond the same way you do? We both have the reference of listening to speakers, and listening to the real world around us. I find cross-feed almost universally flattens the dimensional life out of recordings, makes them less involving, less immersive, less fun.

You gave it a shot and like it. I gave it a shot and mostly, with a few exceptions, don't like it. I don't force my opinions on anyone as fact. I DO counter your radical posts of opinion-as-fact because I feel the free world should be offered an opportunity to decide for themselves what's right in a situation where there is no overwhelming support for either side. I counter you so we can achieve balance and fairness. If I didn't, we might have a whole lot of readers who try cross-feed because of reading this thread, and both of them would wonder why they remain so spatially ignorant because they don't like it. How is that helping?
 
Jan 11, 2018 at 7:55 PM Post #584 of 2,146
Perhaps we should be asking you the question: "Is gregorio actually saying any of that?" Or do you need a lesson in reading comprehension?

Please don't dumb-down the discussion by putting words in peoples mouths that they've never said or implied. I know it's tempting because the thread is so dumb already, but resist...resist....resist.

This is what gregorio wrote:

(...)
2b. No, I am not saying acoustic virtual reality is a myth! I'm not sure where you've got that from? I am saying that because with popular music genres there is no "reality" to start with, then logically it's obviously impossible to emulate a reality which never existed. So, we cannot have a virtual reality of popular music, although we could in theory have a sort of "virtual non-reality" or "virtual surreality" but it's not clear how we could achieve even that in practice without musical compromises and avoiding it being no more than just a cheesy gimmick (as with some early stereo popular music mixes).

3. To be honest, your questions, conclusions and statements indicate that you have relatively little understanding of our work. We do not "add value" ... putting a chassis, wheels and suspension on a car does not "add value" to a car because without a chassis, wheels and suspension you don't have a car in the first place, just an incomplete pile of car parts! Engineering is an intrinsic part of the creation of all popular music genres, not an added value. For example ...
(...)
5. Clearly you are wrong and driven by myth as far as music is concerned, even acoustic music genres, although to a lesser degree. You are also somewhat wrong and driven by myth as far as most commercial sound in general is concerned. What you've presented here is not "a layman's guide to immersive sound" but a hypothesis of what theoretically might occur in the future, and it's a distant "might" because apparently without realising it, you're not just talking about technicalities of sound reproduction but a huge change in the art underlying music, a change to something new, as yet undiscovered, and at the cost of abandoning the art we currently have and have had. If we look back in history, we see that the change from mono to stereo occurred gradually, but once there was a decent installed user base of stereo, the popular music genres evolved to take advantage of it, even to the point of becoming reliant on it. Then we got 5.1 about 25 years ago and have had a decent installed user base for about 15 years or so, but beyond a relatively few experimental albums, we've seen none of the huge music genre evolution to take advantage of 5.1 which we saw with the change from mono to stereo. Now you're talking about another big evolutionary step beyond 5.1, while the music itself hasn't even evolved beyond stereo yet and shows no signs of doing so!

G

So when I wrote about crosstalk cancellation filters, beamforming phased arrays of transducers and headphone externalization, I wrote something that might theoretically occur, and I was wrong.

But now he writes about acoustic music genres and the problem is mainly in visual cues?

So tell me, which is the worse problem: “visual cues and other biases” “in the live performance”, or acoustic crosstalk in playback?

I am not trying to put words in his mouth. Sometimes an absurd argument is useful to express a mild idea.

What I am trying to say, respectfully, is that mixing without carefully considering ITD is a potential problem.

You say no because stereo acoustic crosstalk with speakers is ubiquitous; it happens in any “loudspeakers in a room” listening environment.

Fine, but you don’t have to rage at what I wrote.

Did I really dumb down the discussion?

I will refrain from posting at all, then.

I have reading comprehension issues. And I am delusional.
 
Jan 11, 2018 at 9:04 PM Post #585 of 2,146
This is what gregorio wrote:

(...)

So when I wrote about crosstalk cancellation filters, beamforming phased arrays of transducers and headphone externalization, I wrote something that might theoretically occur, and I was wrong. (...) Did I really dumb down the discussion? I will refrain from posting at all, then. I have reading comprehension issues. And I am delusional.
Somehow you missed his point and instead focused on the point that he correctly made regarding visual reinforcement of spatial hearing, but then took it out of its balanced context and carried it over to the ridiculous. Those "Are you saying..." questions were way out of context.

I'm not raging; I'm asking you not to blow things out of proportion or take minor points out of context. This is a challenged thread that needs no more confusion or interference.

I'm not saying you should refrain from posting. That's also a polar extreme. Just keep it real.
 
