To crossfeed or not to crossfeed? That is the question...

Oct 13, 2022 at 12:47 PM Post #2,026 of 2,192
You're talking to a guy who's been manipulating sounds to match video scenes for a living. He doesn't need simplifications or to have you explain M/S (seriously, how hard did you misread his posts?). Any simplification you made is exactly what got you in trouble from the get-go. Because this has been a discussion about very specific variables and techniques between people who, at the very least, learned the basics about them (counting myself in), things got elevated compared to the usual "let's talk an ignorant guy out of something false by simplifying reality until we can hopefully get down to his level" kind of discussion. Because of that, more accuracy is expected, not less.
This new road you're taking about M/S looks like self-harm to me. I cannot imagine this having any potential for a happy ending.

No matter what, crossfeed not being an HRTF or speaker simulator, some variables are even more wrong/added/false/missing than with a simulation attempt. It's a fact, and everybody here accepts it. When we discuss facts in this forum, we can never tell people that they don't feel or prefer what they feel and prefer. We argue that they misunderstood the cause of it. This applies here too.




For me the panning or placement of the instrument doesn't change, so your experience of 180° panning on crossfeed after a while is interesting.
I do get 60° initially. I imagine that my brain is already convinced it's a headphone and that a given song must have that instrument all the way to the side. Over time it progressively compensates until I get there. It's the only explanation I can think of, but maybe it's something else?

I got some interesting results using head tracking, be it the A16 or the Waves 3D crap with the tracker I got on Kickstarter years ago (using some generic HRTF based on the size of the head, so most HRTF cues were still chaos for me, but not as much as fixed dummy head chaos). I kept a stable panning over time because I moved enough to re-calibrate into the effect (like turning crossfeed on and off often will immediately make me feel 60° when it's ON). But if I rest my head on something and pay attention not to move, I still end up progressively spreading instruments to the sides and killing all distance for the center. Even with my custom impulses on the A16. Change is what keeps me in the dream.

The other extreme I talked about was spending months using a laptop placed on the side, with another keyboard and a bigger screen in front of me. I would often listen to YouTube videos directly through the integrated tweeters of the computer on the side. After months of doing that occasionally, I started feeling like the sound was coming from the screen. It never unbalanced anything else: not headphones, not daily life, and not my actual speakers in the other room. Another sign that my eyes do most of the listening and that my brain knows what it's doing when it's messing with my senses.

I don't claim to be the common average guy, I usually am not, and generic 3D solutions have always been very bad for me (that includes binaural recordings made with a dummy head).
From the Realiser thread I learned that I clearly prioritize vision to make sense of sounds even more than the average guy does. Probably why I noticed early on how much I could fool myself with eye candy, and why I got interested in controlled tests.
 
Oct 13, 2022 at 1:44 PM Post #2,027 of 2,192
I think his perception is unable to compensate. If it's a little bit off, his subjective impression is focused on the error and he can't ignore it. It isn't something he can just tune out. This is why he's so insistent on this issue. Unfortunately, I suspect that hearing this way leads to a lot of parallel parking. While a normal person can get close to the sweet spot and it's fine because he naturally compensates the rest of the way, 71dB's target is very narrow, he has no way to compensate, and it may even wander, forcing him to constantly adjust to try to compensate. If crossfeed takes a little bit of a curse off this hyper focus, that is good for him though.

But this doesn't appear to be a matter of spatial cues or enhanced "realism". It's more a matter of psychoacoustics being used to help a perceptual problem. His problem is basing all of his theories on a test subject group of one.
 
Last edited:
Oct 13, 2022 at 5:20 PM Post #2,028 of 2,192
No matter what, crossfeed not being an HRTF or speaker simulator, some variables are even more wrong/added/false/missing than with a simulation attempt.
I am not comparing crossfeed to HRTF based simulations when talking about benefits or improvement. I assume the alternative to crossfeed to be nothing (just headphone sound as it is). If it is crossfeed vs HRTF based simulations then of course the latter wins.
 
Oct 14, 2022 at 3:02 AM Post #2,029 of 2,192
With crossfeed I perceive a narrower soundstage, so more like the width of an orchestra from the ideal listening position but without the distance or the higher ratio of reverb. The bass also appears different but not more similar to a real life scenario and not as linearly/predictably as without crossfeed. Sometimes I get an EQ notch type effect in the bass, sometimes the bass sounds artificially louder, sometimes I get the bass component of a sound/instrument within the mix in a slightly different location to the higher freq components, which I find particularly annoying and doesn’t appear correlated with the crossfeed freq. I also get unpredictable effects with the location and FR of ERs/reverb. In general it’s more blurred, less spatially coherent, less stable and more unpredictable. It’s also still always all inside my head.
I don't get significant artifacts with bass. The phase difference of crossfeed is not large enough to create notch-type canceling at bass, and at higher frequencies, where the phase difference is large enough for that, the crossfeed level starts to drop, so out-of-phase canceling remains weak. To me bass with large ILD sounds utterly unnatural and annoying. Any theoretical artifact introduced by crossfeed is insignificant in comparison, and similar artifacts also happen with acoustic crossfeed with speakers (and nobody cares). For me crossfeed bass isn't 100% "real", but it is definitely much closer to real than without crossfeed. My "wide" crossfeed gives the most real-feeling bass, but the price is that the sound lacks depth compared to default crossfeed.
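The weak-cancellation argument can be sanity-checked numerically. The sketch below uses assumed, generic crossfeed parameters (-6 dB level, ~300 µs interaural delay, ~700 Hz first-order low-pass on the crossfed path), not the settings of any particular implementation:

```python
import math

# Assumed, illustrative crossfeed parameters (not any specific product):
DELAY = 300e-6        # crossfeed delay in seconds
G0 = 10 ** (-6 / 20)  # crossfeed level at DC, -6 dB as linear gain
FC = 700.0            # low-pass cutoff on the crossfeed path, Hz

def combined_level_db(f):
    """Level change when the delayed, low-passed crossfeed sums with the direct signal."""
    g = G0 / math.sqrt(1 + (f / FC) ** 2)  # crossfeed gain falls with frequency
    phase = 2 * math.pi * f * DELAY        # phase lag from the interaural delay
    re = 1 + g * math.cos(phase)
    im = g * math.sin(phase)
    return 20 * math.log10(math.hypot(re, im))

for f in (50, 200, 1667, 5000):
    print(f"{f:5d} Hz: {combined_level_db(f):+.2f} dB")
```

With these assumptions, bass sums almost fully in phase (about +3.5 dB at 50 Hz), and even at 1667 Hz, where the delay puts the crossfed signal exactly out of phase, the low-passed crossfeed level has dropped enough that the notch is under 2 dB.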

Maybe it’s because I spent a lot of time actually sitting inside orchestras that I don’t mind that extreme width/separation and don’t find it unnatural.
Sounds being located left and right is not the problem for me. At bass, the ILD for sound coming from a 90° angle is about 3 dB and the ITD is about 640 µs.
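The ITD figure can be cross-checked with the classic Woodworth spherical-head approximation, ITD = (r/c)(θ + sin θ). The head radius below is an assumed average, not a measured value:

```python
import math

# Woodworth spherical-head model: ITD = (r / c) * (theta + sin(theta))
r = 0.0875            # assumed average head radius in metres
c = 343.0             # speed of sound in m/s
theta = math.pi / 2   # source at 90 degrees to the side

itd_us = (r / c) * (theta + math.sin(theta)) * 1e6
print(f"ITD at 90 degrees: {itd_us:.0f} us")  # prints roughly 656 us
```

That lands within a few percent of the ~640 µs quoted above; the exact number depends on the head radius assumed.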

Without crossfeed is far from ideal, it would be good to get it outside my head, get more depth, have more representative bass and not to have those occasional anomalies but even with all these failings, it’s still acceptable for me.
I have a hard time accepting it, because speakers and headphones are so fundamentally different. Only if the music has been somehow mixed for both, or is only for headphones (binaural), does it work without crossfeed (in fact, in these cases crossfeed only makes things worse).

I try to mix my own music for both speakers and headphones because it is interesting to create such "versatile" spatiality. So, ironically, I listen to my own music without crossfeed, but since so much of the music in the world is so poorly headphone-compatible, I have to use crossfeed.

With crossfeed, the narrower width without the greater distance is a conflict, as is the same sound in different locations. In addition to the less coherent reflections and other issues, it appears far less natural to me and is typically unacceptable. I can’t just sit and enjoy it, because I’m constantly trying to figure out what’s going on. I should mention there are exceptions and it’s often not as obviously “black and white”, depending on the mix (which can vary wildly). I have encountered recordings that I did prefer with crossfeed, but such exceptions are so rare that it’s not worth the effort.
Even amongst those like me who are not fans of crossfeed, I don’t assume they are going to experience the same as me. Some of what I described may be identical or similar for other non-fans, but they might not perceive or be consciously aware of the other things I’ve described, or even if they are aware, they might not be troubled by them, and it’s very likely some non-fans experience yet other effects that I don’t.

G

Interesting how differently we hear things. I wish I had known this before coming to this board, but I always learn things too late. I need to make mistakes in life in order to encounter the information that would have helped me prevent the mistake in the first place. Well, now I know that all the theories about spatiality I have in my head work only for me. As we learn from our mistakes, I may not make this kind of mistake again (I hope), but unfortunately there are many kinds of mistakes to make...
 
Oct 14, 2022 at 3:09 AM Post #2,030 of 2,192
I think for a lot of amps these days crossfeed is irrelevant because the crosstalk is so high. Probably 3/4 of the amps I have owned had significantly less channel separation than a professional SS amp. Some of the tube ones, I joke, are pseudo-mono, seeming about as wide as a vinyl record. It's great if you want your whole mix squished together between 10:00 and 2:00 and don't mind the masking or phase canceling.

The industry seems to have this flavor covered to the degree that it is hard to find true stereo output.
I have not used these tube amps, so I wouldn't know, but who knows how much crosstalk there is. The gear I use to drive my headphones has good channel separation, so if I want to reduce channel separation I need crossfeed for that.
 
Oct 14, 2022 at 4:30 AM Post #2,031 of 2,192
Now you're responding to posts by Gregorio that you've already replied to?
 
Oct 14, 2022 at 7:42 AM Post #2,032 of 2,192
Now you're responding to posts by Gregorio that you've already replied to?
I am responding to them in parts, because it is a lot of work.
 
Oct 14, 2022 at 8:14 AM Post #2,033 of 2,192
I think his perception is unable to compensate.
Whose perception is able to compensate? Do you use the crappiest audio gear yourself and just let your perception compensate for it to perceive awesome sound? To some extent my hearing can get used to things, but it is quite limited.

If it's a little bit off, his subjective impression is focused on the error and he can't ignore it. It isn't something he can just tune out.
I can't ignore it now that I know how much better headphone sound can be. I would rather listen to stereo recordings with "superstereo" mixed to mono than without crossfeed.

This is why he's so insistent on this issue.
Crossfeed has been dear to me for a decade and I have invested a lot of time in it. It has been devastating for me to realize that all I have is my personal increased enjoyment of headphone sound. That's all. I can't apply my understanding and knowledge to anyone other than myself! If crossfeed is not a topic I know something relevant about, then what relevant things do I know and understand? So, this has been my identity crisis for the last 5 years...

Unfortunately, I suspect that hearing this way leads to a lot of parallel parking. While a normal person can get close to the sweet spot and it's fine because he naturally compensates the rest of the way, 71dB's target is very narrow, he has no way to compensate, and it may even wander, forcing him to constantly adjust to try to compensate.
My perception doesn't wander. If the proper crossfeed level for a recording was -6 dB years ago, it is the same today.

If crossfeed takes a little bit of a curse off this hyper focus, that is good for him though.
It is not much different from changing from uncomfortable headphones to comfortable ones for me.

But this doesn't appear to be a matter of spatial cues or enhanced "realism". It's more a matter of psychoacoustics being used to help a perceptual problem.
As I see it, it is a matter of transforming the available spatial cues in the recording into a more digestible and natural form.

His problem is basing all of his theories on a test subject group of one.
I have underestimated the subjective side of spatial hearing. My university studies of the topic made it seem very objective, and my professor never warned me that people hear spatiality differently. Maybe he didn't know? It took years of my crossfeed hobby to hear from other reliable, knowledgeable people that they hear spatiality differently than I do myself. The only individual difference in spatial hearing I "knew" about was HRTF (everyone has their own), but such differences alone are not a problem for crossfeed, because crossfeed imitates HRTF only roughly. It is a very rough estimate of HRTF for everybody. Turns out there is a lot more than just the differences in HRTF between individuals...
 
Oct 14, 2022 at 8:16 AM Post #2,034 of 2,192
You are obsessed. Your posts are like kudzu. This isn’t normal, and it can’t end well.
 
Last edited:
Oct 14, 2022 at 8:19 AM Post #2,035 of 2,192
You're talking to a guy who's been manipulating sounds to match video scenes for a living, He doesn't need simplifications or to have you explain M/S (seriously, how hard did you misread his posts?).
Sorry if I have misread something. Gregorio responded to my post in a way that made me (to my surprise) feel he isn't familiar with the concept of M/S audio. Interestingly, he hasn't confirmed being familiar with the concept.
 
Oct 14, 2022 at 8:21 AM Post #2,036 of 2,192
You are obsessed. This isn’t normal, and it can’t end well.
Crossfeed is dear to me. You can call it an obsession if you want. Aren't we all obsessed with something?
 
Oct 14, 2022 at 8:31 AM Post #2,037 of 2,192
I would often listen to YouTube videos directly through the integrated tweeters of the computer on the side. After months of doing that occasionally, I started feeling like the sound was coming from the screen. It never unbalanced anything else: not headphones, not daily life, and not my actual speakers in the other room.
I presume you’ve probably seen a fair bit of the research on sound localisation and HRTFs. For the benefit of others if you have, there has been some interesting research over the last 10 years or so, interesting from the point of view that some of it is conflicting.

It is typical when running scientific DBTs for thresholds of particular effects, say jitter or other distortions for example, to provide a period of training for the subject. Starting with, say, clearly audible amounts of jitter allows the subject to familiarise themselves with the specific sound (distortions) it creates, which makes them more sensitive to the effect and lowers their threshold, often significantly, providing a threshold determination effectively encompassing a range of subjects wider than the limited sample size.

It’s common that training is also used in localisation and HRTF testing, but depending on the exact training this can really screw with the results. This has been researched in the last decade and the evidence suggests we have a very high plasticity for learning new locational perception. For example, someone else’s HRTF that is different from our own and doesn’t work at all can become almost perfect after a couple of days of heavy training, and visual feedback cues definitely help. Interestingly, subjects did not appear to lose (replace) their own HRTF after this training, so they effectively had two HRTFs that worked for them, and they also seemed to retain this new/“wrong” HRTF even after a couple of months of not using it. This was with relatively simple localisation experiments, not complex multi-positional content such as music mixes, but various studies indicate this plasticity.

Of course, we all have different listening experiences, have spent more or less time critically listening to speakers or headphones, or headphones with crossfeed or HRTFs. So, not only do we all have different HRTFs and therefore are all likely to have somewhat different perception of the same presentation, but this difference is likely to be exacerbated in the case of presentations we like, which we therefore listen to more and become more “trained” (acclimatised) to.
Well now I know that all the theories about spatiality I have in my head work only for me.
Maybe at last you’re starting to get it? That’s been the problem all along, you’ve developed theories to explain your personal perception, theories that have holes/omissions. You’ve then defended your theories with the circular argument that you’re justified in omitting these factors because you don’t perceive them and/or they don’t negatively impact your perception. However, this is the sound science forum and in science you can’t argue for the validity of a theory by simply omitting/dismissing the factors which invalidate it, regardless of your personal perception. Nor can you use what is effectively an “appeal to authority” fallacy, by stating/assuming everyone who disagrees is just ignorant.

Incidentally, I obviously do not need a lesson in M/S basics. M/S provides some useful options when mastering but is also a dangerous tool. The Mid channel does not just contain the information in the middle of the L/R mix, so there are potential phase and spectral issues when processing either the M or S channels.

G
 
Last edited:
Oct 14, 2022 at 9:40 AM Post #2,038 of 2,192
I presume you’ve probably seen a fair bit of the research on sound localisation and HRTFs. For the benefit of others if you have, there has been some interesting research over the last 10 years or so, interesting from the point of view that some of it is conflicting.

It is typical when running scientific DBTs for thresholds of particular effects, say jitter or other distortions for example, to provide a period of training for the subject. Starting with say clearly audible amounts of jitter allows the subject to familiarise themselves with the specific sound (distortions) it creates which makes them more sensitive to the effect and lowers their threshold, often significantly, providing a threshold determination effectively encompassing a range of subjects wider than the limited sample size. It’s common that training is also used in localisation and HRTF testing but depending on the exact training this can really screw with the results. This has been researched in the last decade and the evidence suggests we have a very high plasticity to learning new locational perception. For example, someone else’s HRTF that is different from our own and doesn’t work at all, can become almost perfect after a couple of days of heavy training and visual feedback cues definitely help. Interestingly subjects did not appear to lose (replace) their own HRTF after this training, so they effectively had two HRTFs that worked for them and also seemed to retain this new/“wrong” HRTF even after a couple of months of not using it. This was with relatively simple localisation experiments, not complex multi-positional content such as music mixes but various studies indicate this plasticity. Of course we all have different listening experiences, have spent more or less time critically listening to speakers or headphones, or headphones with crossfeed or HRTFs. So, not only do we all have different HRTFs and therefore are all likely to have somewhat different perception of the same presentation but this difference is likely to be exacerbated in the case of presentations we like, which we therefore listen to more and become more “trained” (acclimatised) to.
Yes, I already had similar experiences with online games where, at least when I was playing a lot, both audio and video were bad and we had to learn how to interpret cues almost from scratch. For sound I remember some panning and an abuse of the Doppler effect for anything that moved, and not much else. But after some time, it became second nature to locate a baddy walking near me by ear, or to identify a certain tone on that one pixel as a guy crouching far away. But in that case it was training for something that had nothing really correct/natural about it.
With my laptop-on-the-side anecdote, from the start I had the right cues to locate it where it was. It's clearly a case of my brain deciding to follow my eyes over anything else, which could move a perfectly fine sound source toward the screen in the long run. That's what amazed me. Because a quick look to the side should have let my brain confirm that the sound source was where it sounded, but somehow that's not how it went. People were moving on the screen, the sounds connected with their actions, voilà!
 
Oct 14, 2022 at 10:39 AM Post #2,039 of 2,192
Maybe at last you’re starting to get it?
Well, it has been a slow process that started when I came here. I have tried to convince myself that I know this stuff in order to uphold my self-confidence, but it seems I have to AGAIN admit defeat and failure in life. So, my self-esteem is AGAIN shattered. I don't know what to do. I am destined to live with low self-esteem.

That’s been the problem all along, you’ve developed theories to explain your personal perception, theories that have holes/omissions. You’ve then defended your theories with the circular argument that you’re justified in omitting these factors because you don’t perceive them and/or they don’t negatively impact your perception. However, this is the sound science forum and in science you can’t argue for the validity of a theory by simply omitting/dismissing the factors which invalidate it, regardless of your personal perception. Nor can you use what is effectively an “appeal to authority” fallacy, by stating/assuming everyone who disagrees is just ignorant.
My theories could have been correct. At least I had the initiative to make them. Most people don't think about anything and never come up with any theories. I did nothing wrong. I was just unlucky to develop a faulty theory. I believed in myself. Everyone says "BELIEVE IN YOURSELF!" Well, I DID!!! That is what it gave me! I don't understand life! How am I supposed to keep believing in myself when I am always told I am wrong?

Incidentally, I obviously do not need a lesson in M/S basics. M/S provides some useful options when mastering but is also a dangerous tool. The Mid channel does not just contain the information in the middle of the L/R mix, so there are potential phase and spectral issues when processing either the M or S channels.

G
Of course M/S processing carries dangers and one has to know what he/she is doing when applying it. I talked about M and S channels as analytic tools. The purpose wasn't to change the sound, but to analyse the signal beyond the L and R channels.
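For readers following along, M/S here is just a sum/difference transform of the L/R channels. A minimal sketch with hypothetical sample values, showing both the analysis view and gregorio's point that a hard-panned source still lands in the Mid channel:

```python
# M/S encode/decode as a sum/difference transform of L/R.
def ms_encode(left, right):
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side):
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

# Hypothetical samples, hard-panned left: present in L, silent in R.
left, right = [1.0, -0.5, 0.25], [0.0, 0.0, 0.0]
mid, side = ms_encode(left, right)
print(mid)   # [0.5, -0.25, 0.125] -- half the panned signal appears in Mid
# The transform is lossless, so it works as a pure analysis tool:
assert ms_decode(mid, side) == (left, right)
```

Because the round trip is exact, inspecting M and S changes nothing about the sound; the risks gregorio mentions arise only once you process M or S independently before decoding.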
 
Last edited:
Oct 14, 2022 at 10:51 AM Post #2,040 of 2,192
For example, someone else’s HRTF that is different from our own and doesn’t work at all, can become almost perfect after a couple of days of heavy training and visual feedback cues definitely help. Interestingly subjects did not appear to loose (replace) their own HRTF after this training, so they effectively had two HRTFs that worked for them and also seemed to retain this new/“wrong” HRTF even after a couple of months of not using it.
Ah, interesting. This seems to relate to something I experienced with the Smyth Realiser using my own personal measurements. In the beginning there occasionally seemed to be certain problem sounds or frequencies that sounded inside my head, while the rest was at the proper location(s). Frequently moving my head always fixed the problem. After a few days, however, the problem was gone. All sound was and stayed at the proper location, also when holding my head still. I think I trained my brain to somehow correct for the imperfections of the PRIR. In this case it seems the head tracking, correctly adapting ITD and other things to my head movements, gives my brain very strong cues that overrule errors in the HRTF.
 
