To crossfeed or not to crossfeed? That is the question...
Nov 4, 2019 at 3:12 PM Post #1,396 of 2,146
This has not been about crossfeed for a long time. This is feuding.
I'm going to watch The X-Files Season 11 now. That's more pleasant than being here…
Keep your birds, Ferraris and muddy fields. Not interested.
 
Nov 4, 2019 at 5:14 PM Post #1,397 of 2,146
I see it like this. Speaker spatiality = bird. Headphone spatiality (as it is) = injured bird that can't fly. Headphone spatiality (with crossfeed) = injured bird that has been taken care of and has a "fixed" wing so that it can fly somehow, not as well as it could before the injury, but it can fly nevertheless.

 
Nov 4, 2019 at 5:54 PM Post #1,398 of 2,146
The same could be said about your boy, Gregorio. Why does he keep responding and repeating himself over and over? In fact, he's worse, because all he does now is stoop to repeated ad hominem attacks, which really should be moderated. Gander, meet Goose.

He argues on point with facts. He defines his terms very carefully. He has experience in the field. He understands what he is talking about. He repeats himself because certain people refuse to acknowledge any fact that doesn't support their own argument. They repeat their error over and over as if they don't even hear or understand what he is saying and try to bluff their way through. Gregorio is human and gets frustrated. He reacts with harsh words. When I get frustrated, I give a couple of shots across the bow, then I just react with jokes because I've written the poster off completely. We all react differently. But the best way to get along is to listen to what people are saying to you and interact with them honestly. Gregorio has a wealth of information if you listen. If you don't listen you get what you get. I don't feel sorry for people who go down that road.
 
Nov 4, 2019 at 8:09 PM Post #1,399 of 2,146
KEMAR is clockwise, IRCAM is counterclockwise.

If IRCAM is counterclockwise, then this file contains impulses that represent how the left and right ears hear the left speaker: IRC_1012_C_R0195_T030_P000.wav

and this file contains impulses that represent how the ears hear the right speaker:
IRC_1012_C_R0195_T330_P000.wav

because T030 and T330 mean 30- and 330-degree angles.

(P000 means elevation is 0).

However, when I process the sound with these impulses, the result is that the virtual speaker is perfectly located at 90 degrees to the left!!
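For anyone who wants to try this outside a VST host, here is a minimal sketch of the convolution being described. It assumes the two LISTEN files are stereo WAVs whose channels are the left-ear and right-ear impulse responses, that the music is a stereo WAV at the same sample rate, and that "input.wav" / "output_binaural.wav" are just placeholder names:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

def load_stereo(path):
    """Read a WAV and return (rate, float samples shaped (n, 2))."""
    rate, data = wavfile.read(path)
    data = data.astype(np.float64)
    if data.ndim == 1:
        data = np.stack([data, data], axis=1)
    return rate, data

def binauralize(song, hrir_left_spk, hrir_right_spk):
    L, R = song[:, 0], song[:, 1]
    # Each ear hears both virtual speakers: the direct path and the crossfeed path.
    ll = fftconvolve(L, hrir_left_spk[:, 0])   # left speaker  -> left ear
    rl = fftconvolve(R, hrir_right_spk[:, 0])  # right speaker -> left ear
    lr = fftconvolve(L, hrir_left_spk[:, 1])   # left speaker  -> right ear
    rr = fftconvolve(R, hrir_right_spk[:, 1])  # right speaker -> right ear
    n = min(len(ll), len(rl), len(lr), len(rr))
    out = np.stack([ll[:n] + rl[:n], lr[:n] + rr[:n]], axis=1)
    return out / np.max(np.abs(out))           # normalize to avoid clipping

_, hrir_left_spk = load_stereo("IRC_1012_C_R0195_T030_P000.wav")
_, hrir_right_spk = load_stereo("IRC_1012_C_R0195_T330_P000.wav")
rate, song = load_stereo("input.wav")          # placeholder file name
out = binauralize(song, hrir_left_spk, hrir_right_spk)
wavfile.write("output_binaural.wav", rate, (out * 32767).astype(np.int16))
```

If T030 really is the left virtual speaker and T330 the right one, the result should image roughly in front; if instead everything collapses to one side, as described above, then either the azimuth convention or the channel order inside the files is the other way around.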
 
Nov 4, 2019 at 9:14 PM Post #1,400 of 2,146
Yesterday I also tried this trick in VST-chainer: I inserted 112dB Redline Reverb and ran it parallel to the signal path.
(see the block circled in blue):

[attached screenshot: the VST chain, with the parallel reverb block circled in blue]


It takes the direct signal, reverberates it (you can control its amount either with the output knob in the Reverb plugin itself or, as I prefer to do it, with the BitShiftGain), and mixes it with the crossfed signal.

Immediately, I sensed an improvement in the resulting sound in the form of sound sources moving away from me (this is what I wanted to achieve!). Now the sound really resembles 112dB Redline Monitor, I am so excited. What if I can finally improve upon it? At least the initial comparisons sound quite promising.

Now I am thinking that maybe I should have placed this Reverb block inside the Treble Boost block's pathway above it, to keep the schematic simpler... But doing so would brighten all the reverberation in addition to the main signal...

The experiments continue, stay tuned.
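For reference, the routing can be written down as a few lines of signal flow. This is only a sketch of the parallel structure described above; `crossfeed_fn` and `reverb_fn` are hypothetical stand-ins for the crossfeed chain and 112dB Redline Reverb, and `wet_gain_db` plays the role of the BitShiftGain trim:

```python
import numpy as np

def parallel_reverb_mix(signal, crossfeed_fn, reverb_fn, wet_gain_db=-12.0):
    """Mix a crossfed dry path with a parallel reverb path fed from the direct signal."""
    dry = crossfeed_fn(signal)                            # main crossfed path
    wet = reverb_fn(signal) * 10 ** (wet_gain_db / 20.0)  # parallel reverb path, trimmed
    n = min(len(dry), len(wet))
    return dry[:n] + wet[:n]                              # mixed at the output
```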
 
Nov 4, 2019 at 9:16 PM Post #1,401 of 2,146
I am trying to use this website http://recherche.ircam.fr/equipes/salles/listen/index.html to make an individualized crossfeed for myself.

I listened to the demo sounds and found that the #1012 model gives me a very realistic sound: when the sound is supposed to be passing from left to right in front of me, I really hear it pass in front of me. With the other heads, the sound at that point tends to go up and then down.

The description here is very confusing:
"azimuth in degrees (3 digits, from 000 to 180 for source on your LEFT, and from 180 to 359 for source on your right)"

Should it not be the other way around? I think there is a mistake in the description and it should read "From 000 to 180 is for a source on the RIGHT, and from 180 to 359 for a source on the LEFT."

When I google "KEMAR + Head = Azimuth", I see pictures like this:

[diagram: KEMAR dummy head with a hearing aid on the right ear and different azimuth angles]
 
Nov 4, 2019 at 9:39 PM Post #1,402 of 2,146
I honestly don't remember, as I rapidly started messing around with those impulses (including renaming them), but there were a few that had an obvious (audible and measurable) imbalance, so it was relatively easy to refer to the circular demo to check that a perceived collapse toward one side happened on the same side in your own fancy crossfeed convolution with music.
ultimately, as you've guessed, you want to focus on the frontal impression and spend some time confirming it with the extracted 30° and 330° impulses (or whatever you like best in that area) used in a self-made "circuit", or, as a start, just applied in a so-called "true stereo" convolver: the kind that accepts 4-channel impulses (the two stereo impulses) instead of just 2.

I don't really need "true stereo convolvers" (I guess), since I can instead simply use two ordinary stereo convolvers and route their inputs and outputs the way I need in a VST chainer.

I tried that and the result was weird. I got a perfect illusion that the sound was coming from the left only.
I even tried to split the two downloaded stereo impulses into four mono impulses (I had to use four convolvers), but the result was the same.

So, there is something wrong either with the orientation of that azimuth dial or with the description of the impulses offered on the website.

I wrote to the contact person at that website; now I am waiting for a response.
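In the meantime, a quick way to check the convention yourself is to look at which ear gets the louder and earlier impulse in one of the files. This sketch assumes the same stereo layout as before (channel 0 = left ear, channel 1 = right ear):

```python
import numpy as np
from scipy.io import wavfile

rate, hrir = wavfile.read("IRC_1012_C_R0195_T030_P000.wav")
hrir = hrir.astype(np.float64)
left_ear, right_ear = hrir[:, 0], hrir[:, 1]
print("peak level  L/R:", np.max(np.abs(left_ear)), np.max(np.abs(right_ear)))
print("peak sample L/R:", np.argmax(np.abs(left_ear)), np.argmax(np.abs(right_ear)))
# If the left ear peaks louder and earlier, T030 is a source on the LEFT
# (counterclockwise azimuth); if the right ear wins, the labelling is clockwise.
```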
 
Nov 4, 2019 at 10:34 PM Post #1,403 of 2,146
Subjective or not, I simply think headphone sound as it is is completely wrong and doesn't make sense, because it's spatiality made for speakers, and speakers reproduce spatiality TOTALLY differently to headphones. From my perspective this FACT is ignored by others here. Subjective or not, I have a hard time believing large ILD values at low frequencies can be natural to anyone. How is that possible? Our brain learns spatial cues based on what we hear in everyday life, and large ILD values at low frequencies are not something we hear a lot. Anyone can make binaural recordings with mics in their ears, record the sounds of their daily life and analyse the ILD. This should not be something I have to fight about. I totally get that people are different, but how different can people be? It doesn't make sense that someone has elephant or cat hearing, because we are humans. We should have somewhat similar hearing. Our spatial hearing is based on learning the connection between the spatial cues and the visual information about the sound source. How can such a process develop totally different spatial hearing in different people? Makes no sense! This must be about personal preferences rather than the science of spatial hearing: I still believe crossfeed is a step toward spatial information that makes more sense scientifically (because headphone sound as it is often makes very little sense spatially), but people have their preferences and expectations, which crossfeed does not meet for everybody.
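As a rough illustration of that last point, here is a sketch of how one could estimate the low-frequency ILD of such a binaural recording; the file name and the 800 Hz band edge are only placeholder assumptions:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

rate, x = wavfile.read("binaural_recording.wav")   # assumed stereo, mics at the ears
x = x.astype(np.float64)
sos = butter(4, 800 / (rate / 2), btype="low", output="sos")
low_l = sosfiltfilt(sos, x[:, 0])                  # keep only the low band
low_r = sosfiltfilt(sos, x[:, 1])
rms = lambda s: np.sqrt(np.mean(s ** 2) + 1e-12)
ild_db = 20 * np.log10(rms(low_l) / rms(low_r))
print(f"low-frequency ILD: {ild_db:+.1f} dB")
# Natural sources rarely produce large sustained ILDs in this band, whereas
# hard-panned studio mixes played back on headphones easily can.
```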

I have said many times that after switching crossfeed ON, the sound image seems to narrow a bit (because spatial hearing reacts to the sudden change of ILD scaling), but after a minute it goes back. That's spatial hearing adapting. In fact I believe that's spatial hearing adapting to spatial cues that make sense, while normal headphone listening means adapting to spatial cues that don't make sense. To me the difference is not in the width, but in how natural the sound image sounds. In any case, this is what crossfeed does for me.

Good enough is one thing, improvement is another thing. Nothing is good enough. People want perfection and can never have it. I'm not after perfection. I am a realist. I'm happy about improvement, small or big. That's why I can enjoy the improvements crossfeed gives to my ears.

In other topics I don't have the problem I have here. People who have studied digital audio, for example, share facts with me, and there is a clear division between people who understand digital audio and those who don't. In this topic it seems different. Somehow crossfeed seems difficult to understand even for those who know a lot about spatial hearing. I look at crossfeed from the angle of what it does and achieves, while other people look at it from the angle of what it doesn't do or achieve. I believe this is because my opinion is that headphone spatiality is completely wrong and a mess, so that almost anything is better than nothing. From my point of view people don't take the problem of headphone spatiality seriously enough, and even I was simply used to it as it is before having my "eureka" moment in 2012. Speakers in a room can't produce nonsensical spatial cues to listeners, but headphones can! Do we want nonsensical spatiality? If we do, and it is artistic intention, then clearly speakers (without crosstalk cancelling) are no good. If we don't want nonsensical spatial cues, then headphones are no good unless we use something that turns the nonsensical spatial cues into something that makes sense. I think this reasoning is called for even if a lot of subjectivity is part of the equation.

I am totally cool with crossfeed not meeting someone's personal preferences, but the way my reasoning and factual background have been questioned is unfair. Maybe there have been excuses on both sides? I have excuses to "ignore" some facts that don't support crossfeed, but other people have excuses to ignore the facts that do support crossfeed.
that's clearly your perception of the system and also of the situation. can't say it is mine. I'm fresh out of analogies, masturbation was probably the more fitting one as it did involve a fair share of mental images and impressions. your initial view of speaker vs headphone is one I happen to share, because that's how I feel and because, with most albums ever released having been made on and for speakers, speakers seem the logical reference for desirable playback. even then I wouldn't go as far as calling it correct, because there are many rooms, many speakers, and we rarely know the actual reference used. but I happen to agree on the decision to pick speaker playback as the reference for whatever we want to get in the end.
and that's pretty much where I stop agreeing with you, because the model you keep explaining as your so-called objective demonstration is not speaker playback. don't know how many times we have to say it, you just don't care about that "detail". a human head is going to move, a human is going to know he has a headphone on his head, those habits/expectations are not magically going away for your convenience. you assume that if you mix the channels maybe kind of like a listener would get on speakers with his head stuck in an anechoic chamber, then magically he'll feel a more natural experience. but you do not know that! you only assume it because that's your subjective experience. for starters, let's talk about the odds that your crossfeed settings will actually come close enough to what a listener would experience. have you seen the effective variations from listener to listener? can you claim to know that your changes are going to trigger the desired type of impression anyway, and not something else? let's assume that step turns out ok, then what? the guy will still be missing reverb, and any tiny head movement will still reveal to his brain that it's all BS. so now instead of the possibly comfy experience of headphone playback (not because it's natural but because the listener may have been using headphones for decades and just got used to that different experience), the listener ends up with directly conflicting localization cues: one cue telling him the source is over there in front, another cue telling him the source is clearly stuck on his head, and the only position that agrees with head movement is on top or inside the head (depending on how we move). you see that as an improvement, but how do you know that for someone else it doesn't end up feeling even more artificial and unnatural than default headphone playback, which doesn't bother at all with localization beyond "this is more on the left"? at every turn you make your own assumption that the entire world will feel like you do, enjoy what you enjoy, and prefer what you prefer. but take any song, any food, and you'll always find people who do not agree with you and do not think as you do when experiencing them. and that's the problem made obvious on the subjective side, but it should have been just as obvious on the objective side the moment you took a complete multivariable system working under a clear set of conditions and started to cut pieces out of it to make your own "objective" model, where you removed head movement, room reverb, headphone signature, specific HRTF, etc. what remained was not speaker playback, what remained was your explanation of what crossfeed does and why. from a scientific approach you can't just take a system, cut out pieces and variables until it's simple enough, then declare that made-up model to have the qualities and behaviors of the original real system. no scientist would accept that unless you demonstrate to them that most results and conclusions do indeed apply to the made-up model. something you have never done and, as I said, probably cannot do. instead what you did is try it for yourself, feel that it was correct, and decide that it was apparently conclusive for the rest of humanity.
be it objective or subjective, your views are the views of someone who looks to validate his idea, not someone who looks to test whether it's correct. you look for what agrees with you, when science would systematically look to disprove an idea and see how sturdy that idea really is. as a fully subjective tool, you can only validate crossfeed as something that happens to be nice for you, and nothing else. if you want to declare any more than that, you have to run trials or at least ask for opinions, but without controlled trials you'll never know that people set their crossfeed correctly, so... not very meaningful, unless even then they all declare that it's really good and a subjective improvement (which, as we know, doesn't happen very often).
and all this time, while I juggle between objective and subjective, I try to keep them somewhat apart, but of course in practice it's a giant pudding and our subjective interpretation is going to be the sum of X variables, objective and subjective. and yet again, unless you test those on many people under those specific conditions, instead of declaring that you can just apply speaker playback knowledge, you'll really know nothing.

if you want validation for remembering how to handle a matrix, how to calculate a delay based on distance and the speed of sound, or how to get some notions of acoustics about how an obstacle will have a frequency-dependent impact, then here I am to give you that. I struggled more with matrices than with anything else, mostly due to the teacher I had, as once he changed I somehow got over it in a week, but still, all in all I struggled for almost 6 months, applying rules for reasons I did not understand (which, for a guy who loved math, was torture). it then took me a little less than 10 years to completely forget all about them and most of the math I ever learned. nowadays I have to think for a second about how to do a division by myself... so I'm sincerely jealous of anybody who still has enough math in him to go look up all the cool stuff I struggle with, as in audio there is an endless amount of things that interest me deeply but require more math than I remember. but if you're looking for someone to tell you how right you are about the clear benefits of crossfeed based on your "demonstration", then I'm not the guy, and I doubt anybody informed on the subject ever will be, because you take enough liberties to make what you say false. be it the method or your conclusions, both take way too many shortcuts.

and here is the thing, I think I've explained all that many times already. so while I'm still typing, I'm fairly sure that once more you won't care, and that once more you'll soon be back explaining how your little toy model of acoustics works and calling a one-band EQ ILD. you clearly have all the cards to understand the situation and see your own errors, but clearly we can't just see them for you. when you change some variables in a complex system, you can usually predict some of the consequences, but not necessarily all of them. when you go and just create a model for the workings of crossfeed, given the many differences compared to actual speaker playback, pretending that you can predict results based on speaker playback is so wrong that we shouldn't need to explain it in 25 different ways for over 2 years. I don't know what else to tell you.
some people enjoy vinyl playback despite how it objectively does everything wrong. some people enjoy super colored, kind of grainy tube amps. some people enjoy crossfeed. all those people are happy with what they've got and that's really great. if they're happy, all is good. and among those guys you always have a handful who want their preferences to be justified as factual superiority. and most of them pass as loonies because they keep on trying to defend something that can't be defended, with reasons that seem to make sense only in their own heads. I'm sure every single one of them thinks he's fighting the good fight for his beloved technology, but the effective result is the opposite. it slowly becomes weird to be associated with them, even just through a personal preference. if you really care about crossfeed and wish to promote it, stop this nonsense of trying to make it be something it is not, and stop behaving like you're crossfeed itself and it's a cult thing. that's my sincere advice to you. unlike Gregorio, who fights for facts until he cannot, I'm still coming back and posting that crap because somewhere I still want to have a rational conversation with you, and I still hope it's possible. you've proved to me that it is on many other topics, and you've strongly worked on demonstrating to me that it isn't on the crossfeed topic. only you can figure out why you're so amazingly different and utterly biased about this. remember, I'm actually a guy who likes crossfeed. that should put things in perspective a little when even I cannot get behind what you say.
 
Nov 5, 2019 at 3:00 AM Post #1,404 of 2,146
1. and that's pretty much where I stop agreeing with you, because the model you keep explaining as your so-called objective demonstration is not speaker playback.

2. don't know how many times we have to say it, you just don't care about that "detail".

3. a human head is going to move, a human is going to know he has a headphone on his head, those habits/expectations are not magically going away for your convenience.

4. you assume that if you mix the channels maybe kind of like a listener would get on speakers with his head stuck in an anechoic chamber, then magically he'll feel a more natural experience. but you do not know that! you only assume it because that's your subjective experience.

5. for starters, let's talk about the odds that your crossfeed settings will actually come close enough to what a listener would experience. have you seen the effective variations from listener to listener? can you claim to know that your changes are going to trigger the desired type of impression anyway, and not something else? let's assume that step turns out ok, then what?

6. the guy will still be missing reverb, and any tiny head movement will still reveal to his brain that it's all BS.

7. so now instead of the possibly comfy experience of headphone playback (not because it's natural but because the listener may have been using headphones for decades and just got used to that different experience), the listener ends up with directly conflicting localization cues: one cue telling him the source is over there in front, another cue telling him the source is clearly stuck on his head, and the only position that agrees with head movement is on top or inside the head (depending on how we move).

8. you see that as an improvement, but how do you know that for someone else it doesn't end up feeling even more artificial and unnatural than default headphone playback, which doesn't bother at all with localization beyond "this is more on the left"? at every turn you make your own assumption that the entire world will feel like you do, enjoy what you enjoy, and prefer what you prefer. but take any song, any food, and you'll always find people who do not agree with you and do not think as you do when experiencing them. and that's the problem made obvious on the subjective side, but it should have been just as obvious on the objective side the moment you took a complete multivariable system working under a clear set of conditions and started to cut pieces out of it to make your own "objective" model, where you removed head movement, room reverb, headphone signature, specific HRTF, etc. what remained was not speaker playback, what remained was your explanation of what crossfeed does and why. from a scientific approach you can't just take a system, cut out pieces and variables until it's simple enough, then declare that made-up model to have the qualities and behaviors of the original real system. no scientist would accept that unless you demonstrate to them that most results and conclusions do indeed apply to the made-up model. something you have never done and, as I said, probably cannot do. instead what you did is try it for yourself, feel that it was correct, and decide that it was apparently conclusive for the rest of humanity.
1. Yep. It's obviously not speaker playback. It is crossfed headphone playback.
2. Wrong. I do "care" about that detail. That doesn't mean I'm gonna suffer excessive ILD if I can fix it.
3. Yep. I know I am wearing headphones, crossfeed or not.
4. If headphone sound is a light-year from speakers, crossfeed takes me to maybe 0.8 light-years away. Still far away, but those 0.2 light-years were, for me, the crucial part: the excessive, unnatural spatiality that I find annoying. If I want to go all the way, then I simply listen to my speakers! However, I like the 0.8 light-years distance. Since I have used crossfeed for about 7.5 years now, I am pretty sure I know how my spatial hearing reacts to the changes crossfeed makes to the sound. If your ears react differently, then that's not my problem, is it?

5. Why are there some random limits for "coming close enough"? Who says what is close enough? Having speaker-like sound on headphones is a huge technical challenge, but we can have better headphone sound: sound that is clearly headphone sound, far from speaker sound, but better in some way, for example free of excessive ILD. We reduce the harm that comes from the differences between headphone and speaker sound. That difference creates, for example, excessive ILD, but we can fix that with crossfeed, so that's what I have been doing ever since I realized the existence of the problem. Of course, if YOUR ears think excessive ILD is not a problem, then don't use crossfeed!

6. Yeah, and without crossfeed he/she is missing those things as well, plus potentially dealing with the problem of excessive ILD. How is that any better? Headphone sound is (spatially) BS. That's why I crossfeed it into the kind of BS I actually enjoy.

7. Somehow my mind was flexible enough to make sense of this, and the result is a miniature soundstage (given that the spatial information of the recording allows it).

8. Apparently I don't know. That's why I have backpedaled and now I only say what I hear, because that is what I know. However, I will continue to use science to justify what I hear, because the science made me discover crossfeed (some sort of logical connection exists), and for me crossfeed does what the science says it should do: scale ILD to natural levels. Never did I expect to get anywhere near speaker sound, but somehow you think I am that dumb. It is ridiculous to think crossfeed would turn headphones into speakers, and you laid out the reasons. Crossfeed scales ILD, and in my case that allows my spatial hearing to make more sense of the spatiality and to enjoy it more, not suffering from excessive ILD. Is this a misunderstanding because I have said crossfeed makes the sound speaker-like? Of course I didn't mean identical! A notch toward speakers, because the ILD is more similar (still quite different, but without crossfeed 10 times more different at low frequencies!). Of course crossfeed can't do all the other stuff of speaker sound, room acoustics and all, but it can mimic acoustic crossfeed.
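For concreteness, here is a minimal sketch of the kind of ILD scaling being described: each channel receives a low-passed, attenuated, slightly delayed copy of the opposite channel, which reduces low-frequency ILD toward what two speakers would produce. The 700 Hz cutoff, -6 dB feed level and 300 µs delay are illustrative assumptions, not the settings of any particular crossfeed product.

```python
import numpy as np
from scipy.signal import butter, lfilter

def simple_crossfeed(stereo, rate, cutoff_hz=700, feed_db=-6.0, delay_us=300):
    """Feed a low-passed, attenuated, delayed copy of each channel into the other."""
    b, a = butter(1, cutoff_hz / (rate / 2))           # 1st-order low-pass for the feed path
    gain = 10 ** (feed_db / 20.0)
    delay = int(round(rate * delay_us / 1e6))          # interaural-style delay in samples
    L, R = stereo[:, 0], stereo[:, 1]
    feed_to_L = np.concatenate([np.zeros(delay), lfilter(b, a, R)])[: len(L)] * gain
    feed_to_R = np.concatenate([np.zeros(delay), lfilter(b, a, L)])[: len(R)] * gain
    out = np.stack([L + feed_to_L, R + feed_to_R], axis=1)
    return out / np.max(np.abs(out))                   # normalize to avoid clipping
```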
 
Nov 5, 2019 at 3:08 AM Post #1,405 of 2,146
[1] We are not dealing with real/natural spatiality? Kind of agree. So there is no natural spatiality to be messed up. If crossfeed makes the sound appear more natural, what's the problem?
[1a] I watched the video. It is quite a Finnish study (Tapio Lokki and Ville Pulkki mentioned). So I am not ignoring it. The video doesn't say crossfeed can't improve headphone audio. It deals with different things than what crossfeed does.
[2] I wasn't the one who brought birds into this! It's pointless to argue whether imaginary birds can fly. This is lunacy! People talk about birds to prove me wrong, and when I try to defend myself this happens! **** with the BIRDS!! ... [from the next post] Keep your birds, Ferraris and muddy fields. Not interested.
[3] This has not been about crossfeed for a long time. This is feuding.

1. The problem is that crossfeed is NOT some magical process that actually turns the "unnatural spatiality" into "natural spatiality", it obviously just crossfeeds the "unnatural spatiality". Unnatural spatiality + crossfeed = crossfed unnatural spatiality, it does NOT equal "natural spatiality". However, to your personal perception, crossfeed seems to make this crossfed unnatural spatiality "appear" more natural, which is fine but of course we're now talking about the "appearance" of spatiality to your personal perception, NOT objective fact! You seem to agree that crossfeed isn't some magical process and that we all perceive sound/spatiality somewhat differently but then effectively ignore/dismiss this and go on about natural/objective ILD, which by itself does not define spatiality anyway, so then you also have to ignore/dismiss all the other parameters that actually define spatiality. If all that's not bad enough, you then (falsely) state you're not ignoring/dismissing anything, this is indeed "lunacy"!
1a. Again, another of your classic self-contradictions! The video does NOT "deal with different things than what crossfeed does", in large part it deals with what objectively occurs (the FR) at the ear drums, so unless crossfeed is not crossfeed but is instead some magical process that bypasses the ear drums, then the video (at least in part) deals with the same things! So, you've watched the video and on the basis of your false conclusion that it has nothing to do with crossfeed, you ignore/dismiss it and then you falsely state that you are not ignoring it? This is indeed "lunacy"!

2. As you clearly fail to understand the RELEVANT facts/evidence and on that basis keep ignoring/dismissing them (despite them being explained to you numerous times in different ways), then using analogies is a logical way of simplifying and illustrating the facts but you state you're "Not interested", which of course is up to you but then of course you can't ask the question: "What facts am I not interested in?", because that is "lunacy"!

3. In a sense, it IS feuding. You making false assertions of objective fact and me/us refuting them. So, that leaves only 3 options going forward:
A. Me/Us also ignoring/dismissing the relevant facts/evidence/science, on the basis of your personal perception and desire to be a messiah.
B. You ceasing to make false assertions of objective fact, or
C. You continuing to make false assertions and me/us continuing to refute them.

"A" is never going to happen here, or else it ceases to be the Sound Science sub-forum.
"B" is apparently never going to happen because you don't believe you're making false assertions and won't stop posting them because you think you're an enlightened messiah and must enlighten the rest of us.
Which leaves only "C", the endless "feuding" of false assertions vs refutations. But why then are you complaining about this "feuding", when it's you who's causing it and you who can end it? So round and round we go and indeed "This is lunacy"!

G
 
Nov 5, 2019 at 3:13 AM Post #1,406 of 2,146
Yesterday I also tried this trick in VST-chainer: I inserted 112dB Redline Reverb and ran it parallel to the signal path.
(see the block circled in blue):

[attached screenshot: the VST chain, with the parallel reverb block circled in blue]


It takes the direct signal, reverberates it (you can control its amount either with the output knob in the Reverb plugin itself or, as I prefer to do it, with the BitShiftGain), and mixes it with the crossfed signal.

Immediately, I sensed an improvement in the resulting sound in the form of sound sources moving away from me (this is what I wanted to achieve!). Now the sound really resembles 112dB Redline Monitor, I am so excited. What if I can finally improve upon it? At least the initial comparisons sound quite promising.

Now I am thinking that maybe I should have placed this Reverb block inside the Treble Boost block's pathway above it, to keep the schematic simpler... But doing so would brighten all the reverberation in addition to the main signal...

The experiments continue, stay tuned.

Reverberation indeed is a strong spatial cue of distance. This starts to be "beyond crossfeed", but it's good you like it!
 
Nov 5, 2019 at 3:36 AM Post #1,407 of 2,146
1. The problem is that crossfeed is NOT some magical process that actually turns the "unnatural spatiality" into "natural spatiality", it obviously just crossfeeds the "unnatural spatiality". Unnatural spatiality + crossfeed = crossfed unnatural spatiality, it does NOT equal "natural spatiality". However, to your personal perception, crossfeed seems to make this crossfed unnatural spatiality "appear" more natural, which is fine but of course we're now talking about the "appearance" of spatiality to your personal perception, NOT objective fact! You seem to agree that crossfeed isn't some magical process and that we all perceive sound/spatiality somewhat differently but then effectively ignore/dismiss this and go on about natural/objective ILD, which by itself does not define spatiality anyway, so then you also have to ignore/dismiss all the other parameters that actually define spatiality. If all that's not bad enough, you then (falsely) state you're not ignoring/dismissing anything, this is indeed "lunacy"!
G

1. If the facts mattered, I should hate headphone sound, crossfeed or not. It's FACTUALLY unnatural. It is a scientific fact that we don't hear like mics. We have perception, and in the end that matters. So, if my perception says crossfed unnatural spatiality appears natural, that's how it is for me. Even with speakers, stereo sound is based on perception and on fooling spatial hearing rather than on physical facts. Perception has to be taken into account.

ILD by itself doesn't define spatiality (at least not well), but it doesn't have to. The other parameters don't disappear anywhere in crossfeed. I believe, and my spatial hearing agrees, that the combination of spatial parameters seems more natural after crossfeed.
 
Nov 5, 2019 at 3:52 AM Post #1,408 of 2,146
1a. Again, another of your classic self-contradictions! The video does NOT "deal with different things than what crossfeed does", in large part it deals with what objectively occurs (the FR) at the ear drums, so unless crossfeed is not crossfeed but is instead some magical process that bypasses the ear drums, then the video (at least in part) deals with the same things! So, you've watched the video and on the basis of your false conclusion that it has nothing to do with crossfeed, you ignore/dismiss it and then you falsely state that you are not ignoring it? This is indeed "lunacy"!

G

So, nobody thinks about eardrums with headphones, but if you use crossfeed, suddenly eardrums are interesting? What? Ear canal resonances and eardrums are the same problem whether you use crossfeed or not. I don't understand your reasoning of problems emerging only when you use crossfeed. They are there, crossfeed or not! That video has hardly anything to do with crossfeed. It is about creating "sonic ultrarealism", not just scaling ILD. The stuff in the video is like 1000 times more sophisticated than default crossfeed. It's like watching a video of Ferrari F1 cars and trying to use that to debunk someone's claims about pedal cars.
 
Nov 5, 2019 at 6:35 AM Post #1,410 of 2,146

I kind of dislike her for no reason, but what she mentions does exist, and it's even worse because plenty of other effects piggyback on those general behaviors (I've read the books by the dudes who did the studies, dreaming that it would change my own behavior (of course it didn't), so I'm 12% expert myself now).
but here is the catch: sometimes we're dealing with facts, and they're either correct or they're not. having 2 sides arguing about them doesn't change the facts themselves (that would really suck). like when crossfeed is such an obvious improvement according to someone, but in practice only a minority of people stick with crossfeed after trying it. you know the way most people behave when something is an obvious improvement for them. ^_^

but you're right, I had already given up (twice, I believe...), and should have stuck to that.
 
