
To crossfeed or not to crossfeed? That is the question...

Discussion in 'Sound Science' started by jasonb, Oct 21, 2010.
  1. Hifiearspeakers
    Great reply! And I concur with all of it. But I wasn’t actually calling for him to get flagged by a moderator, because I really don’t care. They’ve been at this for years so it’s just trite now. All I really wanted was to point out the hypocrisy of it all.
     
  2. castleofargh Contributor


    You need to extrapolate a little from what he says, because he's talking in the context of binaural recording, but at least in my case his point about frontal localization being set by EQ worked very well. (He has another video on how he does it step by step, but it's basically finding the EQ by measuring an equal-loudness curve of one speaker right in front of him and then the equal-loudness contour of his headphone/IEM; a rough sketch of that idea follows after this post.)
    I specify that it works great for me because, contrary to what he seems to suggest, it's not a sure thing. There is a non-negligible portion of the population that will never get proper (or even subjectively realistic) frontal localization for mono sounds on headphones. I'm not sure about the cause, but it has been demonstrated in a few studies (the only reason I even know about it). One of my guesses would be that those people's brains rely on sight even more than the average person's (sight is already the dominant reference), so if a sound source isn't visible, their brain simply refuses the possibility that the source is somewhere in front.

    But for the rest of the population, who can imagine a virtual sound source in front, a mono signal has no localization cues except elevation cues from torso, floor, and outer-ear reflections. The outer-ear boost, which sits at a different frequency depending on the incoming vertical angle, is logically the most important cue, as it's the only one that follows head movements. That's a well-known mechanism (even if I had no clue about it until maybe two years ago), and there are some funny videos where people fill their outer ears with plasticine or whatever and then have to guess where a sound is coming from with their eyes closed. It clearly isn't the only reason people don't place mono sounds right in front all the time, but it's probably the most common one.
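    For readers who want to try the EQ-by-equal-loudness idea above, here is a minimal sketch of the arithmetic, assuming you have already noted, band by band, the level at which the frontal speaker and the headphone each matched a fixed reference loudness. The band centres and dB values below are placeholders for illustration, not measurements from the video.

```python
import numpy as np

# Hypothetical per-band levels (dB) at which a frontal speaker and a
# headphone were each judged equally loud against a fixed reference tone.
# These numbers are placeholders for illustration only.
bands_hz = np.array([250, 500, 1000, 2000, 4000, 8000, 12000])
speaker_contour_db = np.array([60.0, 59.0, 58.0, 55.0, 50.0, 52.0, 54.0])
headphone_contour_db = np.array([60.0, 60.0, 57.0, 52.0, 46.0, 55.0, 57.0])

# Where the headphone needed MORE drive than the frontal speaker to reach
# the same loudness, it is relatively weak there, so it gets a boost;
# where it needed less, it gets a cut. The EQ is simply the difference
# between the two contours.
eq_db = headphone_contour_db - speaker_contour_db

for f, g in zip(bands_hz, eq_db):
    print(f"{f:>6} Hz: {g:+.1f} dB")
```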
     
  3. 71 dB
    I don't get this claim at all. Zero interest in the facts? What? My disagreements with other people are not about ignoring facts; they are about what is objective and what is subjective, about terminology and semantics. If I have zero interest in facts when I talk about only ILD with crossfeed (while admitting other spatial parameters exist, but are in my opinion insignificant in this context), then audio engineers using amplitude panning are AS DAMN IGNORANT as I am, because amplitude panning is ILD (and not even frequency-dependent) and nothing more! Just as audio engineers admit amplitude panning doesn't equal binaural panning, I admit crossfeed doesn't make binaural recordings. I have learned here that spatial hearing is more subjective than I previously thought, and such learning is a sign of interest in the facts! So where does this "zero interest in the facts" come from? Name one fact (relevant to the topic) that I am not interested in.
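    For anyone unsure what "amplitude panning is ILD and nothing more" means in practice: a standard constant-power pan law only scales the two channel gains, adding no time difference and no frequency-dependent shaping. A minimal sketch of a generic pan law (not any particular console's or DAW's):

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Pan a mono signal between two channels using a constant-power law.

    pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    Only the two channel gains change: no time difference and no
    frequency-dependent shaping, i.e. a level difference (ILD) only.
    """
    angle = (pan + 1.0) * np.pi / 4.0        # map [-1, 1] onto [0, pi/2]
    left = np.cos(angle) * mono
    right = np.sin(angle) * mono
    return np.stack([left, right], axis=0)

# Example: a 1 kHz tone panned halfway to the right.
fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000.0 * t)
stereo = constant_power_pan(tone, 0.5)
level_diff_db = 20 * np.log10(stereo[1].std() / stereo[0].std())
print(f"Inter-channel level difference: {level_diff_db:.1f} dB")
```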
     
  4. ironmine
    Does anybody know when I can buy these microphones?

    [image]
     
  5. castleofargh Contributor
    The perception of a position (real or virtual) is directly dependent on the listener = subjective
    When you use crossfeed, you should set it up for your own head = subjective
    When different people try some crossfeed system, they get different impressions; some find it convincing or preferable, some do not (in fact, AFAIK, a majority of those who try do not) = subjective
    Most people prefer speaker playback, but some actually prefer headphone playback = subjective
    You decide that a certain impression of placement is an improvement based entirely on your own preferences = subjective
    You cherry-pick the variables you consider most relevant in a speaker simulation and dismiss variables that are definitely involved in speaker playback, all based on your personal opinion, convenience, and impressions = subjective
    You declare that some form of approximation in the EQ you apply as "ILD" is good enough, not based on a study, not based on trials, but based on how you feel about those approximations = subjective

    All that and more has been explained at length, many times, yet the next day you're back with some BS concept of improved spatiality from crossfeed based on objective ideas, despite the fact that the very concept of a perceived space is a subjective interpretation of sound and other senses.
    Yes, semantics is a problem: objective stuff is objective, and a listener's ILD is something specific, not a one-band filter. Calling that ILD is a mistake, and of course panning is not ILD either; that's part of what gregorio has been saying all along, along with the fact that you rely on references for albums that never cared about them in the first place. There was no correct spatiality, only subjective trickery that hopes to feel nice.
    All along you've been mistaking an objective approach for pseudoscience, IMO. When I look at a bird flapping its wings to take off and I try to move my arms like it does, should that count as an objective approach? Is it normal for me to declare that I've made an objective improvement toward flying? I'm of the opinion that an objective approach would also consider the other relevant variables related to a bird flying, and not just stick three feathers on each arm and go "yet another clear objective improvement!". For each variable you write off as less important, or as important but something we can still make progress without, or as something that should be a certain way but can't be measured, so you stick to the approximation you consider good enough because you've never tried better, and so on, you have no research to tell you the psychological impact of those decisions about missing or inaccurate variables. All you do is guess or rely on your personal impressions (which, for the last time, is not how objective approaches work). To answer those questions we'd need studies that probably will never exist, because there is a lot to research and crossfeed is clearly outdated compared to what is being explored today.
    So from there you have two options. Stick to having three feathers on your arms and call it an objective improvement toward flight; in some disturbed way I can accept that point of view, even if it won't give us a take-off anytime soon and everybody knows it. Or actually care about having conclusive evidence before making claims of objective anything or improved anything for everybody, and simply admit that you, and I, and at his own level gregorio, don't have definitive answers for any of this, beyond saying that it is not reality, it is not default headphone playback, it is not binaural, and it is not virtual speakers on headphones. So whatever conclusive facts we have about the models we know very well (virtual speakers not so well yet) may or may not apply strictly to crossfeed playback. Once we're there, how one feels is... you've guessed it, subjective!
    Then you can go around, and if a majority of people start saying that crossfeed improved their experience subjectively, we'll be able to conclude that crossfeed is a subjective improvement for headphone playback in general. But even that isn't in line with the facts, as in practice only a minority of people stick with crossfeed, and even among those, many only want it for specific tracks/albums. So, objective or subjective, your claims of improvement should have been strictly limited to your personal impression and opinion, instead of you getting mad when we didn't accept the claims of improvement as fact.

    I never found that crossfeed was doing anything for me "spatially". If I switch it ON and OFF, sure, but nobody does that all the time. In my case (maybe others'? IDK), after a few minutes of using it the instruments are back, in my mind, to where they were without it (or very close; I couldn't tell the difference), to the point that I often don't know whether crossfeed is ON after a while. Such a great and obvious improvement that I have to check whether it's ON... not a good sign. I loved crossfeed because over a long listening session (like a long trip) I felt it was less tiring for me/my ears. I know some people feel the same, and some don't. And we're back to subjectivity. Did we ever really leave? I don't think so.


    As for your request, I agree that saying you have zero interest in the facts is wrong. Would you find it more accurate if I said that you're interested in the facts and then find excuses to ignore them anyway if they don't serve your crossfeed overlord?
    Because, once more, I've never seen you act that way on any other topic.
     
  6. gregorio
    1. I agree, but we also have to be careful with any "extrapolation", because "the context of binaural recording" differs from the context of non-binaural commercial stereo recordings in TWO regards. Firstly (and obviously), binaural recordings capture the sound which enters the ears (in his case, the sound which actually impacts the ear drums), thereby including an HRTF, which a standard stereo recording obviously doesn't. Secondly, the sound being recorded is a single aural perspective of a single actual/real acoustic environment (a concert hall); this also is not the case with commercial standard stereo recordings, which contain either multiple different aural perspectives of a single real acoustic environment (in the case of classical and other acoustic genres) or multiple different aural perspectives of multiple different acoustic environments (in the case of popular/non-acoustic genres), neither of which can exist in reality. So applying any sort of HRTF, even a theoretical, fully characterised HRTF, to commercial standard stereo music recordings is still always going to rely on an individual's perceptual interpretation.

    2. Certainly, it is well known (and science has clearly demonstrated) that sight is a very significant/dominant sensory input used by the brain in the construction of the perception of sound/hearing. Obviously, with headphones or speakers we have a conflict of sensory input: even ignoring the conflicting acoustic information in commercial music recordings, we would be hearing, say, a concert hall but seeing our sitting/listening room. For this reason, some people find it beneficial to close their eyes and eliminate this conflicting visual information. However, this is only beneficial (a perceived improvement) for some; for others, their brain still knows that they're in their sitting room even though they can't see it, and this "knowledge" ingredient in the construction of perception is still enough (for some) to dominate or at least affect their perception. In other words, even if standard stereo recordings were a single aural perspective of a single/real acoustic environment and we applied a theoretically perfect set of HRTFs (for each individual), it still wouldn't work for everyone, although it would most likely work for more people (nearly everyone).

    Ah, that's where we're going wrong: I've been naming several/many relevant facts you're not interested in, rather than just one! :)

    G
     
  7. ironmine
    I'm trying to use this website http://recherche.ircam.fr/equipes/salles/listen/index.html to make an individual crossfeed for myself.

    I listened to the demo sounds and found that model #1012 gives me a very realistic sound: when the sound is supposed to pass from left to right in front of me, I really hear it passing in front of me. With the other model heads, the sound at that moment tends to go up and then down.

    The description here is very confusing:
    "azimuth in degrees (3 digits, from 000 to 180 for source on your LEFT, and from 180 to 359 for source on your right)"

    Should it not be the other way around? I think there is a mistake in the description and it should read "from 000 to 180 for a source on your RIGHT, and from 180 to 359 for a source on your LEFT" (see the conversion sketch after this post).

    When I google "KEMAR + Head = Azimuth", I see pictures of this kind:

    [image]
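    As 71 dB answers a few posts down, typical KEMAR diagrams count azimuth clockwise while the IRCAM set counts counterclockwise. If so, converting between the two conventions is just a mirror around the front-back axis; a small illustrative sketch (the function name is made up):

```python
def ccw_to_cw(azimuth_deg):
    """Mirror a counterclockwise azimuth (0-180 = source on your left,
    180-359 = source on your right, as in the quoted IRCAM description)
    into a clockwise azimuth (0-180 = source on your right, as in typical
    KEMAR diagrams). 0 degrees stays straight ahead either way."""
    return (360 - azimuth_deg) % 360

print(ccw_to_cw(90))   # 270: "90 deg to your left" expressed clockwise
print(ccw_to_cw(270))  # 90:  "90 deg to your right" expressed clockwise
```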
     
    Last edited: Nov 4, 2019
  8. 71 dB
    Subjective or not, I simply think headphone sound as it is is completely wrong and doesn't make sense, because it's spatiality meant for speakers, and speakers reproduce spatiality TOTALLY differently from headphones. From my perspective this FACT is ignored by others here. Subjective or not, I have a hard time believing large ILD values at low frequencies can be natural to anyone. How is that possible? Our brain learns spatial cues based on what we hear in everyday life, and large ILD values at low frequencies aren't something we hear a lot. Anyone can put binaural recording mics in their ears, record sounds in their daily life and analyse the ILD. This should not be something I have to fight over. I totally get that people are different, but how different can people be? It doesn't make sense that someone has elephant or cat hearing, because we are humans; we should have somewhat similar hearing. Our spatial hearing is based on learning the connection between the spatial cues and the visual information about the sound source. How can such a process develop totally different spatial hearing in different people? Makes no sense! This must be about personal preferences rather than the science of spatial hearing: I still believe crossfeed is a step toward spatial information that makes more sense scientifically (because headphone sound as it is often makes very little sense spatially), but people have preferences and expectations which are not met for everybody by crossfeed.

    I have said many times that after switching crossfeed ON the sound image seems to narrow a bit (because spatial hearing reacts to the sudden change of ILD scaling), but after a minute it goes back. That's spatial hearing adapting. In fact, I believe that's spatial hearing adapting to spatial cues that make sense, while normal headphone listening means adapting to spatial cues that don't make sense. To me the difference is not in the width, but in how natural the sound image sounds. At least, this is what crossfeed does for me.

    Good enough is one thing, improvement is another thing. Nothing is good enough. People want perfection and can never have it. I'm not after perfection. I am a realist. I'm happy about improvement, small or big. That's why I can enjoy the improvements crossfeed gives to my ears.

    In other topics I don't have the problem I have here. People who have studied digital audio, for example, share facts with me, and there is a clear division between people who understand digital audio and those who don't. In this topic it seems different. Somehow crossfeed seems difficult to understand even for those who know a lot about spatial hearing. I look at crossfeed from the angle of what it does and achieves, while other people look at it from the angle of what it doesn't do or achieve. I believe this is because my opinion is that headphone spatiality is completely wrong, such a mess that almost anything is better than nothing. From my point of view people don't take the problem of headphone spatiality seriously enough, and even I was simply used to it as it is before having my "eureka" moment in 2012. Speakers in a room can't produce nonsensical spatial cues for listeners, but headphones can! Do we want nonsensical spatiality? If we do, and it is artistic intention, then clearly speakers (without crosstalk cancelling) are no good. If we don't want nonsensical spatial cues, then headphones are no good unless we use something that turns the nonsensical spatial cues into something that makes sense. I think this reasoning is called for, even if a lot of subjectivity is part of the equation.

    I am totally cool with crossfeed not meeting someone's personal preferences, but the way my reasoning and factual background have been questioned is unfair. Maybe there have been excuses on both sides? I have excuses to "ignore" some facts that don't support crossfeed, but other people have excuses to ignore the facts that do support crossfeed.
     
  9. 71 dB
    I see it like this. Speaker spatiality = bird. Headphone spatiality (as it is) = injured bird that can't fly. Headphone spatiality (with crossfeed) = injured bird that has been taken care of and has a "fixed" wing, so that it can fly somehow, not as well as it could before the injury, but it can fly nevertheless.
     
  10. 71 dB
    I'm not interested? I remember asking for your calculations about what crossfeed does to ITD, but I have seen nothing. If not calculations, how about some sort of explanation of why ITD renders crossfeed useless? All I hear from you is that other parameters exist (yeah, I know; I studied spatial hearing at university), but no analysis of how these parameters affect crossfeed. I have given my take on the role of ITD in crossfeed:

    If speakers give a 30° angle, and headphones give a 40° angle without crossfeed and a 25° angle with crossfeed because the ITD values are scaled down a notch, then ITD is hardly a problem. Acoustic crossfeed, early reflections and reverberation all affect ITD with speakers, and all of that is missing with headphones, which should be a not-so-ideal situation. However, I believe our spatial hearing is more tolerant of ITD "errors", because all kinds of reflections shape ITD, so our spatial hearing is used to ITD values that are a bit off, while the laws of physics dictate what kind of ILD values are possible, so errors in ILD are in my opinion more serious. Crossfeed is ILD scaling, and in my opinion that's justified for this reason (a generic sketch of this kind of ILD scaling follows after this post). That's why concentrating on ILD is justified. It is not ignoring facts; it is concentrating on the relevant things. Not ignoring the other facts is what makes it possible for me to know they are somewhat irrelevant in this context.

    I am totally fine discussing the facts, but most of what I get is claims that I am not interested in facts. Crazy.
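    For readers who have never looked inside a crossfeed filter, the usual recipe is to mix a low-passed, attenuated copy of each channel into the other one, which shrinks the inter-channel level difference mostly at low frequencies. Below is a minimal sketch of that idea with made-up corner frequency and feed level; it is a generic illustration, not 71 dB's design or any commercial implementation, and it adds no explicit ITD beyond the small phase shift of the filter.

```python
import numpy as np
from scipy.signal import butter, lfilter

def simple_crossfeed(stereo, fs, cutoff_hz=700.0, feed_db=-8.0):
    """Feed a low-passed, attenuated copy of each channel into the other.

    stereo: array of shape (2, n_samples). The cutoff frequency and feed
    level are illustrative defaults, not any particular product's values.
    """
    b, a = butter(1, cutoff_hz / (fs / 2), btype="low")  # 1st-order low-pass
    gain = 10 ** (feed_db / 20)                          # dB -> linear
    left, right = stereo
    left_to_right = gain * lfilter(b, a, left)
    right_to_left = gain * lfilter(b, a, right)
    return np.stack([left + right_to_left, right + left_to_right], axis=0)

# Example: a hard-panned 200 Hz tone ends up with a much smaller ILD.
fs = 44100
t = np.arange(fs) / fs
hard_left = np.stack([np.sin(2 * np.pi * 200.0 * t), np.zeros_like(t)], axis=0)
out = simple_crossfeed(hard_left, fs)
ild_db = 20 * np.log10(out[0].std() / out[1].std())
print(f"ILD after crossfeed: {ild_db:.1f} dB (it was effectively infinite before)")
```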
     
  11. 71 dB
    KEMAR is clockwise, IRCAM is counterclockwise.
     
    Last edited: Nov 4, 2019
  12. jaakkopasanen
    I have listed some in the Impulcifer wiki: https://github.com/jaakkopasanen/Impulcifer/wiki/Measurements#microphones
    I'm using the soundprofessional ones for HRIR measurements, but I'm starting to suspect that the ear canal needs to be properly blocked for reproducible results. That should be possible with an earplug when using the soundprofessional mics, which are not ear-canal blocking themselves.
     
  13. sander99
    Just to let interested readers here know: I reacted to the above post in another thread (about Impulcifer).
     
  14. gregorio
    No, most of what you get IS the facts, but as you keep ignoring/dismissing them, that's OBVIOUSLY WHY you keep being told you're ignoring the facts! This is obvious to everyone except you, so who's "crazy", you or everyone else? For example:
    1. You've had countless explanations of how other parameters affect perception, of why your calculations and explanations are inapplicable because we're not dealing with real/natural spatiality in the first place, and most recently a video posted by castleofargh, but you ignore them all, or ... 1a. You have "a take" that dismisses them! How many times?
    Nope, the bird still can't fly; you're imagining it! You're free to believe/imagine whatever facts you want, even if your beliefs are dictated by your agenda, and no one is saying you should stop using crossfeed if that's your preference, but in this sub-forum you cannot make false statements of fact (no matter how strongly you believe them) without being refuted, and then strongly refuted if you keep repeating them! How many times?

    G
     
  15. 71 dB
    1. We are not dealing with real/natural spatiality? I kind of agree. So there is no natural spatiality to be messed up. If crossfeed makes the sound appear more natural, what's the problem? I watched the video; it is quite a Finnish study (Tapio Lokki and Ville Pulkki are mentioned), so I am not ignoring it. The video doesn't say crossfeed can't improve headphone audio; it deals with different things than what crossfeed does.

    2. I wasn't the one who brought birds into this! It's pointless to argue about whether imaginary birds can fly. This is lunacy! People talk about birds to prove me wrong, and when I try to defend myself, this happens! **** with the BIRDS!!
     
    Last edited: Nov 4, 2019
