
Frontal sound AND correct frequency response with EQ only.

Discussion in 'Sound Science' started by abm0, Jun 23, 2017.
  1. bigshot
    Sorry, I didn't define my term I guess... A secondary depth cue is one that is recorded into a song... reverb, room reflections in the recording venue, etc. Primary depth cues are ones that you hear directly in real life... location of the speakers, distance from the music being affected by the room acoustics of your listening room, perceiving location through moving your head, etc.

    A voice spans multiple octaves and consonant sounds can go even higher. An imbalance that made a vocal sound quieter and more distant would be so big, the drums wouldn't sound right, the guitar wouldn't sound right, nothing in the range of frequencies spanning the human voice would sound right. If it was just a part of the range that was affected, it would sound like the singer wasn't pronouncing consonants or the voice would sound thin or sound thick and muddy. When I turn the volume down on my stereo, it doesn't sound further away. It just sounds quieter. In order to sound distant it would need either secondary depth cues or actual physical primary ones. A balanced frequency response is a great thing. It makes the sound full and strong and sparkling all at the same time. It eliminates masking of frequencies and can reveal detail, but I really don't see how it affects soundstage, especially in headphones that really don't have any soundstage to speak of.
     
    Last edited: Jun 23, 2017
  2. abm0
    Well, in principle you need the original recording to be reproduced accurately before applying any effects that attempt to turn the headphones' sound into "you're hearing this from frontal speakers". So well-balanced headphones should produce better results with such effects/filters. (Griesinger claims the same in his Berlin presentation, where he says in-ears are better for this method because of their typically smoother FR.)
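    As a rough illustration of the "EQ first, spatial processing second" chain described above, here is a minimal sketch using a standard RBJ audio-EQ-cookbook peaking biquad; the center frequency and gain are arbitrary placeholders, not Griesinger's actual correction curve:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq_coeffs(fs, f0, gain_db, q=1.0):
    """RBJ audio-EQ-cookbook peaking biquad: returns (b, a) coefficients."""
    a_gain = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_gain, -2 * np.cos(w0), 1 - alpha * a_gain])
    a = np.array([1 + alpha / a_gain, -2 * np.cos(w0), 1 - alpha / a_gain])
    return b / a[0], a / a[0]

fs = 48_000
# Placeholder correction: +4 dB around 3 kHz -- NOT Griesinger's curve,
# just an arbitrary band to illustrate the order of the signal chain.
b, a = peaking_eq_coeffs(fs, f0=3000, gain_db=4.0)

signal = np.random.randn(fs)        # one second of noise as a stand-in
equalized = lfilter(b, a, signal)   # step 1: headphone EQ
# step 2 (not shown): crossfeed / binaural rendering applied to `equalized`
```

    The point of the ordering is the one made in the post: the frontal-localization filter assumes it is starting from an accurate response, so any headphone correction has to come first.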
     
  3. Strangelove424
    Well, we get no spatial cues from amplitude alone. A small bang from a short distance and a loud one from far away cannot be differentiated on amplitude by itself, so turning down the volume on a stereo won't change any depth cue. Depth cues come, as you mentioned, from reverb, which functions like echolocation to give us an idea of environmental reflections/space, and from L/R timing differences, which our brains have relied on for millennia to measure direction and distance (and which are a big problem for headphones, because the L/R image is permanently stuck to your head: as you move, it follows you). The only other thing I can think of that might give a depth cue is the Doppler effect, but that would have to shift the pitch so badly that, as you mentioned, everything would sound odd. And it would have to keep shifting. There are going to be some small deviations in frequency response between every headphone, though, and my hunch is that the soundstage differences often occur when people switch back and forth between headphones quickly, exaggerating slight differences of frequency response into a Doppler-like shift. But give that person some time to get used to the slightly new frequency response, and I bet that depth cue goes away very quickly, because it was relative to begin with. The Doppler effect requires frequencies to be constantly shifting.
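    For what it's worth, the L/R timing cue mentioned above can be put into numbers. A common textbook approximation is Woodworth's spherical-head model, ITD ≈ (r/c)(θ + sin θ) for a far source at azimuth θ; the head radius and speed of sound below are generic textbook values, not anything measured in this thread:

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference (seconds) for a distant source.

    Woodworth's spherical-head model: ITD = (r/c) * (theta + sin(theta)),
    for azimuths from 0 (dead ahead) to 90 degrees (directly to one side).
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source directly in front produces no timing difference at all --
# one reason the frontal direction is so hard to localize.
print(itd_woodworth(0))    # 0.0
print(itd_woodworth(90))   # ~0.00066 s, roughly the maximum ITD
```

    Note that the model says nothing about distance, only direction, which fits the point above: timing tells you where a sound is, not how far away it is.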
     
    Last edited: Jun 23, 2017
  4. abm0
    Then again, nobody here was talking about overall amplitude, but about relative amplitude between frequency bands ("forward/recessed mids", remember?).

    The importance of reverberation and inter-aural time difference cues may have been greatly exaggerated, see:

    http://www.cns.bu.edu/~shinn/resources/pdfs/2003/2003Aizu_Shinn.pdf
    The individual HRTF matters to spatial realism, and it matters a lot: you can't just set it aside and go on to talk about other cues that also contribute to the overall phenomenon as if they were the only changes that happen to the sound on its path from speaker diaphragm to eardrum.
     
    Last edited: Jun 23, 2017
  5. Strangelove424
    I can't give a wholly intelligent response to that because I don't see a refutation anywhere. I mentioned all the factors you did, but all you gave me in response was some vague insinuation about where emphasis might or might not belong.
     
    Last edited: Jun 23, 2017
  6. bigshot
    Maybe I have a non standard head, but I've never enjoyed listening to binaural recordings. They tend to pop from front to back on me all the time. I can't control it. I remember listening to a track that was a barber's electric razor moving back and forth. It would be in front of me for a while, then snap! it sounded like it was behind me. I struggled to try to control it and keep it in one place, but I couldn't.

    Interestingly, one of the best recordings of depth I ever heard was an Eddy Arnold and the Tennessee Plowboys 78 from the late 40s. They recorded it in a studio with a lot of really clear reflections. It was packed with secondary depth cues. You could tell where every instrument was in the room. It was mono too!

    I should see if I can find it on YouTube.
     
    Last edited: Jun 23, 2017
  7. abm0
    If that were true you could quote for me where you mentioned or even hinted that spectral changes (relative amplitude changes between the frequencies or frequency bands) due to the individual HRTF are important for positional hearing (or localization). Otherwise the above statement is exactly false: you left out the HRTF as if it didn't matter, whereas it might well be the dominant effect in positional hearing, trumping even head tracking (if Griesinger is right).

    I feel (some of) your pain. For me nothing was ever forward, either with binaural recordings or with any crossfeed effects, until I listened to some of the rooms in the Out Of Your Head 2-channel jazz demo (e.g. the third room, where a certain clapping instrument is very clearly forward and at a good distance too, but the rest of the music, sadly, stays in my head) and until I listened to some of my normal stereo music through a personal and headphone-specific EQ curve created with the method above (still imperfect, most of the music is still inside my head, but a few instruments here and there sound forward and a bit down; I have yet to try Griesinger's own binaural recordings, which he claims sound completely realistic for him on earbuds).
     
  8. Strangelove424
    That is a common problem. I've listened to binaural stuff too, and can fool myself into switching front to back if the sound is central. That's when I realized my reference was based partly on visual cues: if I close my eyes, I suddenly have to try really hard. I feel like most of the effect is just an exaggeration of L/R panning, and that binaural mics get most of the effect from the distance between transducers, not the shape of the ears. I hear a lot of gamers using surround simulators complain about the same inability to localize front-rear. And I don't think it's limited to sound reproduction either. I once participated in a science experiment in high school where we blindfolded an individual and then clapped directly in front of and then behind their head, and nobody could guess with greater than 50/50 accuracy where the sound originated.

    I mentioned reflection, frequency response and timing; I just didn't describe them through the shifting anatomical perspective of HRTF. Reverb "may have been greatly exaggerated", HRTF "might well be the dominant effect". Great. Now how does that, or any HRTF model for that matter, explain to someone how to make one thing sound like something else? I know how to make something sound like it's in a large or small, empty or busy space by changing reverb parameters in a studio, or like it's coming straight at you by changing pitch. How am I supposed to do that with HRTF? Where's the applicable model for localization?
     
  9. abm0
    Sure you did. So the answer is "no" - you can't quote yourself saying anything about frequency response changes having a role in all this. That's what I thought. I think we're done here.
     
  10. Strangelove424
    No applicable model? I didn't think I'd get one. Yet you make the claim that HRTF is supplanting traditional localization theories like reverb and binaural timing. I mentioned frequency response/pitch changes causing a Doppler effect. That has application, and a correlation between physics and perception. You can mimic Doppler in a studio by playing with a signal, and you know exactly what you'll get, exactly what the correlations are. Same with reverb. I have yet to see anything of that sort for HRTF, and I certainly won't be holding my breath for it.
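    To be fair, the Doppler correlation being appealed to here really is exact and trivial to compute. A minimal sketch of the classic formula for a source moving straight toward a stationary listener (the speed of sound is the usual textbook value):

```python
def doppler_shift(freq_hz, source_speed_ms, speed_of_sound=343.0):
    """Observed frequency for a source approaching a stationary listener:
    the classic Doppler formula f' = f * c / (c - v)."""
    return freq_hz * speed_of_sound / (speed_of_sound - source_speed_ms)

# A 440 Hz tone on a source approaching at 30 m/s (~108 km/h):
print(round(doppler_shift(440, 30), 1))   # 482.2 Hz
```

    That one-to-one physics-to-perception mapping is exactly what the post says is missing for HRTF.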
     
  11. castleofargh Contributor
    the brain uses whatever it can consistently rely on. HRTF being a result of the body is of course a big part of it.
    reverb is something the brain deals with very well, to the point where it can remove much of it fairly fast if it's consistent. but aside from telling us if we're getting close to a wall, or giving some really vague idea of the size of the original room, I'm not sure we rely on it for sound position. ILD, ITD and how our body shapes them at each frequency seem much more relevant to how we locate sounds. and that goes hand in hand with HRTF, because we don't have a choice.
    anyway the video and the main idea are for the frontal center image. there is no claim about the rest. the speaker is at 0° on both axes, right in front of us, so there is no panning other than possible hearing imbalance (and he lets us set that). and no ITD to deal with. so adjusting the frequency response at this specific position is mostly like applying the HRTF data for that angle. so the idea isn't bad.
    the problem is that it says nothing about every other position, where again we may rely on some model for binaural, or on stereo involving speakers at 30° for the rest of the music.
    also, because it's such a dead angle where we can't rely on most directional cues (unless we move our head!!!!!), it's real easy for the brain to fail a little on stuff like distance or altitude. in fact, as we are supposed to see something when it's in front of us, not seeing it might tell the brain that it's up or down? aside from that, all we have is the change in response from hitting the outer ear, and we're back to mainly a FR change and Mr Griesinger's idea.
    for the front/center image, I agree with him, and my poorly done attempt, as I said, worked really well at moving the singer off my forehead and putting him back at the right altitude. my own concern is about every other position, but wrong for wrong while using headphones, at least the center is ok like that.

    do I make sense? ^_^
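    The "FR change at 0° is mostly the HRTF data for that angle" idea above is the core of any HRTF renderer: convolve the mono source with a per-ear impulse response for the desired angle. The impulse response below is a made-up placeholder, not measured HRTF data (a real system would load measured responses, e.g. from a SOFA file); it only illustrates that at 0° azimuth both ears see nearly the same response, leaving spectral shape as the main remaining cue:

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Core of HRTF-based rendering: convolve a mono source with the
    impulse response for each ear at the desired angle."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Placeholder HRIR standing in for measured data. For a source dead
# ahead, both ears get (nearly) the same filter, so the rendered
# channels are identical -- no ILD, no ITD, only a spectral change.
hrir_front = np.array([0.9, 0.1, -0.05])
stereo = render_binaural(np.random.randn(1000), hrir_front, hrir_front)
```

    Which is why, for the frontal position specifically, a plain EQ can stand in for the full HRTF treatment: with identical channels there is nothing left for the renderer to do except shape the spectrum.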
     
  12. abm0
    Yes, Griesinger also mentions this somewhere: in the absence of corresponding visuals, frontality is supposedly binary ("near" vs. "far", with no other gradations of distance). Reading this made me moderate my expectations and just judge the result on whether it seemed outside of the head or not, without expecting amazing distances or depth precision ("OMG, it's RIGHT THERE!").

    And this, I think, should be seriously considered in the more general discussion: modern science rests a lot on separating things and testing them in isolation, so we see, for example, blind listening tests used to determine whether this or that audio technique "really works". But when you think about it, the real-life experience is never split up like that: you always have many cues, including visuals, telling you "what you're hearing" (think of lip-reading in conditions of bad audibility or even deafness for an extreme example). So at some point we might have to conclude that full sonic realism using just headphones is simply not achievable, no matter the recording technique or the post-hoc sound processing algorithm.

    That could be a better conclusion than using blind tests to dismiss the value of this or that technique - audio processing algorithm X may very well be perfectly well founded in science, perfectly executed technically, and just a genius idea for what it's supposed to do, yet still not achieve Perfection simply because of how human perception works, how tightly we integrate the information coming from different sense organs in forming our final perception. (So in the future we might end up saying that the best music listening rooms always have to have a screen presenting the actual artists, otherwise the experience is always lacking. :relaxed: Or rather a VR headset, if we're talking far enough into the future. :p )
     
    Last edited: Jun 24, 2017
  13. castleofargh Contributor
    well having speakers on my table funnily enough feels like it helps me getting more fooled by my crossover settings using the headphone. but of course I have set it all trying to place the sound where the speakers are in the first place, so chicken and the egg. but yeah, seeing the artist in front of me would certainly make the headphone experience more realistic ^_^.
     
  14. abm0
    The way I'm getting this with the "Griesinger tuning" is that when starting a recording and hearing something that sounds forward, central and lower than my head I sometimes get the sensation that my phone is vibrating as if the music was coming out of its speaker somehow, then 1-2 seconds later I re-convince myself that it's actually coming from the headphones, that there's no way it would sound that "large" coming directly from the phone. :) Or at the office a few times I got suddenly worried that I was bothering people around me by turning it up too loud, since it sounded out-of-my-head/environmental in so many ways, then I got the urge to take one earpiece off (this is with the Koss KSC75) or fiddle with the volume to re-convince myself that I wasn't really bothering anyone. :D And this is even without all the sounds appearing to come from a forward source - there's still a generalized "external" quality to the sound regardless.

    BTW, I've had a chance to listen to Griesinger's binaural recordings from Cologne last night, with my latest careful tuning (Tannoy Reveal 502 near-fields + Koss KSC75) and I can say this seems to be the type of recording that gives the most credible results with his method. (Even though I have lingering uncertainties because I never listen to classical music and certainly never go to classical concert halls, so I don't have the necessary experience to judge reproductions of such a setting.) Now listening to standard stereo recordings using the same EQ profile is suddenly less impressive than it was a day ago. :relaxed:
     
    Last edited: Jun 24, 2017
  15. bigshot
    I have very good speakers and a great room. They sound absolutely nothing like headphones, and they sound a lot better than setting some little dinky near-field speakers on a table and hunching in next to them. This is more a test of how to make speakers sound like headphones than the other way around.
     