Frontal sound AND correct frequency response with EQ only.
Jun 23, 2017 at 8:25 AM Thread Starter Post #1 of 55

abm0

To reiterate something I mentioned in another subforum (because really it deserves its own thread here):

Dr. David Griesinger claims to have a method that achieves frontal localization and FR correction using equal-loudness EQ-ing (based on 1/3-octave noise bands): first you tune a calibrated speaker set up in front of you for equal loudness across bands, then you do the same for your chosen headphones, and the difference between the two resulting EQ curves is what you finally use to correct your headphones' response:


More details + presentations explaining the principles behind this are available in the first two 2017 updates on his website.


I've tried the method myself. I still have to work on it, because I'm having trouble comparing the loudness of spectrally distant frequency bands, but I can already say I've heard more frontal placement of virtual sound sources than with any (freeware/demo) crossfeed or binaural recording I've ever tried before. This is with my Koss KSC75 as the target headphones - I started with on-ears because Griesinger states at one point that the method works better with in-ears and on-ears than with over-ears.
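For anyone curious how the 1/3-octave noise bands used in the loudness matching might be generated, here's a rough Python sketch. This is not Griesinger's actual tool - the brick-wall FFT filtering, the function name and the parameters are all my own illustration:

```python
import numpy as np

def third_octave_noise(center_hz, fs=48000, seconds=1.0, rng=None):
    """White noise band-limited to a 1/3-octave band around center_hz.
    Filtering is done by zeroing FFT bins outside the band (a crude
    brick-wall approach; real tools would use proper band filters)."""
    rng = np.random.default_rng() if rng is None else rng
    n = int(fs * seconds)
    noise = rng.standard_normal(n)
    # 1/3-octave band edges: center / 2^(1/6) and center * 2^(1/6)
    lo, hi = center_hz / 2 ** (1 / 6), center_hz * 2 ** (1 / 6)
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    band = np.fft.irfft(spec, n)
    return band / np.max(np.abs(band))  # normalize peak for playback
```

You'd play such a band through the speaker (or headphones), adjust its level until it sounds as loud as a reference band, and record the adjustment as that band's EQL value.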
 
Last edited:
Jun 23, 2017 at 9:32 AM Post #2 of 55
For guys like me whose center image is almost never horizontal on headphones/IEMs (for me it's usually somewhere on my forehead or even above), it is a very effective method for solving that specific issue. That much I have tested and can confirm. The question is more about deciding that this is the direction our headphones should be calibrated in.
And then come the questions about the reference. I tried without any correction on my speakers because I didn't know how to include it (I should buy a miniDSP or something like that some day) ^_^, and it still went very well as far as center position is concerned on my HD650. But I can't really say I was in love with the signature. So I'll have to put some more time into this and see whether it's a calibration matter (the speakers, or simply that I suck at matching frequencies), or about the binaural material I use (after all, some use a dummy head and some don't, so that's one more FR compensation to think about), or whether it's simply not what I like (I do have old habits when it comes to headphone signature).

But it's a great yet simple idea for that center issue, and I hate myself for not having had it first a long time ago :)
 
Jun 23, 2017 at 10:30 AM Post #3 of 55
I tried without any correction on my speaker because I didn't know how to include it
Include it in the final response? My understanding is that if you define it with phone=6 and eq_only=1, it will be automatically applied to the test tones used when tuning phone=0 for equal loudness, and also in generating the final response.

I'll have to put some more time on this and see if it's a calibration matter(speakers, or simply that I suck at matching frequencies)
Well, the idea seems to be that - aside from reproducing the HRTF-dependent qualities of the reference sound - what the final compensation does for you is remove your headphones' timbre (FR imperfections) and replace it with the timbre of your speakers+room. If the latter is satisfying to you, the final corrected sound through the headphones should ideally be as well. Speaker calibration, I think, matters more for the HRTF/positional correction than for the timbral correction: we adapt to, and can learn to enjoy, a variety of timbres; it's the HRTF-dependent positional stuff that's really problematic, and it's probably that which requires your speaker+room response to be free of excessive hills and valleys.

or about the binaural material I use(after all some use a dummy head, some don't
About that: I misunderstood the method and the claims initially. On thinking about it more, I realized that this method incorporates into the final correction all of the (symmetric) spectral effects of listening to a speaker in a room - everything that happens to the FR in that case but is missing with headphones - so it includes a lot of the sonic changes due to your head and ears (except asymmetric phenomena) and is therefore a sort of "binauralizing" effect. As such, its primary target should be regular stereo recordings, not standard binaural recordings (the ones created with a dummy head and reproducing all of that head's effects). The reason Griesinger recommends trying it out with his own binaural recordings (linked in one of the pptx documents) is that he has his own special recording setup, in which he eliminates some of the effects of the dummy head in order to achieve a "universal" sound, and this is what he says works beautifully in combination with a proper personalized compensation curve derived per the equal-loudness method above. (I got this from reading his latest presentation, the Berlin one.)
 
Jun 23, 2017 at 11:46 AM Post #4 of 55
Oh, it's Griesinger. I have a lot of deep-running disagreements with most of his work. Maybe I would take to this one better if I read the actual paper, but I don't have a copy, and I don't quite agree with what is on the slides.
 
Jun 23, 2017 at 1:01 PM Post #5 of 55
Does he say how he arrives at the perfectly calibrated speaker system? I guess that's easier with near field speakers, but near field speakers are more like headphones than they are a typical loudspeaker setup. I don't know why one wouldn't just use tone sweeps in the headphones themselves to arrive at an equal loudness EQ, rather than calibrating speakers so you can calibrate your cans. I'll watch the video when I get a chance.
 
Jun 23, 2017 at 1:25 PM Post #6 of 55
My understanding is that if you define it on phone=6 and with eq_only=1 it will be automatically applied to the test tones used when tuning phone=0 for equal loudness and also in generating the final response.
Oops. Actually, it gets into the final response simply through the way it changes the decisions you make when EQL-tuning your speakers while they have it applied. It would make no sense to apply your speaker calibration EQ curve directly to your headphones. :p

I don't know why one wouldn't just use tone sweeps in the headphones themselves to arrive at an equal loudness EQ, rather than calibrating speakers so you can calibrate your cans.
Ah, but the point here is not to calibrate your cans (toward an equal-loudness tuning, which sounds dull and has zero "binaurality" in it unless everything's already done in the recording you'll be listening to). You first calibrate your speakers so they're as good a reference as you can possibly have (flat response as measured with instruments). Then you create an equal-loudness tuning for the speakers based on how your ears perceive loudness. Then you do the same for the headphones, and in the end what you apply to the headphones is the EQL tuning for the headphones minus the EQL tuning for the speakers. If you think about what influences go into each of these tunings, you will realize that the final correction:
1. removes your headphones' FR imperfections,
2. adds the speaker+room FR particularities (insofar as those differ from what you call perceptually flat),
3. adds all the other (symmetric) frequency effects of listening binaurally, with your personal ears, head and shoulders, to a small-ish source placed in front of you.
This should work perfectly when listening to stereo recordings that reproduce only the "leftover" binaural cues the EQ method can't capture, and it should also work pretty well with simple stereo recordings.
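To make the subtraction concrete, here's a tiny sketch with made-up band levels - the band centers and dB values below are purely illustrative, not anyone's measured curves:

```python
# Hypothetical equal-loudness levels, in dB of boost the listener dialed
# in at each 1/3-octave center, once for the speakers and once for the
# headphones (all values invented for illustration).
centers = [250, 500, 1000, 2000, 4000, 8000]            # Hz
eql_speaker   = [2.0, 0.0, -1.0,  3.0,  6.0, 4.0]       # dB
eql_headphone = [5.0, 1.0, -2.0, -1.0, 10.0, 2.0]       # dB

# The correction applied to the headphones is the headphone tuning minus
# the speaker tuning: it cancels the headphones' own FR quirks and
# re-imposes the speaker+room (and HRTF) response on top.
correction = [h - s for h, s in zip(eql_headphone, eql_speaker)]
```

Each entry of `correction` is then the gain for that band in the final headphone EQ.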

near field speakers are more like headphones than they are a typical loudspeaker setup
Yes, this is one thing that bothers me too: it seems like if everything goes perfectly, his method would tend to move the sound clearly out in front of me - which is good - but also scrunch the soundstage down to the size of a small speaker, with zero stereo separation - which would be a disaster. I think this is because the speaker tuning is made equal on both sides rather than being corrected for ear imbalances like the headphone one. The actual experience I wish I could replicate is that of listening to stereo speakers placed at +30 and -30 degrees in front of me, i.e. a wide soundstage of 60 degrees. But I think things get much more complicated if that is to be achieved. I suspect I might need to create EQL tunings separately for each ear listening to each speaker alone (with an earplug in the other ear), and use the opposite-side tuning curves to add a crossfeed contribution on top of Griesinger's basic method. But I just came up with this today; it may need more mulling over.
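For what it's worth, the crossfeed half of that idea - mixing an attenuated, delayed copy of each channel into the other, roughly imitating the opposite-ear path of a ±30° speaker pair - could be sketched like this. This is a generic crossfeed, not Griesinger's method, and the gain and delay values are guesses:

```python
import numpy as np

def simple_crossfeed(left, right, gain_db=-6.0, delay_samples=12):
    """Add an attenuated, delayed copy of each channel to the other.
    delay_samples=12 is ~0.25 ms at 48 kHz, in the ballpark of the
    interaural delay for a source 30 degrees off-center (illustrative)."""
    g = 10 ** (gain_db / 20)
    def delayed(x):
        # Shift the signal right by delay_samples, padding with zeros.
        return np.concatenate([np.zeros(delay_samples), x[:-delay_samples]])
    out_l = left + g * delayed(right)
    out_r = right + g * delayed(left)
    return out_l, out_r
```

A per-ear version of the EQL curves could then replace the single flat `gain_db` with band-dependent gains derived from the opposite-side tunings.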
 
Jun 23, 2017 at 1:34 PM Post #7 of 55
Honestly, I've never heard a single pair of cans that didn't run the music in a straight line right through the middle of my head. I've never heard any distance in headphones that wasn't baked into the mix by means of secondary cues like reverb. I keep hearing people talk about soundstage in headphones, but as a loudspeaker person myself, I can only assume that headphone people really have no idea what soundstage actually sounds like. It's distance from the listener to the speakers, meshing of the phantom center in space between the speakers, and height. Headphones are incapable of doing any of that.

Honestly, I can't see how frequency response would affect any of that, since you don't have any natural room reflections or the ability to turn your head to determine distance. Soundstage is all about space and distance, and headphones are clamped over your ears. If you sit close enough to near-field speakers, you can get them to sound pretty much like headphones, because you're sitting so close you eliminate any soundstage. That isn't a good way to improve soundstage in headphones.
 
Jun 23, 2017 at 1:47 PM Post #8 of 55
Honestly, I've never heard a single pair of cans that didn't run the music in a straight line straight through the middle of my head. [...] I keep hearing people talk about soundstage in headphones, but as a loudspeaker person myself, I can only assume that headphone people really have no idea what soundstage actually sounds like.
:laughing: I know what you mean, but I'm also entertaining the hypothesis that some of them do know what soundstage is but they're just used to the way it's talked about in headphone discussions and they just go with it, under the tacit assumption that headphones are always to be compared only to other headphones. :relaxed:
 
Jun 23, 2017 at 1:51 PM Post #9 of 55
Could soundstage in headphones be entirely based on secondary depth cues (reverb, room reflections, phase) baked into the mix and any difference people discern between headphones be attributed entirely to bias or placebo effect? That's my theory.
 
Jun 23, 2017 at 2:00 PM Post #10 of 55
No, that would be saying too much. There are pure and simple FR differences that can account for perceptions people describe in terms of "soundstage shape", e.g. placement of singers relative to the listener (or "stage depth") judged solely on how loud the vocal mids are. But I would agree that some percentage of "soundstage" reviews from less experienced listeners are probably due to simply listening to different recordings through the headsets being "compared". :D
 
Jun 23, 2017 at 2:07 PM Post #11 of 55
Vocals are usually mixed mono right in the middle. Unless one cup of the headphones has a different response curve than the other, it would be very hard to mess that up. It still doesn't at all indicate any sort of depth though. With headphones that depends on how well the secondary depth cues are reproduced. That might have something to do with masking from frequency spikes I suppose, but it would only apply to one individual recording and the specific types of secondary depth cues it contains. It wouldn't apply to any other recording the way speaker soundstage does.

I think the term soundstage is used more in terms of how closed or open the headphones sound. It has nothing at all to do with depth. Whenever I see someone discussing depth of soundstage or sound in front of the head with headphones, I think they're imagining it.
 
Jun 23, 2017 at 2:39 PM Post #12 of 55
Vocals are usually mixed mono right in the middle. Unless one cup of the headphones has a different response curve than the other, it would be very hard to mess that up. It still doesn't at all indicate any sort of depth though.
I meant that if the voice frequencies are noticeably quieter than the rest one could get the impression the singer is "somewhere farther back" and could find themselves describing that as "depth" in some forum post. :relaxed:
Anyway, soundstage as a feature of headphones themselves is not what this topic is about; it's about a method of altering the sound before it reaches the headphones so that it reproduces some sonic characteristics of a real soundstage (primarily the central, forward, out-of-the-head positioning of the source).
 
Jun 23, 2017 at 2:47 PM Post #13 of 55
I'm talking about out of the head sound. I've never heard it where it wasn't just a subjective impression based on secondary depth cues. I don't see how a frequency imbalance could affect the placement of a vocal. It would have to be an imbalance that spanned several octaves and was off by more than 3dB. That isn't soundstage. That's a gross imbalance in the most important part of the response curve. I'm all in favor of balanced frequency response using EQ. I just don't see why you need speakers to be able to accomplish that. Even if you get a nice response curve, it isn't going to sound anything like speakers. Most headphones are closer to balanced out of the box than most speakers anyway.
 
Jun 23, 2017 at 2:59 PM Post #14 of 55
I'm talking about out of the head sound. I've never heard it where it wasn't just a subjective impression based on secondary depth cues.
Sound perception is subjective by definition. And why "secondary"? What about "primary" depth cues? :relaxed: Or do you mean simulated as opposed to real?

I don't see how a frequency imbalance could affect the placement of a vocal.
So to you it's not intuitively obvious that someone singing at you from farther away sounds quieter than someone closer to you? Or that the brain would draw conclusions from this kind of cue, given how long it's been trained to process things that way?

It would have to be an imbalance that spanned several octaves and was off by more than 3dB.
Sometimes the brain guesses things correctly even from imperfect information. Maybe you don't need every single bit of energy coming out of that singer's mouth to be affected by the frequency imbalance under discussion - just a subset of the loudest vocal frequencies produced by that particular singer in that particular recording. (Mind you, I never said this effect will work equally with all vocals, all singers, all songs and all notes sung; it could last for one verse, for example, and disappear for the rest of the song. I'm still on the subject of simplistic things like forward/recessed mids.)
 
Jun 23, 2017 at 3:22 PM Post #15 of 55
The only headphones I've read offer a truly frontal soundstage are the AKG K1000s, which appear to be nothing more than speakers worn on the head.

[image: AKG K1000]


Some would believe this to be a relatively modern advance, but the phenomenon is not a new one.

[image: guy carrying a boombox on his shoulder, '80s style]


I believe there are acoustic limitations to the enclosure (even for open-backs) and to the 90-degree driver angle that make soundstage impossible, or at least severely limited, in a headphone. Hence these "solutions". That said, there seems to be some correlation between distance to the eardrum and the sensation of being outside the head. An IEM sounds more in-my-head to me than an over-ear can, and I'm sure the AKG above would sound different too. However, between two similarly constructed over-ear phones, I can't say I've noticed much difference.
 
