Frontal sound AND correct frequency response with EQ only.

Discussion in 'Sound Science' started by abm0, Jun 23, 2017.
  1. abm0
    New observation and change to the method:
    The original instructions say to leave the balance control alone when tuning the EQL compensation for the speakers, but to use it to equalize L and R when tuning for the headphones. Today I realized that this artificially "heals" my asymmetric hearing loss and presents my brain with a fully balanced signal of a kind it hasn't heard in 5 years, and which it's no longer adapted to judge as "natural" (i.e. to retrieve accurate positional information from). :D So I removed all balance information from my headphone tuning, re-generated the impulse response files and voila: a definite improvement in how natural it sounds. With this and some more frequency-band fine-tuning I'd done today, I'm finally understanding why Griesinger used the words "you're there" in describing the experience of listening to his symphonic examples. I've mentioned his Cologne recordings before - the ones in the separate pptx from the Dropbox - but the most impressive for me is actually the opera example on slide 45 of his Berlin presentation, preferably the "all minus first reflection" corrected version at the end: it's... just... glorious. She and the orchestra are so obviously in front of me that it gave me tingles across the scalp like no ASMR video ever could. :p (Not to say it's perfect, but it's soo close now compared to anything I've heard on headphones before...)
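    (In case anyone wants to replicate that balance-stripping step: one way to do it is to average the per-ear gains before generating the impulse responses - a minimal sketch, with made-up numbers and hypothetical variable names:)

    Code:
    import numpy as np

    # Hypothetical per-band gains (dB) from the headphone equal-loudness
    # tuning, tuned separately per ear (values made up for illustration).
    left_db  = np.array([0.0, 1.5, -2.0, 3.0, 0.5])
    right_db = np.array([0.0, 3.5, -1.0, 4.0, 2.5])

    # Averaging removes the interaural differences, so the EQ no longer
    # "heals" an asymmetric hearing loss - each ear keeps the imbalance
    # the brain is adapted to.
    mean_db = (left_db + right_db) / 2
    left_db = right_db = mean_db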
     
  2. buonassi
    I use a controllable sine wave sweeper in my AU chain (Mac, sorry). I also have banks that I can program to toggle frequencies back and forth against the reference loudness frequency (i.e. 500 Hz). The EQing takes time because I have to play until the Q is just right, but the results are phenomenal, though I haven't used Griesinger's method just yet.
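    For anyone without an AU chain, the toggling idea is easy to sketch - a rough Python approximation of it (using numpy and the sounddevice library; not the actual AU setup described above):

    Code:
    import numpy as np
    import sounddevice as sd  # pip install sounddevice

    FS = 48000
    REF_HZ = 500.0  # reference loudness frequency

    def tone(freq, secs=1.0, gain_db=0.0):
        t = np.arange(int(FS * secs)) / FS
        x = 0.1 * 10 ** (gain_db / 20) * np.sin(2 * np.pi * freq * t)
        ramp = np.linspace(0.0, 1.0, int(FS * 0.01))  # 10 ms fades, no clicks
        x[:ramp.size] *= ramp
        x[-ramp.size:] *= ramp[::-1]
        return x

    def toggle(test_hz, test_gain_db, repeats=3):
        """Alternate the 500 Hz reference and the test tone; nudge
        test_gain_db and repeat until the two sound equally loud."""
        pair = np.concatenate([tone(REF_HZ), tone(test_hz, gain_db=test_gain_db)])
        sd.play(np.tile(pair, repeats), FS)
        sd.wait()

    toggle(8000, -4.0)  # listen, adjust the gain, repeat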

    But to comment on your question: my simple sine-sweep method doesn't equalize each ear individually, so I'm probably not getting the benefits that lead to a greater frontal image. Plus, I'm not getting any HRTF compensation baked into my final curve (which he takes from the near-field data). This man Griesinger seems to have published a nice method for hitting all the things we audiophiles worry about - and possibly one that can convert the snobbiest of speaker-lovers into headphone nuts like us:

    • HRTF and "naturalness", Harman Target etc. (solved)
    • timbre and grainy treble (solved)
    • no frontal image (solved)

    ...at least in theory. I'll try this method out and see if I like it, or if, like @castleofargh , I have issues accepting equal loudness as the holy grail. I too have become accustomed to a signature that I'd probably have a hard time dropping.
     
  3. buonassi
    One concern I do have with this method is the ability to create a compensatory curve with the steep, high-Q corrections I need given my ear resonances. Using the 1/3-octave approach may not work for the null I experience at 8200 Hz on most cans I own. The Q is narrow, and nearby frequencies can become too loud if it's broadened. Here's a pic to show you what I mean.
    [Attached image: hd600 high freq.jpeg]
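    A narrow correction like that is what a single parametric (peaking) band is for. A minimal sketch using the standard RBJ audio-EQ-cookbook formulas (scipy assumed; the gain and Q values are just placeholders):

    Code:
    import numpy as np
    from scipy.signal import lfilter

    def peaking_biquad(f0, gain_db, q, fs=48000):
        """RBJ audio-EQ-cookbook peaking filter; a high Q keeps it narrow."""
        A = 10 ** (gain_db / 40)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
        a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
        return b / a[0], a / a[0]

    # e.g. fill a narrow null around 8200 Hz without lifting its neighbours:
    b, a = peaking_biquad(8200, +6.0, q=8)
    # y = lfilter(b, a, x)  # apply to signal x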
     
  4. abm0
    Again, the ideal is not to remove the ear resonances - your brain expects to hear those in order to judge the sound as "natural". So you tune them out when doing the equal loudness curve for the speaker(s), tune them out again when you do the one for the headphones, and when the final compensation is created by subtraction of the former from the latter, in principle this part of the tuning cancels out and the resonance phenomena are left to manifest naturally (since they're always in the path of the soundwave, or at least the ear canal resonance is). You actually don't want it to be perceptually flat - that's not natural. The ear resonances should stay as they are.
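    To make the cancellation concrete, a toy calculation (numbers made up; the sign convention of the subtraction doesn't change the point):

    Code:
    import numpy as np

    # Per-band values in dB. 'ear' stands for everything the same ear
    # contributes to both tunings: canal resonances plus your personal
    # equal-loudness judgement.
    ear       = np.array([0.0, 4.0, -3.0, 6.0])
    speaker   = np.array([0.0, 1.0,  0.5, 0.0]) + ear  # speaker EQL tuning
    headphone = np.array([0.0, 5.0, -2.0, 3.0]) + ear  # headphone EQL tuning

    comp = headphone - speaker  # the shared 'ear' term drops out
    print(comp)                 # [ 0.   4.  -2.5  3. ] - no resonance info left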
     
    Last edited: Jun 26, 2017
  5. castleofargh Contributor
    also, the equal loudness contour, if we are good enough (I'm not, despite spending so many hours doing similar things with test tones over the years), cancels out and only serves as a reference both times.
    but I have a hard time setting the low end the same way on speakers and headphones. maybe I should leave the speaker ON and try with very isolating IEMs, so the sub frequencies still hit my body? because right now it's a mess. I either end up with way too much low-end boost on the headphone, or, if I try to set the speaker calibration based only on what I think I hear while pretending I can ignore the physical shaking of my body, soon enough the windows start shaking and I'm still not there yet ^_^. I think I will just leave both ends of the frequency range alone next time I try.
     
  6. abm0
    Yeah, I quickly realized I wouldn't be able to tune either one to actual equal loudness all the way down to 31 Hz, simply because neither has the requisite technical capabilities. (My near-fields are -3 dB at 49 Hz and my small Denon Envaya Mini is about -5 dB at 70-ish Hz. Headphones aren't perfect either, but they do have more extension, especially the HE-400i.)

    I think the correct thing to do is to tune each individual device only down to the lowest frequency it can reproduce comfortably (+/- 3 dB) and leave all bands set at 0 dB below that. Keep in mind that when tuning your speaker EQL compensation you're actually capturing the information about your HRTF, so if your speaker is weak at 40 Hz and you boost that frequency to make it equally loud, you're actually telling "the genie in the bottle" that your personal HRTF is bad at hearing 40 Hz and that this deficiency should be replicated in the final compensation curve to make things sound natural to you. :D That's putting false information into the mix. After calibration your speaker is assumed to be perfectly flat all the way, so you shouldn't introduce any information contradicting this unless it comes from your own HRTF - any other known speaker deficiencies are to be ignored (where possible, i.e. if you have the necessary speaker specs on hand).
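    In code terms the rule is simply: find the device's lowest comfortable band and zero everything below it before the subtraction (band list and cutoff made up for illustration):

    Code:
    import numpy as np

    bands_hz = np.array([31, 40, 50, 63, 80, 100])       # bottom of the band set
    gains_db = np.array([9.0, 6.0, 4.0, 2.0, 1.0, 0.5])  # raw speaker tuning

    # Near-fields that are -3 dB at ~49 Hz: don't "equal-loudness boost" the
    # bands below that, or the speaker's roll-off gets encoded as if it were
    # part of your HRTF.
    lowest_usable_hz = 50
    gains_db[bands_hz < lowest_usable_hz] = 0.0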

    LE:
    Oh, and another thing: it's easy to forget that the default ("0 dB") level of each noise band used for tuning has been set based on a 70-phon (or so) population-average equal-loudness curve. So in principle one should always start a tuning session by adjusting the volume of the reference 500 Hz band to about 70 dB SPL, otherwise the results may come out distorted, especially toward the bass end, where the equal-loudness curves for different levels diverge the most. (I myself completely forgot about this after the first tuning session. :laughing: )
     
    Last edited: Jun 26, 2017
  7. abm0
    Methinks you shouldn't ignore the physical shaking. :relaxed:

    But to address this more directly than I did above: with the early tunings I had when I started this topic, I quickly realized I was judging the comparative loudness of frequency bands pretty poorly, and that was keeping me off the mark, especially for the more extreme bands far from the reference at 500 Hz. So what I did next was resolve to make the same kind of estimation errors in both EQL compensation curves, so that they would cancel out on subtraction: I re-tuned both speaker and headphone, and for every band where I was having trouble judging comparative loudness I would first turn it up until it sounded clearly louder than the reference, and then gradually adjust downward until it seemed "the same". Any errors introduced this way would be overestimations in both curves, so overestimation minus overestimation = pretty-much-spot-on, and my results improved immediately. :relaxed:
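    In symbols: if S and H are the true speaker and headphone band values and e is the shared overestimation in a hard band, then (S + e) - (H + e) = S - H, so the bias drops out of the final compensation.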
     
    Last edited: Jun 27, 2017
  8. bigshot
    I always find it's best, when you EQ, to start at a moderate volume and work your way up in volume in sweeps. That way your ears don't get fried by gross imbalances - you correct them at a lower level, before they become an ear-piercing spike.
     
  9. frodeni
    I simply do not get the physics of this. The video only speaks about loudness, as if it were a linear thing with headsets. It is not. This only adjusts for a single sound, and for the experienced loudness of that sound. Any EQ you apply will also alter, say, sound sitting 10 dB below the level you adjusted at, and it is absolutely not a given that the response is linear there. The usual understanding is that there is such a thing as dynamic range and varying levels of rendered detail. It is simply not linear.

    Also, there is an understanding that there is a perceived loudness difference across that dynamic range, which is typically what is being invoked when arguing about the dB(A) scale, to name one.

    In short, using the ear as a measuring device for loudness is bloody tricky. It is insanely hard to do, as the ear is a relic from the caveman, honed for the needs of a man living in the bushes.

    Which leads me to the second point: position is also determined by phase shift. The sound hits each ear at a slightly different time. Sound travels at about 320-340 m/s, so at its greatest the time difference, or phase shift, between the ears is a fraction of a millisecond. This is often lost on people. If the distance between the ears were 1 meter, the time needed to travel it would be 1/320 s; 10 cm would take 1/3200 s, and 20 cm about 1/1600 s. For a head with 20 cm between the ears, the hearing system must therefore be sensitive well beyond 1/1600 s for phase shifts to work at all. I have not seen the recent studies on this topic, but I read about it in several books back in the 1980s. I simply do not know where this leaves us in the never-ending sampling debate. (Anyone?)
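    Putting rough numbers on that (speed of sound and head width as ballpark figures):

    Code:
    C = 343.0    # speed of sound in air, m/s (roughly the 320-340 above)
    HEAD = 0.20  # ear-to-ear path, m (ballpark, as above)

    max_itd = HEAD / C      # largest interaural time difference
    print(max_itd * 1e3)    # -> ~0.58 ms
    print(max_itd * 44100)  # -> ~26 samples at 44.1 kHz
    # Reported ITD thresholds go down to roughly 10 microseconds, which
    # sample-accurate (and sub-sample, via interpolation) delays can represent.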

    The combination of loudness and phase shift is used to calculate the origin of the sound. If you have not noticed yet: just adjusting loudness does not give you any spatial reproduction on headphones. The reason is the lack of phase shift.

    Does this guy deal with that at all?

    Real adjustments will only arrive once vector sound arrives. If a sound originates at a certain distance - say 20 degrees above the horizon, 30 degrees to the right, at 20 meters - that results in a specific phase shift and amplitude difference. Reproduce that, and you will be able to tell exactly where the sound is coming from; adjustments can be made simply by comparing the spatial experience with the clapping of hands at that location. Because that is what the ear is made for: hunting for food and avoiding danger.

    Also, the ear is known for hearing whether something is approaching or leaving. The sound of a car is different coming at you than moving away from you - just like an animal coming at you, or a human foe. That is actually a frequency shift (the Doppler effect): because the speed of sound is relatively slow, you get a slight frequency shift even at fairly low speeds.

    In case people have not noticed: we do not all share the exact same head physics. That is why I speak about vector sound reproduction. The phase shifts and amplitude differences are individual, and they actually change as you grow. Most heads are not perfectly symmetric either.

    Some recordings actually contain some phase shift. If you use two mics to record stereo, there will be a phase difference between them. That difference simply cannot be compensated by any EQ setting, and given the differences between humans, the phase and amplitude differences will be experienced differently, as the reproduction will deviate differently from what is normal for each individual.

    Also, with in-ears there are always issues with how accurately they fit my ears. Sometimes I push them farther in; sometimes they slip out over time. My point is, phase reproduction is not stable over time, at least for all my in-ears. Soundstage is simply inaccurate for any in-ear I have ever tested or owned - and that includes the best sets from top brands like Sennheiser and Shure as of two years back.

    People do not seem to realize the potential of vector sound for headphones. It is simply mind-blowing. All you have to do is record each instrument or voice in mono, in as acoustically dead a studio as possible. Then you calculate the phase shift, frequency shift, and amplitude shift for the individual listener, and you get both position in space and movement. Environmental sounds (room acoustics) may also be added for each sound source. If you add a gyroscope to the headset and render on the fly, the source's position in space will hold even as you turn your head. That is 4D rendering: position on three axes plus direction of movement. And if you include rotation of the source, you add still more dimensions, as game developers probably would want.
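    In its crudest form, the "then you calculate" step could be sketched as a per-source delay-and-attenuate - a toy model only, since real HRTF rendering is frequency-dependent and individual, which is the point being made here:

    Code:
    import numpy as np

    def pan_mono(x, azimuth_deg, fs=48000, head=0.18, c=343.0):
        """Crude 'vector sound' sketch: delay and attenuate the far channel
        to fake an interaural time/level difference for a mono source."""
        itd = head / c * np.sin(np.radians(azimuth_deg))  # seconds, >0 = right
        lag = int(round(abs(itd) * fs))                   # whole-sample delay
        ild = 10 ** (-6 * abs(np.sin(np.radians(azimuth_deg))) / 20)  # ~6 dB max
        near = x
        far = ild * np.concatenate([np.zeros(lag), x[:len(x) - lag]])
        left, right = (far, near) if azimuth_deg > 0 else (near, far)
        return np.stack([left, right], axis=1)  # (N, 2) stereo output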

    Humans also determine the position of a sound source by the sound that hits them first, not by the loudest part (the precedence effect). Remember, we used to live in the wild. The sound reaching you first is the one giving you the real location of the source. It is simply not just about the loudest sounds once phase is factored in.

    People need to move beyond amplitude. There is more to it than that, and I do not get why the industry is still stuck in tech from the 1970s. Innovation has stopped. If vector reproduction were a thing, then both amplitude and phase could be determined simply by comparing the reproduction to what it should reproduce. That could also be corrected for both loud and quiet sounds. Remember, two sources may be calculated at the same time, offering advanced and complex testing that stays pretty simple at its core.

    As for what is promoted in this video, it reminds me of manual calibration of computer screens: it simply does not work. A measuring device beats any manual calibration. There are simply too many things to consider, and a color profile is hard to make up by hand. Using vector sound, a listener profile would be fairly straightforward to make - and it would probably be a per-listener, per-headphone kind of profile.

    The ear was not made just to tell you which bird sang the loudest. It was made to help you hear where the bird is, how far away it is, at what angle and altitude, and how it moves - simply because that is vital during any hunt. If you can hear the position of the bird, that will aid you toward dinner.
     
  10. abm0
    Of course he does. Phase shift is not audible unless it comes with specific changes in frequency response, and he's doing exactly what it takes to reproduce the phase effects in audible form: shaping the frequency response with a normal (minimum-phase) EQ.
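    For the curious: "minimum-phase" means the phase response is fully determined by the magnitude response, so an EQ curve alone is enough to generate an impulse response. A bare-bones numpy sketch of that step (the real-cepstrum method; not necessarily what the tool mentioned above does internally):

    Code:
    import numpy as np

    def minphase_ir(mag_db, n_fft=4096):
        """Build a minimum-phase impulse response from a target magnitude
        response (n_fft//2 + 1 points, DC..Nyquist) via the real cepstrum."""
        mag = 10 ** (np.asarray(mag_db, dtype=float) / 20)
        full = np.concatenate([mag, mag[-2:0:-1]])  # hermitian-symmetric spectrum
        cep = np.fft.ifft(np.log(np.maximum(full, 1e-8))).real
        fold = np.zeros(n_fft)
        fold[0] = fold[n_fft // 2] = 1.0
        fold[1:n_fft // 2] = 2.0                    # fold in the anticausal part
        return np.fft.ifft(np.exp(np.fft.fft(fold * cep))).real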

    Not sure what this magic step of "then you calculate" is supposed to contain, but I get the feeling you're reinventing the Smyth Realiser. :relaxed: At any rate, it seems far more complicated than the EQ method.
     
    Last edited: Jun 29, 2017
  11. frodeni

    Why post in this kind of berating language, arguing that phase shift is impossible to hear, and then in the next paragraph arguing that phase shift is a critical part of our hearing ability?
     
  12. castleofargh Contributor
    the exercise proposed here is with a speaker right in front of us at a close distance. we're most sensitive to phase shifts between the two ears, something that shouldn't matter in this specific case. we're not nearly as sensitive to a phase shift that is the same in both ears - otherwise moving our head even slightly in real life would mess up the sound, and everybody would treat speaker crossovers like they're crap.

    also, the method offered here doesn't pretend to be a Smyth Realiser, it's still just an EQ and a random headphone. of course it's not going to do everything right; the idea is to try it and see if it feels more natural to the listener than the non-EQed headphone.
     
  13. buonassi
    Your words are resonating with me - no pun intended. I believed the resonances I was referring to weren't natural, in the sense that they were due to (what I thought was) the headphone cavity in the cups being so small. But you may be onto something... I tend to have the same aversions and affinities to similar frequencies regardless of in-ears or over-ears, but I have no issues when sweeping frequencies in my studio using monitors. So it has to be an HRTF thing! I tried the equal loudness technique, albeit without free-field speaker compensation, and I was rather happy with the results when I listened to music as a whole - not focusing on individual peaks/nulls in very, very narrow frequency bands.

    So, the technique is solid and I thank you for the contribution! I wish someone would develop an Android system-wide EQ that does this - without needing root privileges for Viper. I can't root this particular device I use.
     
  15. abm0
    The next best thing when that's not available, I've found, is to close my eyes and remove the distraction of the actual objects I have in front of me. Depending on the recording (and I'm talking all flat-stereo here) the frontality could just happen on its own or it might take some extra help in the form of making a little effort to imagine the artists in front of you. But just closing the eyes is an easy trick that can help the effect along. With this and my latest tunings I'm just loving my headphones to bits, down to my cheapest ones - the KSC75 and the HD669. :)
     
    Last edited: Jul 5, 2017