Out Of Your Head - new virtual surround simulator

Discussion in 'Computer Audio' started by project86, Nov 7, 2013.
  1. johnn29
    When I get some time I'll give OOYH another install and get it working.

    I was obviously making a big mistake trying to use a Harman target curve with any HRTF. Flat is what's needed - you can use AutoEQ to compute a flat EQ for your headphones from various measurement sources - and I found the sound bang on after that, with no need to tweak the EQ further. What I was doing before was using a curve that had a built-in boost in the treble range, then EQing the treble down to make it sound right.

    Darrin - might be worth suggesting that or building an EQ to apply the flat target in your software?
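
    For anyone curious, this is roughly what "compute a flat EQ from a measurement" boils down to. It's a minimal sketch, not AutoEQ's actual code; the CSV name/format and the boost cap are assumptions for illustration:

[code]
# Minimal illustration (not AutoEQ's actual code): invert a measured
# headphone response toward a flat (0 dB) target.
# The CSV path and column layout are assumptions for this sketch.
import numpy as np

def flat_correction(csv_path, max_boost_db=6.0):
    # rows of "frequency_hz,raw_db", as exported by many measurement sources
    data = np.loadtxt(csv_path, delimiter=",", skiprows=1)
    freq, raw_db = data[:, 0], data[:, 1]

    # normalise around the midrange so the correction is relative, not absolute
    ref = np.mean(raw_db[(freq >= 300) & (freq <= 3000)])
    correction_db = -(raw_db - ref)  # invert toward a flat target

    # cap boosts so narrow dips don't demand huge amounts of gain
    return freq, np.minimum(correction_db, max_boost_db)

if __name__ == "__main__":
    f, c = flat_correction("headphone_measurement.csv")
    for fi, ci in zip(f[::20], c[::20]):
        print(f"{fi:8.0f} Hz  {ci:+5.1f} dB")
[/code]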

    phoenixdogfan - I agree completely - Impulcifer's author has virtual room correction forthcoming. In principle, a virtual room should blow away a real room.
     
  2. castleofargh Contributor
    Not long ago someone showed me a demo of Spat Revolution from Flux, which is exactly what you're talking about in terms of creating a virtual room. It's mostly a virtual-speakers-to-real-speakers kind of simulation, but there are a few binaural options where it seemed one could even import their own HRTF file for the virtual speakers rendered to headphone stereo, instead of the standard HRTF used by most surround tools and basic head-tracking solutions like Waves NX.
    As I said somewhere else, it's a production tool and the price isn't fun, but it goes to show that such solutions already exist. You either use the built-in models or bring your own, which brings us to the second part: measuring sound vs. building a model from a 3D scan.
    As far as I know, measurements are still the most effective/accurate way to go, probably in part because even the 3D models must have been built on acoustic measurements, putting them one step further along as an approximation of an approximation. At some point it will obviously be the other way around, since ideally we'd get rid of the issues with placement, noise, and mic calibration. It just doesn't seem like we're there yet.
     
  3. jaakkopasanen
    It's true that speakers imprint some of their problems onto the impulse response measurement, but some of them, like harmonic distortion, are negated quite well by the exponential sine sweep technique. I don't know what kind of effects crossover performance and directivity have on the impulse response, or how those could be compensated for. Room acoustics, on the other hand, has a direct effect on the impulse response, and many of us don't have great rooms because certain compromises have to be made when building a listening room. The nice thing about this is that you don't actually have to own the room or the speakers if you know a place where you can go and do the measurement once, just like OOYH does.

    Another fortunate consequence of speaker and room virtualization is that cross-talk and causality are not problems for room correction. For example, in a real room you cannot have separate equalization curves for the left and right ears, because both ears hear sound from all speakers. In a virtualized system this is possible, because the impulse responses to each ear are separate and can therefore be equalized separately. It should even be possible to remove standing waves from the impulse responses with a bandpass filter that tracks the sine sweep, although I haven't tried this yet, so I can't say whether there are practical reasons why it wouldn't work.
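
    To make the sine sweep point concrete, here is a minimal sketch of the exponential sine sweep and its inverse filter (the Farina method); the sample rate, duration and band edges are arbitrary choices, not anything OOYH specifically uses:

[code]
# Exponential (logarithmic) sine sweep plus its inverse filter.
import numpy as np
from scipy.signal import fftconvolve

def exp_sweep(f1=20.0, f2=20000.0, duration=10.0, fs=48000):
    t = np.arange(int(duration * fs)) / fs
    R = np.log(f2 / f1)
    sweep = np.sin(2 * np.pi * f1 * duration / R * (np.exp(t * R / duration) - 1))
    # Inverse filter: time-reversed sweep with a 6 dB/octave amplitude decay
    # toward the low-frequency end, so that sweep convolved with the inverse
    # approximates a band-limited impulse with a flat spectrum.
    inverse = sweep[::-1] * np.exp(-t * R / duration)
    return sweep, inverse

if __name__ == "__main__":
    sweep, inv = exp_sweep()
    ir = fftconvolve(sweep, inv)
    # In a real measurement you'd convolve the *recorded* sweep instead;
    # harmonic distortion products then land before the main peak and can
    # be windowed away, which is why ESS handles distortion well.
    print("impulse peak at sample", int(np.argmax(np.abs(ir))), "of", len(ir))
[/code]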

    Genelec has actually just announced a service for generating an HRTF from a video that goes 360° around your head. There is a thread about it here: https://www.head-fi.org/threads/genelec-aural-id.903304/. Unfortunately it's 500€ plus VAT. While it might work quite well, at least one problem remains: headphone compensation. Obviously an HRTF generated from a video won't compensate for the frequency response of your headphones. Even an EQ curve made from publicly available measurements won't solve the problem completely, because headphones vary between units and, what's worse, between the left and right drivers of the same headphone. This might seem like nitpicking, but since we are talking about whether a 3D-model-based HRTF could overcome the limitations of acoustic measurements, we can't ignore the unit variance.

    Hefio advertises that they are working on "a new generation individualized headphone calibration technology that delivers greater tonal & spatial accuracy in sound reproduction than any other commercial solution available today." It's hard to say what exactly this is, because they haven't announced anything yet. Earlier they built an IEM which does pretty much exactly what Nuraphone does, so they certainly have the knowledge to build a headphone calibration system that could be used with 3D-model-based HRTFs. Hefio was previously (and maybe still is) partnering with Genelec and IDA Audio, so this could actually be exactly what they are doing. Time will tell.

    Until we have an affordable product that ties all of this together, acoustic measurements serve as a very good approximation of the speaker-room system, with the potential for virtual room correction that surpasses any physical room.
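
    As a minimal sketch of the "separate equalization per ear" point above (the data layout and correction filters are placeholders, not how OOYH or Impulcifer actually implement it):

[code]
# Each speaker->ear path has its own measured impulse response, so the
# left-ear and right-ear corrections can differ, which a real room's
# cross-talk makes impossible. All impulse responses are assumed to be
# the same length in this sketch.
import numpy as np
from scipy.signal import fftconvolve

def virtualize(stereo, brirs, eq_left_ear, eq_right_ear):
    """
    stereo       : (N, 2) array of left/right speaker feeds.
    brirs        : dict {("left", "left_ear"): ir, ("left", "right_ear"): ir,
                         ("right", "left_ear"): ir, ("right", "right_ear"): ir}
    eq_left_ear,
    eq_right_ear : FIR correction filters, one per ear.
    """
    out = {}
    for ear, eq in (("left_ear", eq_left_ear), ("right_ear", eq_right_ear)):
        acc = None
        for ch, speaker in enumerate(("left", "right")):
            # correct this speaker->ear impulse response with the per-ear EQ
            ir = fftconvolve(brirs[(speaker, ear)], eq)
            y = fftconvolve(stereo[:, ch], ir)
            acc = y if acc is None else acc + y
        out[ear] = acc
    n = min(len(out["left_ear"]), len(out["right_ear"]))
    return np.stack([out["left_ear"][:n], out["right_ear"][:n]], axis=1)
[/code]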
     
    castleofargh likes this.
  4. arnaud Contributor
    I wish it were that simple. I can run HRTF simulations rather easily, but half the battle is the acoustic impedance of the surfaces (not just the geometry). Perhaps there are approximate values that can be used (maybe some published papers have validated ear-canal SPL predictions against test data?), but I am not aware of them (and I wonder whether such impedance data also varies much among individuals...).
     