
Recording Impulse Responses for Speaker Virtualization

Discussion in 'Sound Science' started by jaakkopasanen, Oct 9, 2018.
  1. gregorio
    Short answer: "No". Long answer: "Virtually always no". There is potentially a circumstance where it could make a difference: a particularly old or dodgy resampling process, which could then affect a non-linear downstream process (such as a modelled vintage compressor), or of course an extremely large pitch shift. I don't recall ever having experienced such a set of circumstances personally, but it is theoretically possible. Incidentally, I have recorded IRs at both 48kHz and 96kHz.

    It's not really that simple, because it's not so much about the tools but about how one uses them. For example, you could get a set of the very best carpentry tools that money can buy, but you're not going to start churning out Chippendales any time soon.

    While I don't know much about photography, in the case of music it hasn't really got much to do with recording "as accurately as possible". The aim is to end up with a recording that is as pleasing/subjectively "good" as possible, and that commonly means not recording as accurately as possible. There are some very expensive mics out there, but the most expensive ones are not very accurate and don't have particularly low self-noise; in fact they're usually far less accurate and noisier than mics which are 10 or more times cheaper. The reason they're more expensive is that they have certain colourations (and other properties) which are desirable because, under certain conditions, they produce a subjectively better result than other mics. The question then becomes: which mics should we use in which circumstances/situations, and how should we use and position them (relative to the sound source and each other)? As the situation always varies (different instruments, different pieces, different recording venues, different musicians, different instrument positions within the venue, or different artistic intentions, since different musicians have different ideas on what is subjectively better), the choice and/or use of mics always varies. So how do we know/learn which mics to use and how?

    Traditionally, one got a job as the tea-boy in a top-class studio, studied the literature, watched and learned what was going on, eventually became an assistant engineer overseen and instructed by the chief engineer, and then, several years later, became a chief engineer oneself. This way, decades of cumulative knowledge is passed along. In other words, even if one could "find victims to record" and had various equipment to experiment with, you'd need a few lifetimes to discover for yourself the cumulative knowledge that a typical chief recording engineer of a top-class studio would have. So, could you "do the same", record an ensemble as well as a top-class studio/recording team? It's possible but very unlikely. To start with, what's the "right equipment"?

    G
     
  2. jaakkopasanen
    Thanks, good to know! I remember reading somewhere that the deconvolution process would be somehow sensitive to the sampling frequency, something like deconvolution not being able to do a good job above one sixth of the sampling frequency. That would put the limit at 8 kHz with a 48 kHz sampling rate and at 16 kHz with 96 kHz. But I've never managed to find the reference again, so it could be that I dreamed the whole thing.
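    For context, sine-sweep deconvolution is typically done by spectral division: transform the recording and the sweep, divide, and inverse-transform. A minimal sketch of that idea, not any specific tool's implementation (the regularization here is a deliberate simplification):

```python
import numpy as np

def deconvolve(recording, sweep):
    """Recover an impulse response by spectral division: IR = IFFT(R / S).
    Generic frequency-domain approach; real tools differ in how they
    build the inverse filter and regularize out-of-band bins."""
    n_fft = 1 << (len(recording) - 1).bit_length()  # next power of two
    rec_f = np.fft.rfft(recording, n_fft)
    sweep_f = np.fft.rfft(sweep, n_fft)
    # Guard bins where the excitation has (near-)zero energy,
    # e.g. outside the swept band.
    eps = 1e-12 * np.max(np.abs(sweep_f))
    return np.fft.irfft(rec_f / (sweep_f + eps), n_fft)

# quick sanity check: recover a known delayed spike
rng = np.random.default_rng(0)
probe = rng.standard_normal(4096)   # broadband stand-in for the sweep
h = np.zeros(256)
h[100] = 1.0                        # "room": pure 100-sample delay
recording = np.convolve(probe, h)
ir = deconvolve(recording, probe)   # peak lands at sample 100
```

    As long as the excitation has energy across the band of interest, this division is essentially exact at any common sample rate; nothing in it gets worse above a fixed fraction of the sampling frequency.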
     
  3. gregorio
    Ah, in that case I don't know. That's not something I've heard myself, but then I'm not on that side of the business. I just use plugin reverbs (as a professional sound engineer), so I have limited knowledge of what goes on under the hood. It could be that there is some non-linear process occurring that benefits from 96kHz, and some of my reverbs do upsample internally (from 48kHz to 96kHz). All I can say is that I've had occasion to work with IRs that were recorded natively at both 96kHz and 48kHz, and the end result (output of the convolution reverb at 48kHz) was audibly identical, but I don't know what was going on under the hood inside the reverb plugin.

    G
     
  4. jaakkopasanen
  5. johnn29
    Those are gonna be great for the virtual room correction: before and after!
     
  6. Joe Bloggs Contributor
    All I can say is that I have the ear of a dev who's an EE graduate with a heavy focus on signal processing, and he told me your concern about sample rate applies to some filter design tasks but definitely not to deconvolution.
     
  7. jaakkopasanen
    Excellent. This saves me some trouble then. Thanks a lot!
     
  8. gregorio
    To be fair, I don't have any concerns about 48kHz vs 96kHz for deconvolution (or in fact most other digital audio processes), because I've used digital reverbs for almost 30 years and convolution reverbs for around 20 years, and have never detected any audible differences with them between these two sample rates. Furthermore, I can't think of anything within the convolution/deconvolution process which would produce any audible difference; I was just being thorough/honest by covering the possibility that there's something I haven't thought of (and have never experienced). Some commercial reverb plugins do upsample 48kHz input internally to 96kHz, but of course there are various reasons why this may be preferable that have nothing to do with fidelity or audible differences.

    G
     
  9. johnn29
    Jaakko - I just measured a new HRIR with the updated, improved method. It's remarkably easy now - so user friendly. The ability to measure my real 7-channel system without the spin-o-rama has also meant I get a much more accurate recording - even with my Bose 700s! The damn mics don't keep moving around because my ass is firmly still. That, combined with hooking the binaural mics over my ears, has made the whole process so much easier.

    The only suggestion I have from the process is that when an error is thrown on running the 7-channel recording command, it should prompt the user to generate the correct wav file, i.e. "No 7.1 sweep found, would you like to generate one? y/n".

    I've been A/B-ing my HRIR vs the speakers I just measured. I use MPC BE to output to both headphones and my speakers simultaneously and mute the speakers as needed. They sound remarkably similar. I suspect the majority of the difference is due to me using BT closed-back headphones. I'm going to measure up my open-backs shortly too.

    Seriously amazing job. I know you don't take donations, but you saved me £4k on a Smyth AND I can use it anywhere in the world: on the metro, on planes, etc. So if you reconsider, I'm in!
     
    Last edited: Oct 17, 2019
  10. jaakkopasanen
    Glad to hear and always grateful to receive feedback and improvement ideas.
     
  11. sander99
    I'd like to see your microphones very much but I only see your amp...
     
  12. arksergo
    One of them is in the first photo. I made it from replacement tips of my old Sennheiser in-ear headphones. The shield wire is not needed, so it can be cut off.
    I will take better pictures this weekend.
     
  13. sander99
    I don't know why, but I can only see the second photo in your post, and the text "[IMG]".
     
  14. arnaud Contributor
    I like the way you have these charts set up, as it makes for easy interpretation! We see the peak in amplitude before 3 s, check that there's a hot spot at 100Hz in the spectrogram, and verify that in the frequency response function...

    Is the processing the same for the spectrogram and the waterfall, or is the waterfall a gated FFT from the reconstructed impulse response?

    Also, out of curiosity, how do you compute the spectrogram? I've seen such graphs based on wavelet analysis, but I wonder if for a sine sweep you can just take single-block FFTs (of around 100ms each in your case, it seems) as the sweep goes through?

    Finally, again out of curiosity, do you think these are typical distortion figures (this is how I interpret the oblique traces at H2 and H3, and I expect it would be even more obvious on an 80dB scale or so)? I've never done such spectrogram measurements on headphones (at best I've done waterfall processing of an impulse response).

    cheers,
    arnaud
     
  15. jaakkopasanen
    The spectrogram is calculated from the recorded exponential sine sweep and the waterfall is calculated from the reconstructed impulse response. Both use overlapping Hanning windows for the FFT. The spectrogram sets the FFT window length so that there is 10 Hz resolution, and the waterfall uses 300 ms or one tenth of the impulse response length, whichever is shorter. The spectrogram uses 200 segments on the time axis and sets the window overlap accordingly. The waterfall uses 90% overlap. None of these numbers are based on anything more scientific than that the graphs look pretty good with the selected values.
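    The window choices described above can be sketched with `scipy.signal.spectrogram`; the exact windowing in the actual tool may differ, and the signals here are just random stand-ins for the recorded sweep and the impulse response:

```python
import numpy as np
from scipy import signal

fs = 48000

# Spectrogram of the raw sweep: window sized for 10 Hz bin resolution,
# hop chosen to land on ~200 time segments (numbers from the post above).
nperseg = int(fs / 10)                       # 10 Hz resolution -> 4800 samples
sweep = np.random.default_rng(0).standard_normal(fs * 5)  # stand-in, 5 s
n_segments = 200
hop = max(1, (len(sweep) - nperseg) // (n_segments - 1))
f, t, Sxx = signal.spectrogram(sweep, fs=fs, window='hann',
                               nperseg=nperseg, noverlap=nperseg - hop)

# Waterfall of the impulse response: window is 300 ms or one tenth of
# the IR length, whichever is shorter, with 90% overlap.
ir = np.random.default_rng(1).standard_normal(fs)         # stand-in, 1 s IR
win = min(int(0.3 * fs), len(ir) // 10)
f2, t2, Wxx = signal.spectrogram(ir, fs=fs, window='hann',
                                 nperseg=win, noverlap=int(0.9 * win))
```

    With fs = 48 kHz, the 10 Hz resolution target gives a 4800-sample (100 ms) window, which matches arnaud's "like 100ms" reading of the plot.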

    I don't know what typical distortion numbers are, but in the decay graph you can see that the 2nd harmonic component is at -55 dB, so I guess that's ok.
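    For what it's worth, the reason the harmonics show up as separate oblique traces is that, with an exponential sweep, the k-th harmonic distortion product folds into the deconvolved result at a fixed time advance ahead of the linear impulse response (the basis of the Farina sweep method). A small sketch of that relationship:

```python
import math

def harmonic_advance(k, sweep_length, f_start, f_end):
    """Time (s) by which the k-th harmonic's impulse response precedes
    the linear one after exponential-sweep deconvolution."""
    return sweep_length * math.log(k) / math.log(f_end / f_start)

# e.g. a 5 s sweep from 20 Hz to 20 kHz: the 2nd harmonic appears
# about 0.5 s before the linear response
dt2 = harmonic_advance(2, 5.0, 20.0, 20000.0)
```

    The sweep length and band here are illustrative, not the values used for the plots in this thread; the point is just that each harmonic lands at its own predictable offset, so they can be windowed out or read off separately.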

    ps. Room compensation has been implemented and it sounds really good. It's not documented yet and I've only got it working for a stereo setup for now, but that could be extended to the surround setup recordings. Will look into it and try to get 7.1 demo files ready with room correction.
     