Speakers can be virtualized (simulated) over headphones very convincingly using impulse responses and convolution software. This, however, requires the impulse responses to be measured for the individual listener and headphones, which is what I'm trying to achieve.

I made impulse response recordings by playing a sine sweep on the left and right speaker separately and capturing it with two ear-canal-blocking microphones. I turned these sweep recordings into impulse responses with the Voxengo Deconvolver software. I also measured my headphones the same way and compensated their frequency response with an EQ, by inverting the frequency response as heard by the same microphones.

The impulse responses are quite good, and certainly better than any out-of-the-box impulse response I have ever heard. However, they suffer slightly from coarseness: the sound signature is a bit bright and the localization is a tad fuzzy. As a benchmark: when I listen on headphones to a music recording that was captured with the mics in my ears while the music played on the speakers, the result is practically indistinguishable from actually listening to the speakers. My impulse responses and convolution come close, but still leave me wanting better.

I think the main problem might be noise introduced by my motherboard's mic input. I have thought about using a digital voice recorder such as the Zoom H1n for the job. This model can do overdub recordings with zero delay between playback and recording, which would even make it possible to record each speaker separately. I'm also assuming the mic input on this thing is quite a bit better than my PC's motherboard input.

- Does it seem like a sensible idea to use a voice recorder, and are there better options?
- Can you think of sources of error other than the noise from the mic input?
- Should I do some digital noise filtering on the sine sweep recording before running the deconvolution?
- Any other ideas for improving the impulse responses?
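For context, here is a minimal Python sketch of the sweep-to-impulse-response step, using the exponential-sweep inverse-filter approach. This is an assumption about the method, not what Voxengo Deconvolver necessarily does internally; the sweep parameters (20 Hz to 20 kHz, 2 s, 48 kHz) are illustrative. It simulates a "room" as a simple two-tap echo so the recovered IR can be checked:

```python
import numpy as np
from scipy.signal import fftconvolve

def exp_sweep(f1, f2, dur, fs):
    """Exponential (logarithmic) sine sweep from f1 to f2 Hz over dur seconds."""
    t = np.arange(int(dur * fs)) / fs
    r = np.log(f2 / f1)
    return np.sin(2 * np.pi * f1 * dur / r * (np.exp(t * r / dur) - 1))

def inverse_filter(sweep, f1, f2, fs):
    """Time-reversed sweep with a decaying envelope that compensates the
    sweep's pink (−3 dB/oct) energy distribution, so sweep * inverse ≈ delta."""
    r = np.log(f2 / f1)
    dur = len(sweep) / fs
    t = np.arange(len(sweep)) / fs
    return sweep[::-1] * np.exp(-t * r / dur)

fs = 48000
sweep = exp_sweep(20, 20000, 2.0, fs)
inv = inverse_filter(sweep, 20, 20000, fs)

# Simulated measurement chain: a direct sound plus one echo at 200 samples.
h_true = np.zeros(512)
h_true[0] = 1.0
h_true[200] = 0.5
recorded = fftconvolve(sweep, h_true)

# Deconvolution: convolve the recording with the inverse filter.
ir = fftconvolve(recorded, inv)
peak = np.argmax(np.abs(ir))          # direct-sound peak
ir = ir / ir[peak]                    # normalize; echo shows up at peak + 200
```

One practical upside of the exponential sweep is that harmonic distortion products end up *before* the direct-sound peak in the deconvolved result, so they can be windowed out, which may help with the coarseness you describe.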