This thread may be of interest to you:
https://www.audiosciencereview.com/forum/index.php?threads/stereo-to-binaural-via-hrtf.19968/
This is exactly what I was using maybe 5 years ago. It wasn't exactly my own HRTF, but it was closer than anything else ever came, and the measurements were made at about 1 m away, if I remember correctly.

The main trick is to focus on the impression of the demo buzz at around 30° on each side (does the distance feel right? does it feel elevated?), because for stereo playback the idea is to pick the impulses for those angles to reproduce the usual speaker placement. How good it feels behind you is irrelevant for that.

In my case, the HRTF closest to mine had a serious imbalance between left and right at 30° and 330°, so I ended up keeping only one of them and swapping the channels of that file to get the same response for the other ear. It wasn't amazingly accurate, but better than using the two .wav files from that guy.

It worked pretty well, and I preferred it to any generic HRTF processing (using some dummy head as reference), probably because my own body isn't all that standard to begin with. But for many people a generic solution will work nicely; customization becomes more relevant the further we depart from an averaged human head (in size or shape). I also liked using those impulses better than any more basic crossfeed solution.
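The ±30° rendering and the channel-swap trick can be sketched as a couple of convolutions. This is a minimal sketch, not anyone's actual implementation: the function name and peak normalization are my own, and the HRIRs are assumed to be already loaded as mono arrays (one measured at the ear on the same side as the source, one at the opposite ear).

```python
import numpy as np
from scipy.signal import fftconvolve

def stereo_to_binaural(stereo, hrir_ipsi, hrir_contra):
    """Render a stereo signal through a single 30-degree HRIR pair.

    stereo      : (N, 2) array, columns = left/right channel
    hrir_ipsi   : impulse response, source at 30 deg to the same-side ear
    hrir_contra : impulse response, same source to the opposite ear

    The right virtual speaker reuses the mirrored pair, i.e. the
    channel-swap trick described in the post above.
    """
    L, R = stereo[:, 0], stereo[:, 1]
    # Left virtual speaker (30 deg left): ipsi -> left ear, contra -> right ear
    left_ear = fftconvolve(L, hrir_ipsi) + fftconvolve(R, hrir_contra)
    # Right virtual speaker (30 deg right): mirrored pair
    right_ear = fftconvolve(L, hrir_contra) + fftconvolve(R, hrir_ipsi)
    out = np.stack([left_ear, right_ear], axis=1)
    # Simple peak normalization (assumes non-silent input)
    return out / np.max(np.abs(out))
```

The mirroring only works if your left and right ears are close enough to symmetric, which is exactly the assumption being made when one side's measurement is discarded.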
Now I have an A16, which is a step up in every respect, including the price going from 0 to .
It's strictly a speaker/room simulator. You measure the sound at your ears from specific speakers at specific locations in a given room. That's what it will try to reproduce over your headphones (which you would also have measured with the same mics in your ears).

Does the A16 sound like speakers playing music, or just like musicians playing in real life?
You are right: most headphones have pretty poor left-to-right matching, in both level and response. Getting better than ±1.5 dB is surprisingly rare.

As our subjective impression comes from a pudding of audio cues (and non-audio ones), it's hard to come up with one sure answer. I'd always point at the frequency response as the main suspect, simply because we encounter pretty wild variations from one earbud to the next (or one headphone to the next). As most of our localization cues rely on frequency response, it's fair to assume some amount of impact in that respect when the signature changes significantly.
But other things could be at play sometimes. Excessive distortion, maybe? Or if your DAP already has a poor crosstalk spec unloaded and the earbud has very low impedance, the effective amount of crosstalk might end up being really high and change the perceived presentation, or how distinct the instruments feel to you.
I mention this only as an example; don't become paranoid about crosstalk, it's usually not even worth looking that spec up. I'm just brainstorming and presenting possibilities I can think of.
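For a rough sense of why a low-impedance earbud can make this worse: in a deliberately simplified model (my assumption, not a measurement of any real device) where the only coupling path is a shared ground resistance, the crosstalk ratio is just a voltage divider between that resistance and the load.

```python
import math

def crosstalk_db(r_ground, z_load):
    """Crosstalk in dB from a shared ground resistance r_ground (ohms)
    into a load of impedance z_load (ohms), in a simplified model where
    the common ground return is the only coupling path."""
    return 20 * math.log10(r_ground / (r_ground + z_load))

# Hypothetical numbers: a 1-ohm shared ground return
for z in (16, 32, 300):
    print(f"{z:>4} ohm load: {crosstalk_db(1.0, z):6.1f} dB")
```

With these made-up figures, a 16-ohm earbud sits around -25 dB of crosstalk while a 300-ohm headphone is near -50 dB: same source, very different effective separation. Real devices have more coupling paths than this, so treat it as an order-of-magnitude intuition only.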