1) Soundstage and sound location are functions of the mix and the acoustics between the transducers and the listener, not the recording medium. Usually when people say "better soundstage" they mean "better expectation bias". Articulation would be covered under distortion, and all digital formats have inaudible levels of distortion.
2) All audible sounds, both simple and complex, can be represented as sums of sine waves, and all of them are *perfectly* reconstructed with 16/44.1. All of them. Again, not being able to accurately reconstruct sine waves would fall under the category of distortion. See #1.
See how interesting it is to hang out in Sound Science! You learn something new every day!
Well, if you understood soundstage and imaging, you would know that it is the placement of instruments in space, as experienced by the listener. As a term, it belongs to the subjective side of the science.
As for what you are speaking of, which is presumably the physical reproduction of sound waves, there is no soundstage there. At best the reproduction is very limited: there is some phase shifting, but given the nature of humans, that shift cannot be static, nor can the amplitude difference. And that is before the acoustics you speak of.
And no, I did not learn anything from what you wrote, as I was told this in the 80s. I find the physics of it intriguing, and the lack of that physics being reflected in real-world gear even more interesting. But as they say in the military: if the landscape does not fit the map, there is something wrong with the landscape.
I am not sure what the right landscape is.
You're going to need to define 'vector sound'. Also, the sine-wavy aspect of all this is due to Fourier analysis, and I task you to get a voltage function out of a mic that doesn't have a Fourier decomposition...
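The Fourier point can be made concrete: any finite sampled mic signal, however complex, has a discrete Fourier decomposition, and the inverse transform recovers it to machine precision. A minimal sketch with NumPy, where the "mic voltage" signal itself is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100                      # sample rate in Hz
t = np.arange(4096) / fs        # ~93 ms of signal

# An arbitrary "mic voltage" signal: two tones plus noise.
x = (0.5 * np.sin(2 * np.pi * 440 * t)
     + 0.3 * np.sin(2 * np.pi * 1234.5 * t + 0.7)
     + 0.05 * rng.standard_normal(t.size))

# Decompose into sine/cosine components, then reconstruct.
X = np.fft.rfft(x)
x_rec = np.fft.irfft(X, n=x.size)

print(np.max(np.abs(x - x_rec)))  # reconstruction error near machine epsilon
```

Nothing about the signal needs to be "sine-like" up front; the decomposition exists for any finite sampled waveform.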
The very basic physics of hearing is the theory of phase shift and amplitude shift between the ears, which is what lets us position the origin of sounds. To reproduce this for humans, the individual's distance between the ears is the minimum to consider. So if a sound source is 10 m away, 10 deg up, and 46 deg to the left, that results in a specific phase shift and amplitude shift that is unique to the individual (well, not exactly unique given these parameters, but not even close to equal for all humans). In vector sound reproduction, the phase and amplitude shifts are calculated to simulate the physics of hearing for the individual. With a gyroscope, and done on the fly, the experienced source becomes a fixed position in space. The calculation is done with distance vectors.
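The distance-vector calculation described here can be sketched in a few lines. This is a minimal illustration, not any shipping API: it uses a bare two-point head model (each ear offset half the ear spacing), and the 0.18 m default spacing and 1/r amplitude model are assumptions for the example.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, air at roughly 20 degrees C

def interaural_shifts(distance_m, azimuth_deg, elevation_deg, ear_spacing_m=0.18):
    """Time (phase) and amplitude shift between the ears for one source.

    Bare two-point head model: each ear sits half the ear spacing to
    the side. Positive azimuth means the source is to the left.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    # Source position in Cartesian coordinates: x = left, y = front, z = up.
    src = (distance_m * math.cos(el) * math.sin(az),
           distance_m * math.cos(el) * math.cos(az),
           distance_m * math.sin(el))
    left_ear = (+ear_spacing_m / 2, 0.0, 0.0)
    right_ear = (-ear_spacing_m / 2, 0.0, 0.0)

    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

    d_left, d_right = dist(src, left_ear), dist(src, right_ear)
    itd_s = (d_right - d_left) / SPEED_OF_SOUND  # positive = left ear leads
    ild_db = 20 * math.log10(d_right / d_left)   # level shift from 1/r spreading
    return itd_s, ild_db

# The example from the text: 10 m away, 10 deg up, 46 deg to the left.
itd, ild = interaural_shifts(10.0, azimuth_deg=46.0, elevation_deg=10.0)
print(f"ITD = {itd * 1e6:.1f} us, ILD = {ild:.3f} dB")
```

Changing `ear_spacing_m` changes both outputs, which is the point: the same source position produces different shifts for different listeners.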
Also, movement plays a role: a source moving towards you at a certain speed produces a shift in wavelength, and thus in phase, due to the movement. Just like a car coming at you, or moving away from you. Again, this can be done by vector calculation.
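The wavelength change from motion is the Doppler effect, and it reduces to the same vector arithmetic: the shift depends only on the radial component of the source velocity along the source-to-listener direction. A small sketch, with illustrative numbers, for a stationary listener:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def doppler_frequency(f_source_hz, source_pos, source_vel, listener_pos):
    """Observed frequency for a moving source and a stationary listener.

    Only the radial speed toward the listener matters, obtained via a
    dot product of the velocity with the source-to-listener vector.
    """
    to_listener = [l - s for l, s in zip(listener_pos, source_pos)]
    dist = math.sqrt(sum(c * c for c in to_listener))
    v_radial = sum(v * c for v, c in zip(source_vel, to_listener)) / dist
    return f_source_hz * SPEED_OF_SOUND / (SPEED_OF_SOUND - v_radial)

# A 440 Hz source passing at 30 m/s (~108 km/h), like the car example.
approaching = doppler_frequency(440.0, (0, -50, 0), (0, 30, 0), (0, 0, 0))
receding = doppler_frequency(440.0, (0, 50, 0), (0, 30, 0), (0, 0, 0))
print(f"approaching: {approaching:.1f} Hz, receding: {receding:.1f} Hz")
```

The approaching source reads above 440 Hz and the receding one below it, matching the familiar pitch drop as a car passes.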
There is a whole host of things that can be added to the reproduction. Sometime in the not-so-distant future, someone will introduce vector sound. Hopefully, since I speak of it in public, they cannot patent it. They cannot patent it for headsets, nor the automatic calculation of the distance between the cans using ultrasound or any other sort of waves, because that is given to the public domain here. The use of a gyroscope, or any device that registers head movement, to assist vector reproduction is in the public domain now as well. It will happen, particularly since the only real change needed in the industry is a shift to mono recordings of individual sounds, while the rest of the infrastructure only needs minor adjustments.
AMD has an API for vector sound, but it only includes amplitude shifts. It takes no reading of the listener's dimensions at all, such as the distance between the ears.
This sound tech, used in combination with see-through VR and good positioning, makes my head spin with ideas. Not just for music; particularly augmented reality. That there is no rush in the industry to be first at this tech shows a complete lack of visionaries.
Given the insane variance using 16/44.1 for classic stereo, that variance indicates, to me, that we probably need more for vector sound. If we don't, that is great news, as vector sound will arrive sooner.
This also explains why dummy-head recordings, in general, do not work. They come close to working if you have exactly the right head dimensions, something the fans forget. And as with most fans of this type of recording, particularly those whose heads happen to fit a certain recording, they do not listen to the facts presented.
As for your dragging Fourier into this, I miss your point. I have lived long enough to have seen an ellipse represented by multiple circles. Sure, if that is mathematically possible, then it is. Trouble is, when the movement of the planets was really understood, or so we thought, it had nothing to do with circles; the answer was of a completely different nature. As is the complexity of sounds. Then again, this is not really understood at all, and maybe the placement of sounds is derived by some Fourier-like process in humans. We just do not know.
What we do know is that music or sound reproduction is far more complex than a single sine wave. As is having a dog, a cat, or a hamster, but mixing the three?