Adding some reverb to a track is generally done to give the feeling of a different, bigger room, but it might not do much to move sound sources away from us or place them more precisely in our imagination. We do make some use of reverb to locate things, like sensing that we're getting close to a wall from how sounds bounce off it, or feeling that something is really far away (along with the loss of all the high frequencies), but it's not the main way our brain locates stuff.
For imaging, i.e. placing instruments or any sound source at a given position in space around us, we mostly rely on differences in delay, loudness, and overall frequency response between the two ears. Those things can clearly be modified with digital processing, so yes, digital processing can alter how and where we perceive a sound source. The idea of "fixing" is trickier, because it implies that we know where something should be in the first place, which is rarely the case, and that the human brain relies exclusively on sound for those impressions (which it does not).
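To make the delay-and-loudness idea concrete, here's a minimal toy sketch (my own code, not from any product, and the constants like the 6 dB level cap are rough assumptions): it places a mono signal at an azimuth using only an interaural time difference (a Woodworth-style approximation) and a broadband interaural level difference.

```python
import numpy as np

SR = 48000  # sample rate in Hz

def pan_itd_ild(mono, azimuth_deg, sr=SR, head_radius=0.0875, ild_db_max=6.0):
    """Place a mono signal at an azimuth using interaural time and
    level differences. Positive azimuth = source to the right.
    Rough model only: real ILDs are strongly frequency dependent."""
    az = np.radians(azimuth_deg)
    c = 343.0  # speed of sound in air, m/s
    # Woodworth ITD approximation: (r / c) * (az + sin az)
    itd = head_radius / c * (az + np.sin(az))  # seconds; positive -> right ear leads
    delay_samples = int(round(abs(itd) * sr))
    # Broadband ILD that grows with azimuth; positive -> right ear louder
    ild = ild_db_max * np.sin(az)  # dB
    g_left = 10 ** (-max(ild, 0.0) / 20)   # attenuate far (left) ear when source is right
    g_right = 10 ** (min(ild, 0.0) / 20)   # attenuate far (right) ear when source is left
    pad = np.zeros(delay_samples)
    if itd >= 0:  # source to the right: left-ear signal arrives later
        left = np.concatenate([pad, mono]) * g_left
        right = np.concatenate([mono, pad]) * g_right
    else:         # source to the left: right-ear signal arrives later
        left = np.concatenate([mono, pad]) * g_left
        right = np.concatenate([pad, mono]) * g_right
    return np.stack([left, right])  # shape (2, n): left row, right row
```

At 48 kHz and a ~8.75 cm head radius, a source hard right gives roughly a 0.65 ms lead at the right ear, i.e. about 31 samples, which is exactly the kind of cue a DSP can dial in or alter.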
We locate things mostly from frequency and delay cues shaped by the size and shape of our head and ears, and by where we're looking. Intuitively, then, to improve our impression of spatial localization for sound sources, the best method should involve measurements at our own ears and, ideally, head tracking, so the whole mental image doesn't break down every time we move a little. The more of those elements are involved, and the more they are customized to your own body, the more convincing the digital processing can be.
The basis behind sound localization:
https://en.wikipedia.org/wiki/Head-related_transfer_function
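In practice, applying an HRTF boils down to convolving the mono source with a measured left/right impulse-response pair (an HRIR) for the desired direction. A minimal sketch, assuming you already have a pair of HRIR arrays at the signal's sample rate (measured sets are commonly distributed as SOFA files):

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Render a mono source through a head-related impulse response pair.
    hrir_left / hrir_right would normally come from a measured HRTF set
    for one specific direction; here they are just plain arrays assumed
    to be at the same sample rate as the mono signal."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])  # shape (2, n): left row, right row
```

Each direction in a measured set has its own HRIR pair, so a renderer picks (or interpolates) the pair nearest the source's current direction and convolves with that.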
And something I stole from a PDF (sorry to the author, I couldn't find the reference) to show the leading concepts of how the sound is altered:
If you plan to procure some binaural microphones, or make your own from cheap capsules, here is what a fellow Head-Fier made and is sharing for free to simulate the sound of your speakers through your headphones:
https://www.head-fi.org/threads/recording-impulse-responses-for-speaker-virtualization.890719/
And the more advanced solution, with head tracking and all the bells and whistles, that costs an arm and a leg:
https://www.head-fi.org/threads/smyth-research-realiser-a16.807459/
In terms of non-customized HRTFs with head tracking, you can find a few products like the Audeze Mobius, or even the latest AirPods. The latter try to track your head movement relative to your phone or tablet (if I understood correctly), so the image wouldn't fall apart if you were on a train or just walking outside. And there are several other products doing more of one thing and less of another; the basic concepts obviously remain the same, because it's all for humans.
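The head-tracking part is conceptually simple: to keep a virtual source anchored in the room, you subtract the tracked head rotation from the source's world position before rendering. A toy sketch for the yaw axis only (my own illustration, not any product's code):

```python
def world_to_head_azimuth(source_az_deg, head_yaw_deg):
    """Keep a virtual source fixed in the room: subtract the head's yaw
    from the source's world azimuth, wrapping the result to (-180, 180].
    Positive angles = to the right."""
    rel = (source_az_deg - head_yaw_deg) % 360.0
    if rel > 180.0:
        rel -= 360.0
    return rel
```

So if a virtual speaker sits dead ahead and you turn your head 90 degrees to the right, the renderer now places it at -90 degrees (hard left), and the speaker feels like it stayed put. Tracking relative to the phone, as the AirPods reportedly do, just means `head_yaw_deg` is measured against the device instead of against the room.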
So again, fixing might be a big ask, but altering, and probably improving, the perceived imaging with DSP? Yes, we can! Some products do it pretty well. I'm a huge fan of the A16 myself and use it for hours almost every day.