If you measure the frequency response of a loudspeaker playing in front of you, using mics placed in your ears, and then EQ your headphones to produce that same frequency response at your ears, you should be able to make mono recordings have the same soundstage depth (out-of-head spatialization) as that mono speaker. To recreate the soundstage of stereo you also need to fiddle with crossfeed. A Smyth Realizer creates a very lifelike soundstage by digitally manipulating the stereo signal based on the measurements they make for you at the studio. It does more than EQ, but I think EQ can help with soundstage to the same extent that choosing different headphones can.
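The core of the EQ step is just subtraction: take the smoothed in-ear magnitude response you measured for the speaker and the one you measured for the headphones, and the difference (in dB) is the EQ curve to apply to the headphones. A minimal sketch, with purely illustrative array names and values (any real measurement would use a much denser frequency grid and smoothing):

```python
import numpy as np

# Hypothetical smoothed magnitude responses (in dB), both measured at the
# eardrum with the same in-ear mics: one for the speaker playing in the
# room, one for the headphones.
freqs = np.array([100.0, 1000.0, 3000.0, 10000.0])   # Hz (illustrative grid)
speaker_db = np.array([0.0, 0.0, 3.0, -2.0])          # speaker-at-ear response
headphone_db = np.array([0.0, 2.0, -1.0, -2.0])       # headphone-at-ear response

# EQ gain (dB) to apply to the headphones so that their in-ear response
# matches the speaker's: simply the difference of the two measurements.
eq_gain_db = speaker_db - headphone_db
print(eq_gain_db)  # gain to dial into each EQ band
```

In practice you would smooth both curves (e.g. 1/6-octave) before subtracting, or small measurement wiggles turn into audible EQ artifacts.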
That said, the method outlined in this thread doesn't really help with that; at its best it lets you balance out all frequencies to an extent not possible even in a real sound field, sort of an audio analog of High Dynamic Range photography. I think this method helps create the ultimate detailed sound field (where you can hear absolutely every instrument in just the right proportion, none masking the others), but spatial imaging isn't really dealt with here.
Edited by Joe Bloggs - 10/14/12 at 2:37am