If you play this back through, say, an iPhone or a FiiO X5 into the HD650s, the sound seems to collapse into a blob in the middle, because neither device can generate enough voltage swing for those headphones. At the other end of the scale, with a good amp you get a wide soundstage (or maybe "headstage") impression. As the quality of the gear improves, each instrument seems to occupy a more precise place in the stereo "space". That's what I've found.
Aside from that, the particular frequency response of a pair of headphones affects our perception of the soundstage, because our brain interprets the relative strength of different frequencies as cues to the distance of sounds. That's why, crazy as it may sound, you can talk about "soundstage" even with IEMs: you can fake the impression of it by tweaking the frequency response. It's a fascinating topic.
I'm going to have a shot at clearing up this soundstage issue once and for all.
I simply can't see how the X5, or for that matter an iPhone, would turn the soundstage of the HD650 into a "blob" (as you call it). That simply can't be, so I did the test today with an HD650. I deliberately used an iPod Nano with several different headphones, and there was no deleterious effect whatsoever on the soundstage. It was completely intact, with wide and spacious imaging, because the iPod actually has a very neutral and detailed-sounding DAC from what I can hear. That proves the point in my previous post: a detailed DAC and headphone will preserve the soundstage by default, so your example of how gear can impact soundstage doesn't make sense to me.
The only thing I can see being affected is the 'PRaT' of the headphone, because the iPhone or iPod doesn't have enough power to drive the HD650 properly, which affects Pace, Rhythm and Timing. That is what you were alluding to when you mentioned the powerful tube amp giving the "swing": it preserved the stereo phase of the frequencies and consequently the soundstage.

But I will reiterate, based on what I know: there is not a headphone on the planet that can give a soundstage beyond what is in the original song. One way a headphone might give a 'pseudo' impression of better soundstage than another is the tuning of the driver (as you mentioned regarding frequency response). By boosting everything from roughly the 3.5 kHz mark upward, for example, a manufacturer can create a false impression of 'air'. That is exactly what we do during the mastering of a music track with a high-shelf EQ. In headphone tuning, though, companies can to some extent inadvertently create an artificial or 'quasi' sense of better soundstage, and only in comparison to another headphone that is more veiled and lacks air in the top end because of its tuning.

Why do I say this? Because as a producer I listen intently, for many hours week after week, inside songs during production, labouring over the intricate details that build the soundstage into the song during what is called the mixing phase. I am intimately acquainted with what creates soundstage in any song, and I can tell you it is created solely by the use of "panning" and two effects plugins: delay (aka echo) and reverb. In the case of orchestral music it is the space in which the music is recorded, which again is really the reverb.
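To make the 'air' point concrete, here is a rough sketch of the kind of high-shelf boost I'm describing. It uses the well-known RBJ Audio EQ Cookbook high-shelf biquad formulas; the 3.5 kHz corner and +4 dB figure are just example values I've picked, not any manufacturer's actual tuning:

```python
import cmath
import math

def highshelf_response_db(f, fs=44100.0, f0=3500.0, gain_db=4.0, _example=True):
    """Magnitude response in dB at frequency f of an RBJ-cookbook
    high-shelf biquad (shelf slope S = 1). Example corner/gain values."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    cosw = math.cos(w0)
    alpha = math.sin(w0) / 2 * math.sqrt(2.0)       # S = 1
    k = 2 * math.sqrt(A) * alpha
    # Standard RBJ high-shelf coefficients
    b0 = A * ((A + 1) + (A - 1) * cosw + k)
    b1 = -2 * A * ((A - 1) + (A + 1) * cosw)
    b2 = A * ((A + 1) + (A - 1) * cosw - k)
    a0 = (A + 1) - (A - 1) * cosw + k
    a1 = 2 * ((A - 1) - (A + 1) * cosw)
    a2 = (A + 1) - (A - 1) * cosw - k
    # Evaluate H(z) on the unit circle at frequency f
    z = cmath.exp(-1j * 2 * math.pi * f / fs)
    h = (b0 + b1 * z + b2 * z * z) / (a0 + a1 * z + a2 * z * z)
    return 20 * math.log10(abs(h))

print(round(highshelf_response_db(100), 2))    # essentially flat below the shelf
print(round(highshelf_response_db(15000), 2))  # near the full boost up top
```

The midrange and bass stay untouched while everything above the corner gets lifted, which is exactly the false sense of 'air' I mean.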
Therefore, assuming a DAC is presenting at least 95% of the detail in the song, any headphone that reproduces that detail will by default have a superb soundstage, regardless of its cost.
Panning creates differentiation of the separate elements across the stereo spectrum, left to right, whereas reverb gives our senses the impression of an instrument sitting in a certain space (in the background, the foreground, or an open space like a large hall), providing a sense of depth to our ears. So panning plus reverb equals soundstage. Consequently, any headphone with a skewed frequency plot or EQ curve can adversely affect the soundstage inherent in a song merely by veiling subtle frequencies in the reverb used in production, which is what carries depth perception. But believe me: if a synth sound is panned hard left and hard right in the song (regardless of whether reverb is inserted on it), then any headphone will correctly convey 99% of that stereo spread, and thus present the correct soundstage by default, at least as far as the panning of elements in the stereo image is concerned.
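For anyone wondering what "panning" actually does under the hood, here is a minimal sketch of the equal-power (sin/cos) pan law that most DAWs use by default; the function name and the -1..+1 mapping are my own choices for illustration:

```python
import math

def equal_power_pan(sample, pan):
    """Place a mono sample in the stereo field.
    pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    The sin/cos law keeps total power (L^2 + R^2) constant, so
    perceived loudness doesn't change as a sound moves across."""
    theta = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return sample * math.cos(theta), sample * math.sin(theta)

l, r = equal_power_pan(1.0, -1.0)   # hard left: all signal in L
c_l, c_r = equal_power_pan(1.0, 0.0)  # centre: both channels at -3 dB
```

A sound panned dead centre sits at about -3 dB (1/sqrt(2)) in each channel, so the summed power still equals the hard-panned case — that constancy is why panned elements hold their place in the image on any competent playback chain.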
The only real variation in soundstage between headphones (and it is fairly mild anyway) comes from the frequency plot or EQ tuning of the headphone either veiling subtle details in the reverb, which reduces the soundstage, or boosting them, which increases it somewhat. In particular, a headphone's tuning can affect the subtle, bright elements of the decay tail inherent in the reverb, and with them the perception of soundstage.
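To show why that decay tail is so easy to veil, here is a back-of-the-envelope sketch (my own toy model, not any real reverb algorithm) of how the tail of a simple feedback delay decays:

```python
import math

def echo_tail_db(feedback_gain, n_repeats):
    """Level of the n-th repeat from a simple feedback delay (echo),
    in dB relative to the dry signal. Each trip around the feedback
    loop multiplies the signal by feedback_gain, so the tail
    decays exponentially."""
    return 20 * math.log10(feedback_gain ** n_repeats)

print(round(echo_tail_db(0.5, 1), 1))    # first repeat, dB below dry
print(round(echo_tail_db(0.5, 10), 1))   # tenth repeat, dB below dry
```

With a feedback gain of 0.5, every repeat drops another ~6 dB, and by the tenth the tail sits roughly 60 dB down — around the "RT60" point where a reverb tail is conventionally considered decayed. That is exactly the low-level territory a veiled top end can mask.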
Of course we could discuss other things, like what many Hip-Hop and EDM producers call "phatness": width and body in the sound elements used to produce the song. A headphone's EQ tuning will affect that too, but again it comes down to the EQ plot the headphone company implements, and it affects so-called "soundstage" to a much lesser degree.