[1] To my shock and annoyance, I can pick it up pretty consistently,
[2] 24bit had more dimension to the sound stage while 16bit was thinner. ... But to me 24/96 sounds fuller and smoother than 16/44.1, definitely.
... I'm hearing more depth, definitely
[3] I'm just shocked and annoyed that I believe I can hear a difference, since it's against my best interest.
1. Can you really though? As others have said, you need a controlled blind/double-blind test, because it's trivially easy to "hear" a difference if you know in advance which is which (see the ABX sketch after this list). I cover this more in #3 below.
2. As castleofargh stated, what you describe here is NOT what is actually happening (as explained in the OP); 24bit doesn't have more dimension to the sound stage, and is not fuller, smoother or deeper. The ONLY difference is that 16bit has a tiny bit of extra noise, which you won't be able to hear unless you listen at a playback level well above comfortable/reasonable, and even then only under certain circumstances (see the noise-floor sketch after this list).
3. This really is the heart of the matter and, indeed, at the heart of a fair bit of what goes on in the audiophile world.

On the one hand: generally no one wants to admit they've been fooled, are imagining things or are not hearing accurately (especially as critical listening is a prerequisite for audiophiles). In addition, the belief in what one hears/senses is strong; take the old cliché "seeing is believing", which persists despite countless optical illusions and despite everyone knowing that images can be, and routinely are, manipulated. For many audiophiles these two factors add up to an unassailable belief in what they are hearing and, being unassailable, anything/everything which disputes this belief must therefore be wrong; demonstrated facts, proven science, blind testing, objective measurements, everything, it doesn't matter!

On the other hand though: being fooled and imagining things, or more precisely, having what we think/believe we're hearing changed/affected by biases (and therefore being inaccurate), is not only NOT a bad thing, it's actually vital! For 500 years or so, virtually all western music has been based on bias, namely the manipulation of expectation bias: the expected continuity of rhythms, the expectation of chord and melodic progressions, and the resolution of dissonance. Without expectation bias affecting what we think we're hearing, it would be impossible to appreciate western music; it would all just sound like semi-random noise with absolutely no meaning or emotional impact. In other words, without bias-affected hearing, music doesn't exist!
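To make the blind-testing point in #1 concrete, here is a minimal ABX sketch in Python. The filenames a.wav and b.wav are hypothetical, the soundfile and sounddevice libraries are assumed to be installed, and this is only an illustration of the protocol, not a substitute for proper ABX software (e.g. foobar2000's ABX comparator):

```python
# Minimal ABX trial runner (sketch). Assumes a.wav and b.wav are the same
# recording at different resolutions, decoded to float and level-matched.
import random

import soundfile as sf     # pip install soundfile
import sounddevice as sd   # pip install sounddevice

def play(data, fs):
    sd.play(data, fs)  # starts playback...
    sd.wait()          # ...and blocks until it finishes

def abx(path_a, path_b, trials=16):
    a, fs = sf.read(path_a)
    b, _ = sf.read(path_b)
    correct = 0
    for n in range(1, trials + 1):
        x_is_a = random.random() < 0.5      # hidden assignment for X
        print(f"Trial {n}: playing A, then B, then X")
        play(a, fs)
        play(b, fs)
        play(a if x_is_a else b, fs)
        guess = input("Was X 'a' or 'b'? ").strip().lower()
        correct += (guess == ("a" if x_is_a else "b"))
    print(f"{correct}/{trials} correct")
    # For 16 trials, roughly 12+ correct is the p<0.05 threshold;
    # anything near 8/16 is indistinguishable from guessing.

if __name__ == "__main__":
    abx("a.wav", "b.wav")
```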
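And to put a number on the "tiny bit of extra noise" in #2: the noise floor of an ideal N-bit quantizer sits about 6.02·N + 1.76 dB below a full-scale sine. A quick sketch (this is the standard textbook formula, nothing specific to this thread):

```python
# Theoretical SNR of an ideal N-bit quantizer relative to a full-scale
# sine wave: SNR ~ 6.02*N + 1.76 dB (standard quantization-noise result).
def quantization_snr_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(f"{bits}-bit: noise floor ~{quantization_snr_db(bits):.0f} dB below full scale")

# 16-bit: ~98 dB, 24-bit: ~146 dB. Both are far below the level of any
# music; the difference only matters if you crank the gain well past any
# comfortable/reasonable listening level.
```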
Thanks, I was curious to see if that consensus had changed at all since the original post in 2009. Not with regard to the science of the file formats, as that's well established, but more so with the hardware, which has evolved. I was unsure whether hardware responds differently to different file resolutions.
The hardware has evolved; in general (as with just about all modern technology), it's become better/more accurate, or cheaper, or both. There are generally fewer incompetent digital audio devices, audibly transparent/perfect digital reproduction is even cheaper, the specifications of expensive devices have improved (though not audibly), and the issue of some devices operating better at higher sample rates, which was occasionally the case 10-15 years ago (to save money or through incompetence), is far less common today. What you believe you're hearing (a difference between 16/44 and 24/96) is possible, for example due to some seriously dodgy conversion software, but that was relatively unlikely even 10-15 years ago. So unless you're using conversion software that's ancient (and one of the dodgy ancient ones) and/or have made a serious error in the conversion settings, we can pretty much rule this out as a possibility. For reference, a competent conversion is nothing exotic, as the sketch below shows.
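A competent 24/96 → 16/44.1 conversion is just a band-limited resample plus dither. Here's a minimal sketch using numpy/scipy (resample_poly plus simple TPDF dither); the function name and parameter choices are illustrative, not a recommendation of any particular tool:

```python
# Sketch of a competent 24/96 -> 16/44.1 downconversion: polyphase
# resampling followed by TPDF dither and quantization to 16 bits.
import numpy as np
from scipy.signal import resample_poly

def downconvert(x_96k: np.ndarray) -> np.ndarray:
    """x_96k: float samples in [-1.0, 1.0] at 96 kHz; returns int16 at 44.1 kHz."""
    # 44100/96000 reduces to 147/320; resample_poly low-pass filters
    # internally, so ultrasonic content can't alias into the audio band.
    x_44k = resample_poly(x_96k, up=147, down=320)

    # TPDF dither at +/-1 LSB of 16 bit decorrelates the quantization
    # error, turning it into the benign, constant noise floor described above.
    lsb = 1.0 / 32768.0
    dither = (np.random.uniform(-0.5, 0.5, x_44k.shape)
              + np.random.uniform(-0.5, 0.5, x_44k.shape)) * lsb
    quantized = np.round((x_44k + dither) * 32767.0)
    return np.clip(quantized, -32768, 32767).astype(np.int16)
```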
G