Hmmm. My hunch is that Spotify is not delivering a true 320 kbps.
I'm not tech savvy enough to test this myself. It would be great if someone who is could have a look.
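For anyone who does want to test: one common first check is to look at where the spectrum collapses. A genuine 320 kbps stream usually keeps content up to roughly 20 kHz, while a lower real bitrate or a re-transcode often shows a hard shelf around 16 kHz. A minimal sketch of that check, assuming you have already captured and decoded both streams to raw samples (here I just use synthetic noise so the idea is self-contained; the 16 kHz figure and the -60 dB threshold are my assumptions, not measured Spotify behaviour):

```python
import numpy as np

def spectral_cutoff(signal, sr, drop_db=60.0):
    """Estimate the frequency above which spectral energy collapses.

    Heuristic only: a shelf near 16 kHz *suggests* a bandwidth-limited
    encode, it does not prove the advertised bitrate is wrong.
    """
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    db = 20 * np.log10(spectrum / spectrum.max() + 1e-12)
    above = np.where(db > -drop_db)[0]
    return freqs[above[-1]] if len(above) else 0.0

# Demo with synthetic signals instead of real stream captures:
sr = 44100
rng = np.random.default_rng(0)
wideband = rng.standard_normal(1 << 16)

# Crude brick-wall low-pass at 16 kHz to mimic a bandwidth-limited encode
spec = np.fft.rfft(wideband)
freqs = np.fft.rfftfreq(len(wideband), d=1.0 / sr)
spec[freqs > 16000] = 0
limited = np.fft.irfft(spec, n=len(wideband))

print(spectral_cutoff(wideband, sr))  # near Nyquist, ~22 kHz
print(spectral_cutoff(limited, sr))   # near the 16 kHz shelf
```

With a real capture you would feed the decoded PCM in instead of the noise; the function itself does not care where the samples came from.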
What exactly is missing? Do you have a tune in mind, and could you please explain what we should listen for in that tune that will give us a hint as to what you mean?
I used Spotify, as usual, yesterday to find new tunes. I came across "Rain From Heaven" by Eric Paslay. I actually played that beauty close to 20 times, both on Spotty and on Tidal, with both set to maximum sound quality. For every single instrument there are differences.
The guitar at the beginning, the string one to the right, loses a lot of its higher-pitched overtones. Also, the bass part of that guitar is almost lost on Spotty; I had to listen very carefully to tell whether it was there at all. It is, just not how it should be. That guitar rendering is wrong and out of tune. Much of the guitar rendering is simply not there with Spotify.
As usual, the finer after-effects, like echo, and subtle effects in the imaging width apply to all instruments. There is an electric guitar used in this tune, on the far left, and a piano sound. On Spotty, the rendering lacks articulation; there is more black silence surrounding these instruments, and the imaging is more pinpoint.
There is also a clear lack of attack, across the entire frequency range. It is clearly heard on the cymbals and the guitar. The Tidal reproduction is harsher for instruments that need it.
The tonality is different. Due to the lack of finer details and articulation, different instruments get the focus as the tune progresses. The subtle loss of environmental effects results in more silence between the instruments and vocals. Also, the lack of attack alters the perception of the beat, and of what makes the beat.
I could go on like this for quite some time. In the end, a pattern of less articulation seems to describe the difference best, and it seems to be the root of the differences. And no, volume has nothing to do with it, simply because the relation and emphasis between instruments and voices is changed; the tonality is off. Adjusting the volume for the main voice makes other parts of the rendering wrong by volume.
And how do I volume-adjust the bass or the drums when, on Spotify, they lack articulation and thus sound different?
The listening was done using my PC as a source, listening on the Oppo HA-1, connected by AQ Coffee USB, Heimdal2, and HD800. This will of course make people jump at me, but I choose to leave it as it is, as this is about the perceived difference, not about whether there is one, because that is a given (hint: ASIO and Windows).
So, it would be nice to get a sample tune and a description of what the difference is.
I can listen to the same tune, replicating your settings, and then compare notes.
Also, Spotify might simply add an enhancement filter to their output. Just comparing outputs is not proof of much on its own. The encoding algorithm might also differ, as might the source material used for the transcoding. If there is a difference, it needs to be understood before any meaningful claim can be made. Objective data has the annoying trait that it needs to be given meaning to have one.
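One way to separate "just a level or filter difference" from "real content difference" is a gain-matched null test: scale one capture to best fit the other, subtract, and look at the residual. A tiny sketch of the idea, using synthetic tones rather than real stream captures (the signals and thresholds here are illustrative assumptions, not measurements of either service):

```python
import numpy as np

def residual_db(a, b):
    """Gain-match b to a (least-squares scalar gain) and return the
    residual level in dB relative to a. A very low value means the two
    signals null out, i.e. they differ only in volume; a higher value
    means something beyond level differs."""
    g = np.dot(a, b) / np.dot(b, b)   # optimal scalar gain
    resid = a - g * b
    return 10 * np.log10(np.sum(resid ** 2) / np.sum(a ** 2) + 1e-20)

sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)

same_louder = 1.5 * tone                              # only volume differs
changed = tone - 0.2 * np.sin(2 * np.pi * 880 * t)    # content differs

print(residual_db(tone, same_louder))  # deeply negative: nulls out
print(residual_db(tone, changed))      # much higher: a real difference
```

Real captures would also need sample-accurate time alignment before subtracting, which this sketch skips; but it shows why "it's just volume" is a testable claim rather than an opinion.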