Music Alchemist
Pokémon trainer of headphones
What you're describing doesn't reflect how lossy encoding works. There isn't an overall distortion at medium bitrates. It's momentary artifacting in specific parts of the music, caused by the encoder having too few bits to represent that passage accurately. Either a passage can be encoded cleanly or it goes splat for a second. As you raise the bitrate, the artifacts become fewer and fewer until they disappear entirely.
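If you want to hear this for yourself, something along these lines will do it. This is only a rough sketch: it assumes ffmpeg with the libmp3lame encoder is installed, and "original.wav" is a placeholder for whatever lossless track you're testing. It just produces the same track at a range of bitrates so you can listen for where the momentary artifacts stop being audible.

```python
# Encode one lossless file at several MP3 bitrates with ffmpeg so the same
# passages can be compared for audible artifacts.
# Assumes ffmpeg with libmp3lame on the PATH; "original.wav" is a placeholder.
import subprocess

SOURCE = "original.wav"

for kbps in (96, 128, 192, 256, 320):
    out = f"test_{kbps}k.mp3"
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE, "-c:a", "libmp3lame", "-b:a", f"{kbps}k", out],
        check=True,
    )
    print(f"Encoded {out}")
```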
Overall coloration of the sound or broadly higher distortion at high bitrates doesn't sound like lossy artifacting. It sounds more like expectation bias, imperfect level matching, too much time between samples (auditory memory is short), or some sort of distortion added by the equipment itself.
It is much better to match levels using measurements. You can get a ballpark idea by balancing by ear, but if the differences seem subtle to you, it's entirely possible that the levels aren't balanced perfectly and there really isn't a difference at all. That's how bias works. Everyone is subject to it.
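This is the kind of measurement I mean. A minimal sketch, assuming the numpy and soundfile Python packages; the file names are placeholders, and you'd decode the lossy copy back to WAV before measuring. It reports the RMS level of each file in dBFS and the gain offset you'd need to apply to match them.

```python
# Rough RMS level check for two WAV files, to see how far apart their levels are.
# Assumes numpy and soundfile are installed; file names are placeholders.
import numpy as np
import soundfile as sf

def rms_dbfs(path):
    data, _ = sf.read(path, dtype="float64")
    if data.ndim > 1:              # mix channels down for a single figure
        data = data.mean(axis=1)
    rms = np.sqrt(np.mean(data ** 2))
    return 20 * np.log10(rms)

a = rms_dbfs("original.wav")
b = rms_dbfs("lossy_decoded.wav")
print(f"original: {a:.2f} dBFS, lossy: {b:.2f} dBFS, offset: {a - b:.2f} dB")
```

If the offset is more than a small fraction of a dB, the louder file will tend to sound "better" regardless of what the encoder did.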
I don't know about other encoding programs, but iTunes drops the volume by a dB or two when it encodes. I think it's trying to prevent clipping in hot-mastered recordings.
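One rough way to check whether your own encoder does this (again just a sketch, assuming ffmpeg is installed and using placeholder file names) is to compare ffmpeg's volumedetect readout for the original and the encoded copy:

```python
# Run ffmpeg's volumedetect filter on a file and pull out the reported
# mean_volume / max_volume figures, which volumedetect prints to stderr.
# Assumes ffmpeg is installed; file names are placeholders.
import re
import subprocess

def volume_stats(path):
    proc = subprocess.run(
        ["ffmpeg", "-i", path, "-af", "volumedetect", "-f", "null", "-"],
        capture_output=True, text=True,
    )
    # e.g. "mean_volume: -17.8 dB" / "max_volume: -1.2 dB"
    return dict(re.findall(r"(mean_volume|max_volume): (-?[\d.]+) dB", proc.stderr))

print("original:", volume_stats("original.wav"))
print("encoded: ", volume_stats("encoded.m4a"))
```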
That's partially what I meant; sorry if my wording wasn't specific enough. I perceived differences in specific parts, as well as overall in some songs with repetitive sections: dull/muddy/harsh versus (comparatively) clear and punchy, for example. I don't see how that would be caused by the things you listed.
However, if you go to much lower bitrates, there is more noticeable distortion everywhere.
I use dBpoweramp. (Forget iTunes!)