in an effort to avoid at least some pointless discussions: nobody is saying that one cannot or should not use hires files if they want to. we all have some amount of freedom and can use it for such decisions. this topic is about bit depth, and while in practice sample rate and bit depth go together in a number of ways, bringing sample rate into the conversation only complicates things. so does comparing typical hires files against lower resolution ones, as whatever comes out of that is not specific to bit depth. we have other threads to go crazy over hires files in general. the first post is pretty clear about the intent of this topic: the conversation should be about bit depth and its benefits (increasing the fidelity of the encoding, pushing the quantization noise down, etc.), and then about when those changes are audible for us, or when they're expected to be audible for non mutated humans listening at sensible levels to correctly created albums.

now about the subjective benefits. because we're in this section, we do expect statements of audibility to be backed up with supporting evidence. even more so when we happen to have tested this for ourselves under controlled conditions and have consistently failed to pass a blind test between 16 and 24bit with our favorite tracks at normal to loud listening levels. that makes us all the more eager to see evidence from those who say they notice a clear difference. maybe it's about hearing abilities, maybe it's about listening skills, maybe it's about the equipment. but maybe it's made up stuff in the mind of a listener who never bothered to test his hearing properly. we could stop wasting so much time and effort if that last possibility was ruled out by the people themselves before coming here to spam overconfident claims based on garbage testing methods.

it is a fact that 16bit is more than necessary under most circumstances. the debate only concerns niche cases, and those who say otherwise are wrong. that much has been well established by decades of trials, and I'm still waiting to see legitimate research suggesting otherwise. looking at my own listening habits and environments, 12bit dithered is all I seem to need, and most people seem to have a hard time hearing differences beyond 13 or 14bit while listening to music at levels that aren't stupidly high. I would not say the same of 8 or 6bit, where I could clearly notice at least the background hiss when testing music at those values.

it's clearly a matter of magnitude. of course, if the same track was adjusted to peak at -20dB instead of, say, 0dB, and I raised my listening level by +20dB to get back to the same typical loudness, those 20dB would have to be added on top of the lowest bit depth I need for transparency (a bit more than 3 extra bits, since each bit is worth roughly 6dB). so the question becomes: how often does that happen? and for me the answer is never! it's not the case for everybody, but it is for me.
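anyone curious where their own threshold sits can make test files instead of arguing about it. this is only a rough python sketch of the idea (numpy assumed, file names made up), not a reference dithering tool: it knocks a track down to N bits with simple TPDF dither so you can ABX it against the original at your usual listening level.

```python
import numpy as np

def requantize(x, bits, dither=True):
    # x: float signal in [-1, 1]; returns the same signal reduced to `bits`
    # of resolution, with TPDF dither so the error becomes benign hiss
    # instead of correlated distortion.
    q = 2.0 ** (bits - 1)                      # quantization steps per unit
    if dither:
        lsb = 1.0 / q
        x = x + (np.random.uniform(-0.5, 0.5, x.shape) +
                 np.random.uniform(-0.5, 0.5, x.shape)) * lsb   # TPDF, +/- 1 LSB
    return np.clip(np.round(x * q) / q, -1.0, 1.0)

# rule of thumb: each bit is worth about 6 dB of dynamic range
# (ideal full-scale sine SNR ~ 6.02*bits + 1.76 dB, before dither)
for bits in (16, 14, 12, 8, 6):
    print(f"{bits:>2} bit -> ~{6.02 * bits + 1.76:.0f} dB")

# a track peaking at -20 dBFS instead of 0 dBFS leaves ~20/6.02 ~ 3.3 bits unused,
# so "12 bit is enough" on a full-scale track becomes roughly 15-16 bit for that one
print(f"-20 dB of unused headroom ~ {20 / 6.02:.1f} bits")

# to make files for an ABX test (soundfile is only one option, any loader works):
#   import soundfile as sf
#   x, sr = sf.read("some_track.flac")        # floats in [-1, 1]
#   sf.write("some_track_12bit.wav", requantize(x, 12), sr, subtype="PCM_16")
```

dither rather than plain truncation on purpose: truncation distortion is easier to spot and would bias the test, while dithered requantization just raises the noise floor, which is the thing whose audibility we're actually debating.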
I do have classical music with quiet passages that are stupidly quiet compared to the rest of the symphony, but the rest is way too loud for me to just crank up the volume of the entire piece based on that quietest part. which leaves me with 2 options:
-I leave the volume knob where it usually is and can't hear crap during the super quiet passages. <= usually what happens, and why I stop listening to those particular albums.
-I become a human compressor, raising the gain on quiet passages and rushing to lower it again when it gets loud. <= I hate that because it means enduring overly loud music for at least a moment, so in the long run I would stop listening to those albums too.

conclusion: my listening habits include no circumstances where I could audibly benefit from more than 16bit. my DAC measures better when I send it a 24bit signal for some reason, so I send it 16bit albums zero-padded to 24bit (sketched at the bottom of this post) and everybody's happy. I pay for the cheaper files, I hear the same thing, my DAC measures pretty well. I'm objectively and subjectively satisfied.

a different listener with different habits and priorities may regularly run into moments where 12bit isn't transparent, even though that's ultra rare for me. but I would argue that only very few people, on very rare occasions, end up with music that sounds audibly different because it has more than 16bit. and among those, probably more than half are hearing differences that have nothing to do with higher fidelity: it's often the master being different, or the playback gear doing some crap when fed a particular resolution. as for the legitimate cases remaining, where audibility actually correlates with the quantization noise going down and bit depth is the relevant factor, I would be surprised if we could find a dozen on the entire forum. and I'm confident that all of them either listen to music too loud or created the circumstances for an audible difference, purposefully or by malpractice (like leaving the digital volume on the computer at -80dB and compensating with the amp, or whatever). I'm very confident about that, and after all these years hanging around audiophiles, I have yet to see one solid counterexample.

the hundreds or thousands of people who "know what they're hearing" under sighted conditions might contain such counterexamples. I can't know that when they never demonstrate their abilities. to me they're no different from guys saying they have seen flying saucers from mars: some could be correct, but in the absence of a proper demonstration, we all save time by treating the whole group as making stuff up. it's just the most pragmatic conclusion. and considering this is the "sound science" section, no scientific research would draw conclusions from knowing a guy who claims he can do it. facts are demonstrated; they're not acts of faith. and since we happen to be on the web, taking random statements as facts without supporting evidence is just gullibility. of course we don't want that, it's the internet!
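and since I mentioned the 24bit padding: it adds no information whatsoever, it's just eight zero bits tacked onto each 16bit sample so the DAC sees a 24bit stream. a minimal sketch of what I mean, numpy assumed, sample values picked arbitrarily:

```python
import numpy as np

# a handful of arbitrary 16 bit samples, including the extremes
samples_16 = np.array([0, 1, -1, 12345, -32768, 32767], dtype=np.int16)

# widen first so the shift can't overflow, then shift left by 8 bits:
# same values on the 24 bit scale, with the low byte all zeros
samples_24 = samples_16.astype(np.int32) << 8

print(samples_24)                                # [0, 256, -256, 3160320, -8388608, 8388352]
print(np.all((samples_24 >> 8) == samples_16))   # True: the round trip is lossless
```

so the padded stream carries bit-identical audio content; the only thing that changes is how my particular DAC happens to behave when it receives 24bit input.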