With shaped dither that's 200 dB of dynamic range! Rocket launches and heartbeats at natural levels in the same recording! With efficient horn speakers all it takes is a few gigawatts of power to play such recordings and kill humans and animals within 100 yards. Yeah, definitely 32 bit music is needed!
It's just silly how people who make these outlandish claims that anything beyond CD quality makes an audible difference can never prove it.
In theory 16 bits is enough, but what about the real world? A bad 16-bit transfer from a 24-bit master, whether through incompetence or done intentionally to get HD versions sold. Has anyone done blind tests or analyses? Edit: Sorry, looks like this is discussed here already.
To go from a 24-bit file to a 16-bit file, you have nothing to do but decide if, and which type of, dither you wish to use (a choice that under most conditions you will not notice as sounding any different). Anytime a hi-res version is made to sound different from the CD release, of course it is intentional, be it malpractice or simply that they created a different master because somebody asked for it.
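As a rough illustration of that bit-depth reduction step, here's a minimal Python sketch of a 24-bit to 16-bit conversion with TPDF dither. It assumes the audio is a NumPy float array scaled to ±1.0; the function name is mine, not something from the posts above.

```python
import numpy as np

def to_16bit_tpdf(x, rng=np.random.default_rng(0)):
    # One 16-bit step (LSB) expressed in the +/-1.0 float domain
    lsb = 1.0 / 32768.0
    # TPDF dither: the sum of two independent uniform noises, spanning +/-1 LSB
    dither = (rng.uniform(-0.5, 0.5, x.shape) +
              rng.uniform(-0.5, 0.5, x.shape)) * lsb
    # Add the dither, then round to the nearest 16-bit step
    q = np.round((x + dither) / lsb) * lsb
    return np.clip(q, -1.0, 1.0 - lsb)
```

The dither randomises the rounding error, so the result stays linear down to the noise floor instead of picking up correlated quantisation distortion.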
32 bit probably wouldn't need dither - around 194 dB peak is the maximum an undistorted sound wave can theoretically have in the Earth's atmosphere, and 32 bit already gives you 192 dB - not counting intersample peaks.
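For anyone who wants to check those two figures, here's a back-of-envelope sketch (values rounded; the ~194 dB limit comes from the peak pressure of an undistorted wave being capped at roughly one atmosphere):

```python
import math

p_ref = 20e-6        # Pa, reference pressure for dB SPL
p_atm = 101325.0     # Pa, one standard atmosphere

# Largest undistorted peak pressure a sound wave in air can have is ~1 atm
print(20 * math.log10(p_atm / p_ref))   # ~194.1 dB SPL (peak)

# Range covered by 32-bit integer samples
print(20 * math.log10(2 ** 32))         # ~192.7 dB
```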
You can also puzzle out the numbers and see that 16 is plenty. With dither, a CD can do 90dB of dynamic range. Your listening conditions probably have a noise floor above 30dB, so to hear the full range of a CD, you would have to boost the level of the quietest sound above that noise floor, bringing the peaks to at least 120dB. Coincidentally, 120dB is the threshold of pain and listening to sound that loud can cause hearing damage. In truth, 12 bit sound is probably enough. For more info see the article in my sig called CD Sound Is All You Need.
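A quick sanity check of that arithmetic (the dB figures are the post's assumptions, not measurements):

```python
dithered_cd_range = 90    # dB, usable range of a dithered CD
room_noise_floor = 30     # dB SPL, typical quiet listening room

# To lift the quietest detail above the room noise, peaks land at:
print(room_noise_floor + dithered_cd_range)   # 120 dB SPL, around the pain threshold

# 12-bit dynamic range, using the usual 6.02*N + 1.76 dB rule of thumb
print(6.02 * 12 + 1.76)                       # ~74 dB
```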
1. Not plenty enough for those who think more is (always) better. 16 bits is plenty for those who understand digital audio or believe those who know this stuff.

2. TPDF dither gives 95 dB of theoretical dynamic range, 3 dB less than just truncation error without dither (98 dB), but for sacrificing 3 dB of the dynamic range we get rid of distortion in the signal (we have total linearity). Using shaped dither we can have 20 dB (!) more perceptual dynamic range. Signals decay into the noise floor the way they do in analog audio until completely masked by the dither noise (but of course you need CRAZY volume settings to hear quiet things at signal level -110…-120 dBFS).

3. Noise floor at 30 dB and peaks of music at 110 dB or less (sane listening that doesn't make you lose your hearing) means 80 dB or less of dynamic range needed. That translates into 13 bits, but for "DR6 pop" of today, 8 bits with shaped dither would be just fine! By comparison, vinyl audio has "10 bits" worth of dynamic range at best.
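The dB-to-bits conversion in point 3 is just "about 6 dB per bit"; a tiny sketch of that rule of thumb (function name is mine):

```python
def bits_needed(dynamic_range_db):
    # Each bit adds roughly 6.02 dB of dynamic range
    return dynamic_range_db / 6.02

print(bits_needed(80))   # ~13.3 -> the "13 bits" for a 30 dB floor and 110 dB peaks
print(bits_needed(60))   # ~10   -> roughly vinyl's best case
```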
Ian Sheppard has posted a very good video, consistent with the OP. His demonstration, by reversing polarity, that the only difference between dithered 8 bits and 24 bits is noise is pure genius.
To be fair, that reverse polarity test (called a "Null Test") is the first difference test taught to new audio engineering students and is used by almost all professional engineers on an almost daily basis. I've advocated its use on numerous occasions here on head-fi; it's quick, easy, completely reliable, accurate, entirely objective and doesn't cost anything (using free software). It's hardly ever even mentioned in the audiophile world though and you're free to draw your own conclusions as to why! G
Yes, but using the null test, along with music samples, as a simple demonstration that 8 bits has identical resolution to 24 bits gets that message across very effectively.
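For anyone who wants to try the null test themselves, here's a rough Python sketch. It assumes two time-aligned, equal-length WAV files and the 'soundfile' package, and the file names are just placeholders.

```python
import numpy as np
import soundfile as sf

a, sr_a = sf.read("version_24bit.wav")           # placeholder file name
b, sr_b = sf.read("version_8bit_dithered.wav")   # placeholder file name
assert sr_a == sr_b and a.shape == b.shape

# Subtracting is the same as inverting one file's polarity and summing
residual = a - b
rms = np.sqrt(np.mean(residual ** 2))
print("residual level:", 20 * np.log10(rms + 1e-12), "dBFS")

# Listen to what's left: if the only difference is dither, it's plain noise
sf.write("residual.wav", residual, sr_a)
```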
I purchased the same song from iTunes (the MFiT version), 24/96, 24/44 and 16/44 FLAC from Qobuz, and a LAME MP3 from Google Play Music. What I noticed is that the perceived sound quality differs based on the equipment used:

With Bluetooth (Oneplus Wireless 2 - aptX HD): 24/44 > 16/44 > MFiT > 24/96 > MP3
With Brainwavz B200 + iBasso DC01 + Comply Audio Pro: 24/44 > 24/96 > 16/44 > MFiT > MP3

In all cases, I feel 24/44 sounds better than even 24/96. Don't know why. 24/96 sounds as if it has some noise at higher frequencies (not clear).
1. We have to be careful with statements like this. Have you ruled out the other possibilities? For example, are you certain they're all exactly the same master? As MFiT means "Mastered for iTunes", it's very possible they're slightly different masters.

1a. With the Oneplus, you're not really comparing 24/44, 16/44, 24/96 and MP3; you're comparing a lossy, 576kbps codec derived from those original sample rates/bit depths. The aptX HD codec should be entirely transparent but again, we need to rule out the other possibilities before we can state that quality differences are due to something else (equipment differences). Having mentioned these possibilities, it's most probable there is an audible difference between the equipment. It's unlikely the two different IEMs have the same frequency response and those differences are almost certainly above the threshold of audibility. Additionally, it's also likely that your IEMs have different sensitivity and therefore the difference may not be a difference in quality but just a difference in volume.

2. It's possible there is some ultrasonic content (>21kHz) in the 24/96 version that is causing IMD (Inter-Modulation Distortion) in your amp sections or headphones (which obviously doesn't exist in the 24/44 version). It's also possible there is no audible difference and that what "you feel" is just a trick of your perception. There are other possibilities as well though, for example: slightly different masters again, a slight volume change in the conversion process or even that a resampling filter has been chosen that starts rolling-off at a relatively low frequency.

G
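On the volume point: before trusting any comparison like this, it's worth measuring the overall level of each decoded file, since a fraction of a dB is enough to make the louder one "sound better". A rough sketch (assumes the 'soundfile' package; file names are placeholders):

```python
import numpy as np
import soundfile as sf

def rms_dbfs(path):
    # Overall RMS level of the whole file, in dB relative to full scale
    x, _ = sf.read(path)
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

# Level difference between two versions of the same track
print(rms_dbfs("track_24_44.flac") - rms_dbfs("track_24_96.flac"))
```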