24-bit vs 16-bit at a common sample rate - difference?
Mar 8, 2011 at 10:43 AM Thread Starter Post #1 of 20

k00zk0

Head-Fier
I need this answer before I go on with some research. Since you're incredibly knowledgeable and have the systems to personally confirm this, please share what you know of the current technical research, and/or your opinion.
 
24-bit, 44.1 kHz
-- vs --
16-bit, 44.1 kHz
 
Taking a studio master at 24-bit and dithering it to 16-bit. So, the same audio.
 
As far as I understand, the DAC can perfectly reconstruct the waveform regardless. Or are there differences? Surely there will be some if you check with an oscilloscope, but if a peak or waveform height is 1/128000th of the signal maximum off from where it was on the 24-bit master, is it audible, or does it contribute distortion when the DAC reconstructs it as such?
 
Mar 8, 2011 at 10:55 AM Post #2 of 20
There is a recent thread on this forum that touches on this - do a search for "Gizmodo" and it should pop right up.
 
My attempt to summarize the answer to your question:
 
1. There is no audible difference between 24-bit audio at a given sample rate and the same audio reduced to 16-bit with proper dithering applied. Proper dithering is neither hard to do nor rare to find. (It's my personal opinion that dithering isn't absolutely required in most real-world situations, but that's for another discussion.)
 
2. There will be a measurable difference between 16-bit audio and 24-bit audio, if for no other reason than that the former is missing at least 8 bits from the latter. That's indisputable. Measurable differences and audible differences are two different things.
 
Mar 8, 2011 at 11:12 AM Post #3 of 20
The Gizmodo article on Apple is exactly where I'm coming from.
 
Thank you, I was sure of that.
 
I am actually very surprised 8-bit (set in foobar output options) sounds as good as it does.
 
Correct me if I'm wrong: the only difference is added noise. No signal is really lost, since magnitude over frequency is preserved for ALL sounds unless their magnitude falls below the noise floor that 8-bit creates, in which case they'd be obscured by the hiss.
 
Just a side question: why does setting dither to true, along with the 8-bit setting, sound much worse than no dithering? There is a high-pitched hum and tinniness to the audio. Is this just my specific setup? I assume my soundcard is dithering it better than foobar is?
 
Mar 8, 2011 at 11:56 AM Post #4 of 20
To your first question - correct. The effect of reducing the bit depth (forget dithering completely for a second) will, all other things held equal, introduce some noise as a product of the quantization process. You can think of the resulting 16-bit signal as the sum of the original (24-bit) signal and a small signal representing the error from whenever the original had to round up or down from a 24-bit value to a 16-bit value. So yes, all you're doing is adding noise. Without dither, this error is correlated with the signal and can sound more like low-level distortion than hiss, so dithering is introduced to "blur" the error and make it theoretically less audible. To use a horrible analogy, if you smell brownies once a month, the smell will be stronger than after you get used to living in a brownie factory.
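 
To make that concrete, here's a minimal sketch (my own, not anything from this thread) of the quantization step in NumPy: it reduces a float signal to 16 bits with and without TPDF dither and reports the RMS level of the error that gets added.

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
signal = 0.5 * np.sin(2 * np.pi * 1000 * t)      # stand-in for the higher-res master

def quantize_16bit(x, dither=False):
    scale = 2 ** 15                               # 16-bit signed full scale
    y = x * scale
    if dither:
        # TPDF dither: sum of two uniform random values, +/-1 LSB peak
        y = y + (np.random.rand(len(y)) - np.random.rand(len(y)))
    return np.round(y) / scale

for use_dither in (False, True):
    err = quantize_16bit(signal, use_dither) - signal
    rms_db = 20 * np.log10(np.sqrt(np.mean(err ** 2)))
    print(f"dither={use_dither}: added error is about {rms_db:.0f} dBFS RMS")
# Either way the error sits roughly 96-101 dB below full scale; dither just
# trades a tiny bit of level for error that is uncorrelated with the signal.
```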
 
The two other things to consider, in my opinion, are 1) could one ever hear this error at normal listening levels and 2) might the source material ALREADY have enough noise or randomness that this noise is already going to be masked or obliterated by more pronounced noise from tape hiss, microphone preamps, etc.?  I think those two considerations pretty much eliminate any possible advantage of LISTENING to audio at 24-bits.  When you're TRACKING audio, looking ahead to hours of digital zooming, mangling, effecting, cutting, pasting, etc., then it's a different story.
 
As far as the foobar thing goes, I'm afraid I have no idea.  Others might know better there.
 
Mar 8, 2011 at 12:18 PM Post #5 of 20
Quote:
Measurable differences and audible differences are two different things.

Exactly. And even more important, which almost nobody ever mentions, is that the noise floor of most recordings is much louder than the inherent noise floor of "only" 16 bits. Stick a microphone in a normal room, or even a quiet professional recording studio. Then adjust the preamp gain to a level suitable for an acoustic guitar or singer or some other typical acoustic source. Then read the VU meter when nobody is singing or playing. You'll be lucky to have 60 dB s/n, and really lucky to have 70 - versus 96 dB s/n for 16 bits.
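 
To put rough numbers on that (a back-of-envelope sketch of my own, not Ethan's measurement): N-bit PCM gives roughly 6 dB of range per bit, and you can estimate a recording's own noise floor from a stretch where nobody is playing.

```python
import numpy as np

# Theoretical range of N-bit PCM: 20*log10(2**N), i.e. ~6 dB per bit
for bits in (16, 24):
    print(f"{bits}-bit: {20 * np.log10(2 ** bits):.0f} dB")   # ~96 dB, ~144 dB

def noise_floor_db(silence):
    """RMS level of a passage where nothing is playing, relative to full scale."""
    rms = np.sqrt(np.mean(np.asarray(silence, dtype=float) ** 2))
    return 20 * np.log10(rms)

# Fake "room tone" sitting at -65 dBFS already swamps the 16-bit floor
room_tone = np.random.randn(44100) * 10 ** (-65 / 20)
print(f"measured noise floor: {noise_floor_db(room_tone):.0f} dBFS")
```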
 
If I can editorialize even further, it cracks me up when people embrace the old days of analog tape, which is typically worse than 70 dB s/n - about 13 bits at best - but for some reason consider 16 bits for digital recording not enough.
 
--Ethan
 
Mar 8, 2011 at 1:41 PM Post #6 of 20
I largely agree with the answers posted above. I would add, however, that if you're going to be doing any processing of the audio (versus simply sending the original bit stream to the DAC for listening), then 24-bit can have an audible advantage in certain circumstances.
 
For example, say you have two versions of a classical recording with a wide dynamic range--one in 16-bit and the other in 24-bit--and one of the tracks is very soft and you want to normalize or "volume level" just that one soft track to put it on your iPod. That might require 15+ dB of gain. In such an instance, the 24-bit recording may yield an audibly better result, as it may have a lower noise floor.
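 
A hypothetical illustration of that point, using the idealized quantization floors rather than any particular recording:

```python
# Apply ~15 dB of make-up gain and see where each format's floor ends up.
gain_db = 15
for bits, floor_db in ((16, -96), (24, -144)):
    print(f"{bits}-bit: floor rises from {floor_db} dBFS to about "
          f"{floor_db + gain_db} dBFS after {gain_db} dB of gain")
# 16-bit: -96 -> -81 dBFS; 24-bit: -144 -> -129 dBFS. Whether -81 dBFS is
# audible still depends on the recording's own (usually much higher) noise floor.
```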
 
But, with many recordings, the actual noise floor of the recording is usually the limiting factor. This includes the microphone preamps, the A/D converters, any mixers used, signal processing software or equipment, etc. And there's also the inherent noise in the playback signal path and ambient environment. Headphones that offer significant isolation are especially relentless at revealing noise as they can lower the ambient background noise by 20+ dB.
 
Increasingly, music is being recorded with a very short and limited analog signal path. The microphones (and/or musical instruments) are often connected to high-quality A/D converters and each is recorded digitally to its own track--typically in 24/96 format. All the mixing and signal processing is then done using software like Sound Forge, Pro Tools, Wavelab, etc. And those tools typically use 32- or 64-bit internal processing to avoid degradation, rounding errors, etc.
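 
A minimal sketch of that workflow, with NumPy standing in for the DAW's internals (the function names are mine): all intermediate processing stays in 64-bit floats, and only the final export quantizes back to 16-bit integers.

```python
import numpy as np

def process_chain(x, gain_db, eq_fn=None):
    y = np.asarray(x, dtype=np.float64)   # work in 64-bit float throughout
    y *= 10 ** (gain_db / 20)             # gain stage
    if eq_fn is not None:
        y = eq_fn(y)                      # any further DSP also stays in float
    return y

def export_16bit(y):
    # single final quantization step (dither omitted here for brevity)
    return np.clip(np.round(y * 32767), -32768, 32767).astype(np.int16)
```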
 
So it's certainly possible to produce recordings with a really low noise floor. And, if you crank in enough gain at certain points along the way, 16 bits might not be enough to prevent audible noise. But will 99% of Apple's 24 bit files have an audible advantage? I doubt it.
 
The interesting thing, to me, will be if Apple uses DRM and/or some proprietary Apple-only format, for their 24 bit tracks. If they do, it will be more difficult to do a blind comparison than simply using something like Foobar and ABX. Otherwise, with an open format, it would be trivial to verify if they have an audible advantage. And, so far, most blind listening tests of 16 vs 24 bit have found no advantage to 24 bit.
 
Mar 8, 2011 at 2:53 PM Post #7 of 20
One thing I'd like to take from this is the viability of a 20-bit audio format. Wouldn't that be right in the middle of both worlds: file size, plus the extra least-significant bits to allow equalization without rounding errors, more dynamic range, and/or a lower noise floor? Since 20 bits' worth of audible quality is the real-world limit of the best systems anyway? Or is that 20 bits of measurable quality after the electrical noise of the components that get it there, assuming a perfect, digitally synthesized 24-bit original waveform? (Which would only strengthen the point :p )
 
Does any file format allow 20-bit samples without padding the final four bits with 0's? To save space, of course. It would, perhaps strangely, stuff six samples into the space of five 24-bit ones (120 bits either way), but the decoder would know to look for this and read it correctly. Can't be hard to implement, especially in a semi-closed format like Apple would do, right?
 
Mar 8, 2011 at 3:04 PM Post #8 of 20
@k00zk0 You are correct, 20 bits is a sweet spot for real-world dynamic range. The problem is pretty much *all* digital devices have to store and manipulate things in increments of at least 8 bits. So once you go past 16 bits, you might as well go to 24 bits. You could, in theory, pack the "leftover" 4-bit portion (called--I'm not joking--a "nibble" in reference to being a portion of a "byte") into a more conventional data "word". And that would reduce uncompressed file sizes by about 16%. But in many ways it's easier to just tell FLAC, for example, to ignore the least important 4 bits when it does compression. Then the file can be expanded back to a 24-bit PCM format with the 4 lowest bits set to zero, as you suggest. That would be a far more compatible solution with the same 16% size savings.
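 
Here's a rough sketch of the "zero the bottom 4 bits" idea (my own illustration). As I understand it, FLAC detects runs of constant low-order zero bits ("wasted bits") on its own, so samples treated this way compress roughly as if they were 20-bit yet still decode as ordinary 24-bit PCM.

```python
import numpy as np

def drop_low_bits(samples, n=4):
    """Zero the n least significant bits of signed 24-bit PCM samples (lossy)."""
    mask = ~((1 << n) - 1)                # n=4 -> ...11110000
    return np.asarray(samples, dtype=np.int32) & mask
```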
 
I haven't looked into what's already been done to avoid storing "useless bits" with lossless compression. But lossy compression (like MP3, AAC, etc.) already takes that into account.
 
EDIT: The link below is an interesting and useful article, but I can't agree with their statement: "if you’re an audiophile with excellent gear and a trained ear, you should notice the bump up in quality". I think blind tests have proven that statement wrong many times. But I also expect the more "mainstream" media to take similar positions, not wanting to offend the 800-pound gorilla in the room (Apple).
 
http://www.tested.com/news/the-real-differences-between-16-bit-and-24-bit-audio/1905/
 
Mar 8, 2011 at 3:11 PM Post #9 of 20
Read up on the DVD-A standard - many bit-depth/sample-rate/channel options are allowed - and MLP can give up to 50% lossless compression.
 
http://en.wikipedia.org/wiki/DVD-Audio
 
 
It's not the standards - it's the industry, the perception of the market, and consumer demand.
 
Mar 9, 2011 at 3:36 AM Post #10 of 20
There's no difference you can hear between the two tracks at normal listening levels, and the added white noise does you no harm. The noise in a 24-bit track is measurably lower, but that would be about the last thing you could hear in this world.
 
Apr 19, 2011 at 6:55 AM Post #12 of 20
So folks who rip their old/new vinyl records and then "resample and dither to 24/96" are mostly just wasting space?
 
Meaning they could have ripped to Redbook 16-bit/44.1 kHz and it would pretty much sound the same?
 
 
 
Apr 19, 2011 at 10:00 AM Post #13 of 20
Well, the lower 8 bits of 24 are clearly "wasted" with an ADC capture of vinyl after RIAA and normalizing - certainly if you've applied noise-shaped dither during the normalizing/truncation. Capturing the RIAA analog out at "24 bits" does allow for headroom in the recording process.
 
similar considerations apply even to analog master tape as the source
 
Redbook CD's 44.1 kHz sample rate is arguably "too low" for outside estimates of human audio perception - some number of preteen girls can hear a little above 20 kHz.
 
But so far DBTs haven't shown that adults can tell a difference with higher sample rates on music signals - so 16/44 is probably "good enough".
 
But 16/96 (or 88) would be the better engineering choice if you want to sell a "hi-rez" format with a lower bit rate - the extra bandwidth allows huge noise-shaping gain for extending the audio-frequency dynamic range, and it covers extreme estimates of human hearing limits. Another instance of poorly understood numbers being seized on for marketing.
 
http://media.meridian-audio.com/datasheets/papers/Coding2.PDF
 
From Adam's paper it looks like he would recommend 16/64 for roughly the same bit rate as 24/44 but with better weighted frequency and dynamic range "overbounding" of human auditory capabilities - I also understand that noise-shaped dither has advanced since his paper, so the audio-frequency dynamic range would be more than adequate.
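 
A quick back-of-envelope check of that bit-rate comparison (uncompressed stereo PCM, my own arithmetic rather than anything from the paper):

```python
# bits per sample * sample rate * 2 channels
for bits, rate in ((16, 64000), (24, 44100), (16, 96000)):
    kbps = bits * rate * 2 / 1000
    print(f"{bits}/{rate // 1000} stereo ~= {kbps:.0f} kbps")
# 16/64 ~= 2048 kbps
# 24/44 ~= 2117 kbps
# 16/96 ~= 3072 kbps
```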
 
Apr 19, 2011 at 4:56 PM Post #15 of 20
Ah, so 24-bit may have its place when it comes to capturing the analog out. After capture and any click/pop removal, converting to 16 bits would then be audibly indistinguishable from the 24-bit version, I take it.
Meaning there really is no audible advantage to 24-bit playback.
 
