24bit vs 16bit, the myth exploded!
Aug 20, 2011 at 7:01 AM Post #752 of 7,175


Quote:
Computers work not with single bits but with 8-bit words. Hence every increase would be an 8 bit increase.



This is technically not a true statement; the size of a byte (your "8-bit word") is hardware-dependent, and there exists no universal standard that pins a byte down as exactly 8 bits long. 8-bit bytes just make a lot of computations and the like really easy (because powers of two) compared to the alternatives. Back in Ye Olden Dayes of the 50s and 60s, machines with 6-bit and 7-bit words existed. IBM made the 8-bit byte hecka popular with its System/360 architecture, and the 8-bit microprocessors that followed sealed the deal.
 
When you want to unambiguously specify an 8-bit word, you say "octet" (because, like I said before, there's no actual standard that specifically defines a word/byte as 8 bits long).
 
-- Griffinhart
 
Aug 21, 2011 at 1:14 AM Post #753 of 7,175


Quote:
Quote:

Good questions! You are correct in your assumption that you can't hear the noise and that 16bit is already overkill, which was the whole point of my original post: to dispel the marketing hype about 24bit. Even in a listening environment equivalent to a top class recording studio, the dither noise of 16bit will be below the noise floor of the playback equipment and/or the noise floor of the listening environment. The only error in your reasoning is the consequence of not using dither. Dither is a mathematical process which randomises all the quantisation error; the result is a perfectly encoded (and reconstructed) waveform plus some noise (that noise being the randomised quantisation error). If we don't apply dither the consequence is potentially correlated (non-randomised) error, which is much more noticeable and unpleasant.
 
The only thing I'd take issue with is that I personally would say that using a grenade to kill a fly is not "equally" but slightly more stupid than using a bat!
Generally a pretty good analogy and understanding of the situation, though. I would not say that 12bit is quite enough, although in theory you only need 10 bits to encode a 60dB dynamic range. The overkill of 16bit ensures that the dither noise is below the noise of your playback environment, so you can forget all about it. Dither noise at 12bit would probably add enough to the noise of a good sound system to be perceivable even at normal listening levels.
 
Hope this helps,
 
G


Thanks for responding Gregorio, but the real question that I was trying to bring up is "sort of" answered by your last statement. Are you stating that people can perceive the difference between a 16bit recording with and without dither? If this is the case, then doesn't that imply that we do in fact perceive up to the full 16 bit dynamic range?
 
On a side note, I wonder if other folks have started to use their 24 bit audio systems for data recording. I've used audio amplifiers and simple DIY DACs for my own measurements in my lab, and now I'm very curious to see what kind of results I get with a 24 bit audio DAC versus a 24 bit data acquisition DAC; perhaps there is additional signal processing in the 24 bit audio DAC not found in a DAQ card.
 
 
Aug 21, 2011 at 1:31 AM Post #754 of 7,175
I heard that a 32 bit DAC will make 16 and 24 bit recordings sound better; does anyone have any deep technical insight on this? I'm talking about the latest (and long postponed/awaited) Fostex headphone amps in Japan; they use an Asahi Kasei DAC chip.
 
 
 
Aug 21, 2011 at 1:38 AM Post #755 of 7,175
Quote:
I heard that a 32 bit DAC will make 16 and 24 bit recordings sound better; does anyone have any deep technical insight on this? I'm talking about the latest (and long postponed/awaited) Fostex headphone amps in Japan; they use an Asahi Kasei DAC chip.


24 bit doesn't make 16 bit sound better. I don't see why 32 bit will make anything betterer.
 
Unless you listen to your music at 192dB. If so, a 32 bit DAC might be for you!
 
Aug 21, 2011 at 2:11 AM Post #757 of 7,175
Quote:
Yes, I see. However, I'm curious: why does the 32 bit chip exist in that case, and why are Fostex using it?


Marketing purposes, probably. ASUS is using a 32 bit chip in their new DAC. There's a DAC somewhere which supports up to a 384kHz sampling rate. Slap a bigger number on it and audiophiles will flip out, even if there are no recordings that would actually use those extra bits.
 
Aug 21, 2011 at 4:41 AM Post #759 of 7,175
 
Quote:
Thanks for responding Gregorio, but the real question that I was trying to bring up is "sort of" answered by your last statement. Are you stating that people can perceive the difference between a 16bit recording with and without dither? If this is the case, then doesn't that imply that we do in fact perceive up to the full 16 bit dynamic range?
 
On a side note, I wonder if other folks have started to use their 24 bit audio systems for data recording. I've used audio amplifiers and simple DIY DACs for my own measurements in my lab, and now I'm very curious to see what kind of results I get with a 24 bit audio DAC versus a 24 bit data acquisition DAC; perhaps there is additional signal processing in the 24 bit audio DAC not found in a DAQ card.


Sorry, I thought I explained this when I said "If we don't apply dither the consequence is potentially correlated (non-randomised) error, which is much more noticeable and unpleasant." In other words, correlated error results in unwanted frequencies or tones in the music but probably not related to the musical content. It's probable that these correlated errors are higher in amplitude (louder) than the dither. Chances are that even with a very high end system correlated errors will be near the limits of hearing at normal listening levels but we use dither just to play it safe. Another fact: When we record music there is usually a great deal more noise in the recording chain than is created by dither, so any dither noise in the finished product should be masked and unrecognisable. This may not be true of correlated errors.
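If anyone wants to see the difference for themselves, here is a rough numpy sketch of the idea (the 8bit target depth, the quiet 1kHz tone and the TPDF dither are just illustrative choices to make the effect obvious, not a claim about any particular converter):

```python
import numpy as np

fs = 44100                                   # sample rate in Hz, 1 second of signal
t = np.arange(fs) / fs
x = 0.01 * np.sin(2 * np.pi * 1000 * t)      # quiet 1 kHz tone, well below full scale

bits = 8                                     # deliberately coarse so the effect is obvious
q = 2.0 ** -(bits - 1)                       # quantisation step for a +/-1 full-scale range

# No dither: the rounding error is correlated with the signal and shows up
# as distortion components sitting on the tone and its harmonics.
undithered = np.round(x / q) * q

# TPDF dither: add +/-1 LSB triangular-pdf noise before rounding, which
# decorrelates the error from the signal, leaving only a benign noise floor.
tpdf = (np.random.uniform(-0.5, 0.5, fs) + np.random.uniform(-0.5, 0.5, fs)) * q
dithered = np.round((x + tpdf) / q) * q

for name, y in (("undithered", undithered), ("dithered", dithered)):
    err = y - x
    spectrum = np.abs(np.fft.rfft(err * np.hanning(fs)))
    # 1 second of data -> bin index equals frequency in Hz
    print(f"{name}: largest error component near {np.argmax(spectrum)} Hz")
```

Run it a few times: the undithered error always lands on the tone and its harmonics, while the dithered error wanders around randomly because it is just broadband noise.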
 
I don't know what the difference is between a 24bit audio DAC and 24 bit data acquisition DAC, what is a 24bit data acquisition DAC?
 
G
 
Aug 21, 2011 at 2:52 PM Post #761 of 7,175
High bitrate is like uncompressed files in the sense that it isn't about what you can hear; it's about the peace of mind bigger filesizes give people with OCD. You could double the filesize again with pointless filler bits and there would still be people who'd buy it "just in case".
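For scale, here are the raw PCM numbers (a quick sketch; uncompressed rates only, ignoring container overhead and any lossless compression):

```python
# Raw PCM bit rate = sample rate x bit depth x channels.
formats = {
    "16/44.1 stereo (CD)": (44_100, 16, 2),
    "24/96 stereo":        (96_000, 24, 2),
    "24/192 stereo":       (192_000, 24, 2),
}
for name, (fs, bits, channels) in formats.items():
    bits_per_second = fs * bits * channels
    print(f"{name}: {bits_per_second / 1000:.0f} kbit/s, "
          f"~{bits_per_second * 60 / 8 / 1e6:.0f} MB per minute")
```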
 
Aug 21, 2011 at 3:37 PM Post #762 of 7,175
Quote:
I heard that a 32 bit DAC will make 16 and 24 bit recordings sound better; does anyone have any deep technical insight on this? I'm talking about the latest (and long postponed/awaited) Fostex headphone amps in Japan; they use an Asahi Kasei DAC chip.
 
 


A couple of things to note. First, the 32bit: I don't know whether they mean 32bit integer or 32bit float; my guess would be the latter. If you didn't know, there is very little resolution difference between 32bit float and 24bit integer. In a 32bit float "word", 8 bits are reserved for the exponent (the position of the floating point) and 1 bit for the sign, leaving a 23bit stored mantissa which holds the actual value; with the implied leading bit that gives 24 bits of precision, hence pretty much the same level of accuracy as a 24bit integer file. Second, whenever we perform a digital process on a 24bit file (EQ, compression, reverb, etc.) we are performing a mathematical calculation which may not produce an integer result; this result is truncated or rounded, producing a small error. Not really a problem down at the -144dB level, but when you are mixing music you may have 100 or more such processes across 50 or more tracks. Summing all these errors together on mix down may cause problems, so these days professional processing is usually done at 32bit float, 48bit integer or sometimes even 64bit float. Any errors are so minute you can sum many hundreds together and hear nothing. So, these very high bit depths are useful professionally when mixing, but for playback 32bit is just as pointless as 24bit.
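If you want to convince yourself of the 24bit-equivalent precision of 32bit float, here is a small sketch (numpy is only used here for convenient access to the IEEE 754 single-precision format):

```python
import numpy as np

f32 = np.finfo(np.float32)
print("stored mantissa bits:", f32.nmant)          # 23 stored + 1 implicit = 24 bits of precision
print("sign + exponent bits:", 32 - f32.nmant)     # 1 sign bit + 8 exponent bits
print("machine epsilon:", f32.eps)                 # ~1.19e-07, i.e. roughly 2**-23

# A value needing a 25th bit of significand cannot be represented exactly:
print(np.float32(2**24) + np.float32(1) == np.float32(2**24))   # True - the +1 is lost

# ...whereas the largest 24bit integer magnitude is exact:
print(int(np.float32(2**24 - 1)) == 2**24 - 1)                  # True
```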

One last point: 32bit integer should in theory provide 192dB of dynamic range. Have a look at the S/N ratio of any so-called 32bit DAC; what do the specs say: 110dB, 120dB, maybe even 130dB? Hang on though, even at 130dB, what's happened to the other 62dB? That represents over 10 bits of data, just gone. What's happened to it? It's just noise! Even the very finest DACs out there cannot reproduce more than about 22 bits' worth of dynamic range, and this is not going to change until someone comes along and changes the laws of physics. We have just about reached the absolute limit of the laws of physics and, incidentally, exceeded what a human being can safely listen to by a factor of roughly 1000 (at 22 bits)! Virtually no recording ever released exceeds a dynamic range of about 60-70dB.
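The arithmetic behind those numbers is easy to check. The usual rule of thumb for an ideal converter is dynamic range ≈ 6.02 × bits + 1.76 dB (the 144dB and 192dB figures people quote are just the 6dB-per-bit shorthand), and you can run it backwards to see how many effective bits a quoted S/N spec really buys you (a rough sketch, nothing more):

```python
def dynamic_range_db(bits):
    """Theoretical S/N of an ideal quantiser for a full-scale sine."""
    return 6.02 * bits + 1.76

def effective_bits(snr_db):
    """The same rule of thumb run backwards: bits implied by a measured S/N."""
    return (snr_db - 1.76) / 6.02

for bits in (16, 24, 32):
    print(f"{bits} bit: ~{dynamic_range_db(bits):.0f} dB theoretical")

for snr_db in (110, 120, 130):
    print(f"{snr_db} dB S/N spec: ~{effective_bits(snr_db):.1f} effective bits")
```

Even a 130dB spec comes out at about 21.3 effective bits, which is where the "22 bits at best" figure comes from.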

So why a DAC with 32 bits? 32bit float processing was the mainstay of the computing world for many years. I'm guessing here, but it may be that 32bit float parts are cheaper, or at least no more expensive, to manufacture than 24bit integer parts, giving a marketing advantage at no or reduced cost. Whether it's cheaper to manufacture or not, 32bit DACs can be sold for higher prices because consumers can easily be convinced that 32bit is better than 24bit!

G
 
Aug 21, 2011 at 5:18 PM Post #763 of 7,175
Quote:
There's a DAC somewhere which supports up to a 384kHz sampling rate. Slap a bigger number on it and audiophiles will flip out, even if there are no recordings that would actually use those extra bits.


It's worse than that, I'm afraid; it's not just a case of recordings being able to use those bits or the extra bandwidth theoretically provided by 384kHz sampling rates. I discussed sampling rates a little much earlier in this thread, but we never really considered the whole thing together: sampling rates and bit depth. Electronic engineering and signal processing these days quite often come up against the laws of physics, and there is an axiom: the larger the bandwidth, the lower the accuracy. In professional analog to digital converters (ADCs) it's not uncommon to oversample by 256 or even 512 times, i.e. a sample frequency of over 22MHz giving a bandwidth of over 11MHz, but at this large bandwidth we are only capable of 6 or so bits of accuracy. It is possible to sample at over a GHz, but the accuracy would obviously be proportionately lower. This isn't a problem which is going to go away. It's not a question of throwing more computing power at it; it's a limitation of the laws of physics. A great deal of ADC design is about trade-offs!
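To put rough numbers on those oversampling ratios (a back-of-the-envelope sketch; the 44.1kHz base rate and the ratios are just the figures mentioned above, and the effective resolution of any real modulator depends entirely on its design):

```python
base_rate = 44_100                            # Hz, assumed output rate for this example
for osr in (64, 256, 512):
    modulator_clock = base_rate * osr         # rate the delta-sigma modulator runs at
    bandwidth = modulator_clock / 2           # Nyquist bandwidth the modulator has to cover
    print(f"{osr}x oversampling: modulator clock {modulator_clock / 1e6:.2f} MHz, "
          f"bandwidth {bandwidth / 1e6:.2f} MHz")
# At these clock rates the modulator itself only resolves a handful of bits;
# the decimation/noise-shaping stages trade that huge excess bandwidth back
# for resolution within the narrow audio band.
```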

So, what is the trade-off, or the optimal bandwidth-to-accuracy ratio, for music (as opposed to other signal processing, say in the telecoms industry)? If we say that 24bits is the accuracy we want (for recording), then the best evidence available at this point in time suggests a sampling rate of roughly 60-70kHz. Unfortunately, the industry has decided not to implement, say, a 65kHz sampling frequency, so the best trade-off available would be 88.2kHz. 88.2kHz gives plenty of space for smooth and error-free filtering (required to satisfy the Nyquist theorem) with only a marginal and relatively inconsequential loss of accuracy. But this is not true of 192kHz, and even less true of 384kHz. There is a price to pay for larger bandwidths, and the larger the bandwidth the larger the price. The price is paid in non-linearity; in other words, 192kHz isn't just about the futility of trying to record and reproduce irrelevant audio frequencies, it actually reduces the quality of those frequencies which you can hear! Yes, you are understanding correctly: 24/192 is actually poorer quality audio than, say, 24/96! For evidence of this please read the white paper published by Dan Lavry, confirmed last year by Benchmark (links on the previous page). The Lavry paper is quite technical, but the Benchmark confirmation is written in a way everyone can understand. Lavry's paper was published in 2004, so it's not as if it's a recent discovery. That hasn't stopped the DAC manufacturers and some in the music industry using marketing to convince you that 24/192 is better than 24/96 and then charging you a premium for it! So lower quality now seems to cost a higher price; please don't get sucked in!
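The "space for smooth filtering" point is easy to quantify: taking a nominal 20kHz passband, the anti-alias/reconstruction filter has to finish its job by the Nyquist frequency, and the room it has to do that in varies enormously with sample rate (a simple sketch):

```python
passband_hz = 20_000                          # nominal top of the audible band
for fs in (44_100, 88_200, 96_000, 192_000):
    nyquist = fs / 2
    transition = nyquist - passband_hz        # room the filter has to roll off in
    print(f"{fs / 1000:g} kHz sampling: Nyquist {nyquist / 1000:.2f} kHz, "
          f"transition band {transition / 1000:.2f} kHz "
          f"({transition / passband_hz:.0%} of the passband width)")
```

Note this only shows why 44.1kHz is so tight and 88.2/96kHz so comfortable; the case against 192kHz and above is the accuracy trade-off described above, not a lack of filter room.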

I want to make it clear that I am not saying you won't hear a difference between a 24/96 and a 24/192 recording; it's possible you might, as there is more noise and distortion present in a 24/192 recording than in a 24/96 recording. Many reviewers do seem to prefer 192kHz sample rates; I can't say whether this is because they are following the advertising revenue, because they believe 192 should sound better and therefore hear an improvement, or because they honestly prefer the sound of more noise/distortion. The bottom line, though, hasn't changed from when I started this thread: 16bit exceeds the resolution of playback systems (and your ears), and a 70kHz sampling rate exceeds the bandwidth required to eliminate all artefacts. Unfortunately a 16/70 format doesn't exist, and neither does 16/88.2 or 16/96, so the best trade-off the industry allows us is 24/88.2 or 24/96. This represents the highest quality audio format currently available.

G
 
Aug 21, 2011 at 8:59 PM Post #765 of 7,175


Quote:
 

Sorry, I thought I explained this when I said "If we don't apply dither the consequence is potentially correlated (non-randomised) error, which is much more noticeable and unpleasant." In other words, correlated error results in unwanted frequencies or tones in the music but probably not related to the musical content. It's probable that these correlated errors are higher in amplitude (louder) than the dither. Chances are that even with a very high end system correlated errors will be near the limits of hearing at normal listening levels but we use dither just to play it safe. Another fact: When we record music there is usually a great deal more noise in the recording chain than is created by dither, so any dither noise in the finished product should be masked and unrecognisable. This may not be true of correlated errors.
 
I don't know what the difference is between a 24bit audio DAC and 24 bit data acquisition DAC, what is a 24bit data acquisition DAC?
 
G

 
Ah OK, thanks for going over the possible perceived errors (artifacts) versus the randomization introduced by dither to remove those artifacts.
 
As far as data acquisition (DAQ) goes, it's basically the opposite of a DAC: instead of digital => analog, it's analog => digital. Most of the gear I work with is limited to 16 bits, so a 24 bit DAQ would be amazing. When processing signals, a DAC would still be very helpful because we do the initial A=>D conversion and then manipulate the data. As you stated, there is a lot of noise in the chain when obtaining the analog signals, so we need to filter, modify, and use other techniques to get rid of parts of the data or change it into something meaningful. We then output the data back to an analog signal, so we still need to do a D=>A conversion. At the moment that D=>A conversion is limited to 16 bits, so going higher might be better. Since these signals are being read by instruments rather than human ears, the 24bit DAC would probably be put to good use.
 

 
 
 
