Head-Fi.org › Forums › Equipment Forums › Sound Science › 24bit vs 16bit, the myth exploded!

24bit vs 16bit, the myth exploded! - Page 2

post #16 of 1923
Quote:
Originally Posted by gregorio
I know that some people are going to say this is all rubbish, and that “I can easily hear the difference between a 16bit commercial recording and a 24bit Hi-Rez version”.
Ya think..?
post #17 of 1923
Very interesting thanks. I use R2R DACs or Vinyl so guess I'm safe...phew!
post #18 of 1923
Quote:
Originally Posted by gregorio View Post
Partly true. These days most processing occurs in the digital domain, using plug-in compressors, etc. Where outboard gear is used, it obviously depends on the studio; a world class studio will of course use the very best compressors/EQ and they will certainly be Class A.
but what about my treasured beatles collection? that era of music - what tech was used back then? what was 'average' and what was 'world class' ?

my collection is more of stuff that existed in the analog days and very little that I listen to comes from 'today'. when I hear the multiple levels of hiss that sometimes accompany the start of a song, I realize that the gear I have now is way better than the stuff they used to CREATE it on, in the first place. my noise floor is below theirs!

so my point is, no amount of worrying if a dac is 16bit or 24bit or even if the analog was copied IN 16bit or 24bit format - the original is still 'lossy as hell' compared to even mid-fi op amp specs of today.

and what exactly does getting 'better resolution' buy you? a hissy, distorted (by today's standards) source will sound just as bad, only in higher resolution, so you can hear *more* of the hiss and noise and distortion (lol).

too much worrying about 'the last mile', imho. your playback gear is almost always better than the combo of what was finally mixed and released. certainly true for older material.
post #19 of 1923
Thread Starter 
Quote:
Originally Posted by mark_h View Post
Very interesting thanks. I use R2R DACs or Vinyl so guess I'm safe...phew!
Using 24bit or even using an upsampling DAC and going from 16bit to 24bit is not likely to be detrimental, it's just that there won't be any benefit either.

Going the other way, 24bit to 16bit could be quite detrimental unless a good quality dither is used as part of the process. Most consumer programs will truncate when going from 24bit to 16bit; in other words, the last 8 bits are just hacked off. Truncation is not good: it introduces quantisation distortion which is correlated to the audio material, and its results are unpredictable. It could mean that you get unwanted tones or harmonics in the mix which may be noticeable. Some consumer programs 'round' the result, which is still not good but better than truncation. The effects of rounding are unlikely to be heard by most people, but the chances are that some audiophiles would notice. Dither is the only real option if you are serious about SQ.
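As a rough sketch of why truncation loses information that dither preserves, here is a toy model in Python (the signal value and dither width are illustrative, not any particular converter's code):

```python
import random

def truncate_24_to_16(s24):
    """Naive 24->16 bit conversion: the bottom 8 bits are simply hacked off."""
    return s24 >> 8

def dither_24_to_16(s24):
    """24->16 bit with TPDF dither: add triangular noise, then requantise."""
    noise = random.randint(0, 255) + random.randint(0, 255) - 255  # ~2 LSB wide, mean 0
    return (s24 + noise + 128) >> 8  # +128 centres the requantisation step

# A constant low-level 24-bit signal sitting exactly half a 16-bit step above zero
quiet = 0x000080

truncated = [truncate_24_to_16(quiet) for _ in range(20000)]
dithered = [dither_24_to_16(quiet) for _ in range(20000)]

print(set(truncated))                 # always {0}: the half-step is lost entirely
print(sum(dithered) / len(dithered))  # hovers around 0.5: preserved as noise
```

The truncated output is stuck at one value, while the dithered output preserves the half-step as a small amount of noise whose average is the original level; that is the dithering effect in miniature.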

G
post #20 of 1923
Really interesting! **bookmarked**
post #21 of 1923
Thread Starter 
Quote:
Originally Posted by linuxworks View Post
but what about my treasured beatles collection? that era of music - what tech was used back then? what was 'average' and what was 'world class' ?

my collection is more of stuff that existed in the analog days and very little that I listen to comes from 'today'. when I hear the multiple levels of hiss that sometimes accompany the start of a song, I realize that the gear I have now is way better than the stuff they used to CREATE it on, in the first place. my noise floor is below theirs!

so my point is, no amount of worrying if a dac is 16bit or 24bit or even if the analog was copied IN 16bit or 24bit format - the original is still 'lossy as hell' compared to even mid-fi op amp specs of today.

and what exactly does getting 'better resolution' buy you? a hissy, distorted (by today's standards) source will sound just as bad, only in higher resolution, so you can hear *more* of the hiss and noise and distortion (lol).

too much worrying about 'the last mile', imho. your playback gear is almost always better than the combo of what was finally mixed and released. certainly true for older material.
True to an extent. A fair bit of the Beatles' stuff was done at Abbey Road Studios, which was and still is one of the best studios in the world. Abbey Road is a multi-million pound facility which uses the very highest quality audio equipment on the market and is often used as a test bed by the very high end manufacturers, Neve for example. One of the Beatles albums (I think it was Sgt. Pepper) is about the first example of multitrack recording being used for a commercial product. Having said all this, was the SQ possible in Abbey Road Studios in the '60s as good as modern replay systems? The answer in my opinion is probably not that far off a good system today! However, unless you can get hold of the original master (fat chance!) the copies available are likely to sound quite weak compared to modern recordings.

If transferring a vinyl to digital, there may be some merit in using 24bit, for the same reason as using 24bit when recording in a studio. 24bit gives you tons of headroom as usually you try to set peak levels in 24bit to -18dB (-22dB is also a good peak level). This gives you plenty of space for any transient 'overs' which you might get using 16bit. You can always dither back down to 16bit when you're done or leave it at 24bit if storage space is not an issue.
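The headroom arithmetic can be sketched quickly, using the standard rule of thumb of roughly 6.02dB of dynamic range per bit (the -18dBFS peak level is the calibration mentioned above):

```python
def dynamic_range_db(bits):
    """Theoretical quantisation dynamic range: roughly 6.02 dB per bit."""
    return 6.02 * bits

peak_dbfs = -18  # typical calibration: peaks sit 18 dB below digital full scale

# Dynamic range remaining below an -18 dBFS peak, per bit depth
for bits in (16, 24):
    print(bits, round(dynamic_range_db(bits) + peak_dbfs, 1))
```

Even after giving away 18dB of headroom, a 24bit recording still has well over 120dB below the peaks, far more than the roughly 78dB left at 16bit, which is why the headroom costs nothing at 24bit.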

G
post #22 of 1923
Quote:
Originally Posted by gregorio View Post
It seems to me that there is a lot of misunderstanding regarding what bit depth is and how it works in digital audio.
This is a great post. Nice to see these concepts explained in a way that I can understand easily.

Quote:
Originally Posted by gregorio View Post
I know that some people are going to say this is all rubbish, and that “I can easily hear the difference between a 16bit commercial recording and a 24bit Hi-Rez version”.
It is my understanding that some (many?) recordings that are released in both 16bit and 24bit are not the same. The 24bit release may have been produced and/or mastered differently. Also, there may be differences in the digital to analogue conversion process within the end-user's gear to consider. So it may well be true that people hear differences, but it will not be due to bit depth alone.
post #23 of 1923
Thread Starter 
Quote:
Originally Posted by lamikeith View Post
It is my understanding that some (many?) recordings that are released in both 16bit and 24bit are not the same. The 24bit release may have been produced and/or mastered differently. Also, there may be differences in the digital to analogue conversion process within the end-user's gear to consider. So it may well be true that people hear differences, but it will not be due to bit depth alone.
It is possible under certain circumstances that a difference could be heard. This is because 24bit releases also often have a higher sample frequency (say 96kFs/s). With certain equipment and certain music it may be possible to notice a marginal difference with the higher sample frequency. The potential difference is mainly in the ADC: where a relatively cheap ADC has been used, it possibly has a poor implementation of the anti-alias filter. Upping the sample frequency in this case allows a much smoother anti-alias filter with fewer artefacts, and it may be possible, with very good hearing and a good system, to hear the effects of a better implemented filter at 96kFs/s. However, if the music was recorded with a high end professional ADC, the filters at 44.1kFs/s are generally much better implemented, and then telling 44.1k from 96k is much more difficult (read impossible) regardless of equipment and hearing ability.
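The filter argument comes down to simple arithmetic: the anti-alias filter has to stay essentially flat up to around 20kHz yet be fully attenuated by the Nyquist frequency (half the sample rate). A quick sketch of the transition band available at each rate (20kHz audible limit assumed):

```python
audible_limit = 20_000  # Hz, nominal top of human hearing

for sample_rate in (44_100, 96_000):
    nyquist = sample_rate // 2
    transition_band = nyquist - audible_limit  # room for the filter to roll off
    print(sample_rate, transition_band)
```

At 44.1kFs/s the filter has barely 2kHz in which to do all its work, which is why cheap implementations can audibly misbehave; at 96kFs/s it has a leisurely 28kHz to play with.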

AFAIK, there has never been a DBT between 16bit and 24bit (under controlled conditions) using the same sample frequency, where anyone has been able to tell the difference with any more accuracy than would be expected from chance.

Also, what you said about the 24bit and 16bit releases is entirely true. There is no way of knowing the processes that each mix has gone through, or even who has done it and with what degree of care, e.g. the studio, the mastering engineer, the record label, or even an assistant in one of these businesses who is effectively stealing!

G
post #24 of 1923
Quote:
Originally Posted by mark_h View Post
Very interesting thanks. I use R2R DACs or Vinyl so guess I'm safe...phew!
You're never safe from the recording engineer, especially in today's music.
post #25 of 1923
My bad!....There goes the SACD myth down the drain....We just saved a whole bunch of money...
post #26 of 1923
Quote:
Originally Posted by manaox2 View Post
You're never safe from the recording engineer, especially in today's music.
NO one is safe from the recording engineer. his chief weapon is surprise...surprise and fear...fear and surprise.... his two weapons are fear and surprise...and ruthless efficiency.... his *three* weapons are fear, surprise, and ruthless efficiency...and an almost fanatical devotion to removing all impurities from the signal chain.

post #27 of 1923
I don't know anything about anything, so I am not criticizing the initial argument, but I don't quite follow it either.

I'm used to working with volts. So say you have a 16-bit ADC with a reference voltage and your rail-to-rail voltage range is 0.5 to 4.5V. Then each of the 2^16 possible codes represents a step of 4V/2^16 (in this case 6.1e-5 volts). If you use the same rail-to-rail range with a 24-bit ADC, each step represents 4/2^24 volts or 2.38e-7 volts (with the same sample clock).

It seems like the original post is stating that each bit can only represent a fixed amount (in my case, volts), and what increases by going from a 16-bit to a 24-bit ADC is the rail-to-rail measurable voltage. While it is true that this is possible, it is also possible to measure the same voltage swing with increased resolution. I believe I am misunderstanding the original post.
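A quick check of the step-size arithmetic in Python (same 0.5V to 4.5V example as above):

```python
v_span = 4.5 - 0.5  # rail-to-rail input range from the example, in volts

for bits in (16, 24):
    step = v_span / 2**bits  # smallest voltage difference one code can represent
    print(bits, step)
```

Over the same 4V span, the 24-bit converter's steps are 256 times finer than the 16-bit converter's.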

Thanks.
post #28 of 1923
Thread Starter 
Quote:
Originally Posted by linuxworks View Post
NO one is safe from the recording engineer. his chief weapon is surprise...surprise and fear...fear and surprise.... his two weapons are fear and surprise...and ruthless efficiency.... his *three* weapons are fear, surprise, and ruthless efficiency...and an almost fanatical devotion to removing all impurities from the signal chain.
And now for something completely different

Quote:
Originally Posted by geremy View Post
I don't know anything about anything, so I am not criticizing the initial argument, but I don't quite follow it either.

I'm used to working with volts. So say you have a 16-bit ADC with a reference voltage and your rail-to-rail voltage range is 0.5 to 4.5V. Then each of the 2^16 possible codes represents a step of 4V/2^16 (in this case 6.1e-5 volts). If you use the same rail-to-rail range with a 24-bit ADC, each step represents 4/2^24 volts or 2.38e-7 volts (with the same sample clock).

It seems like the original post is stating that each bit can only represent a fixed amount (in my case, volts), and what increases by going from a 16-bit to a 24-bit ADC is the rail-to-rail measurable voltage. While it is true that this is possible, it is also possible to measure the same voltage swing with increased resolution. I believe I am misunderstanding the original post.

Thanks.
I'm not sure I fully understand your question. The actual window of voltage variations represented by the 16bit (or 24bit) window is not fixed. For example, many ADCs are calibrated so that 0dBu, which equals 0.775V (line level), registers as -18dBFS (dBFS meaning decibels relative to digital Full Scale, where 0dBFS is the maximum value of all bits set to 1). However, some systems are calibrated to 0dBu = -14dBFS and in film it's often set to -20 or -22dBFS.

In theory, the 144dB dynamic range of 24bit allows us to quantise down to a mere fraction of a micro-volt! However, at this level we are talking about the noise level generated by a single resistor. So in practice, many of the LSBs (Least Significant Bits) when recording in 24bit contain just system noise. In other words, the theoretical noise floor of a 24bit digital system is far below anything actually achievable in the real world of noisy electronics. That is why 24bit DACs are not able to actually resolve 24 bits of dynamic range. If you read an earlier post, I mention that some ADCs self-dither; this is because the random noise generated by even the finest grade electronics is easily enough to cause the dithering effect.
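To put rough numbers on "the noise level generated by a single resistor" (the full-scale voltage, resistance, temperature and bandwidth here are assumed round figures, not measurements of any real converter):

```python
import math

# Size of one 24-bit step for an assumed 2 V full-scale input
lsb_v = 2.0 / 2**24

# Johnson-Nyquist thermal noise of a single 1 kohm resistor at room
# temperature (300 K) over a 20 kHz audio bandwidth: sqrt(4*k*T*R*B)
k_boltzmann = 1.38e-23
noise_v = math.sqrt(4 * k_boltzmann * 300 * 1_000 * 20_000)

print(lsb_v)    # about 0.12 microvolts per step
print(noise_v)  # about 0.58 microvolts: one resistor already swamps the LSB
```

On these assumptions a single 1k resistor generates several times more noise than one 24bit step, which is exactly why the bottom bits of a 24bit recording contain only system noise.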

I'm not sure this has answered your question?

For those not used to thinking in decibels scroll down this page for some examples: http://www.jimprice.com/prosound/db.htm

A very rough way to think of it; if the maximum level of a digital system were set at the sound of a truck going by from 10ft away then 144dB quieter (in 24bit) would be roughly the level of noise produced from two hydrogen atoms colliding!!
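For the same intuition in code: amplitude ratios convert to and from decibels with a factor of 20 (the standard formula, nothing specific to any product):

```python
def db_to_amplitude_ratio(db):
    """Amplitude ratio corresponding to a level difference in decibels."""
    return 10 ** (db / 20)

print(db_to_amplitude_ratio(6))     # ~2.0: +6 dB is roughly double the amplitude
print(db_to_amplitude_ratio(-144))  # ~6.3e-08: the 24-bit floor relative to full scale
```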

G
post #29 of 1923
Yes, you answered my question. I was confused about the voltage range of the input. I didn't mean to say that the ADC fixes any range of voltage, more along the lines of "the signal we are trying to measure is between X and Y volts". Thanks.
post #30 of 1923
Thread Starter 
Quote:
Originally Posted by fjf View Post
My bad!....There goes the SACD myth down the drain....We just saved a whole bunch of money
Mmmm, maybe. The technology used on SACD is closely related to the digital audio found on CDs and DVDs, but it is different. PCM (Pulse Code Modulation) is what is used in 16bit and 24bit digital audio and is what I have discussed in this thread. DSD (Direct Stream Digital) is the technology used on SACD. Basically this technology uses a bit depth of 1 bit but a very high sample rate in the megahertz range (2.8224MFs/s to be exact). In this sense DSD is very similar to PCM during the initial stages of A to D conversion.
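A back-of-the-envelope comparison of the raw per-channel data rates (nominal figures for CD and for DSD at 64 times the CD sample rate):

```python
# Raw bits per second, per channel, for each format (nominal figures)
cd_rate = 16 * 44_100      # 16-bit PCM at 44.1 kHz
dsd_rate = 1 * 2_822_400   # 1-bit DSD at 2.8224 MHz (64 x 44.1 kHz)

print(cd_rate)             # 705600
print(dsd_rate)            # 2822400
print(dsd_rate / cd_rate)  # 4.0: DSD carries four times the raw bits of CD
```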

There are both theoretical advantages and disadvantages of DSD compared with CD, and the professional audio world is largely undecided about which is better. In practice though, SACD usually sounds better than CD. This probably isn't due to DSD being better but to other reasons:

1. DSD technology is relatively expensive so only the higher class studios are capable of creating DSD based recordings.

2. SACD players are relatively expensive and generally only bought by those consumers really serious about sound quality.

3. Bearing in mind 1 & 2 above, the quality of recording, production and mastering tends to be much higher on SACD releases, because the recording industry realises that SACD consumers generally have a higher expectation of the sound quality.

I don't know how long the SACD format is going to survive but at this point in time SACD probably represents the highest audio quality currently available to the consumer.

Sorry if I've just cost you a "whole bunch of money"!

G