
24bit vs 16bit, the myth exploded! - Page 127

post #1891 of 1923

So, based on the last several responses it seems that:

 

  1. Listening to a 44.1/24 audio file is equivalent to listening to a 44.1/16 audio file, assuming both were properly created from the same master. This holds regardless of the quality of the DAC, amp, and headphones/speakers.
  2. However, if one wants to change the audio waveform (e.g., DC repair, volume correction, equalization, yada yada), then doing this stuff on an up-converted 24-bit version of a 16-bit file sounds like a good idea in order to minimize cumulative rounding error from the successive operations.
  3. Leaving the file at 24 bits or converting back down to 16 bits is strictly a function of storage overhead and whether or not any additional editing will need to be performed on the file in the future.

 

I guess all of that makes sense!


Edited by SharpEars - 8/22/14 at 1:28pm
post #1892 of 1923
Quote:
Originally Posted by SharpEars View Post
 

OK, I am going to stick my face into the arena while the kicks are flying and say that I have a reason to prefer 24-bit over 16-bit. Here is why:

 

My workflow consists of taking a 44.1/16 CD and ripping it into FLAC files. I then take the FLAC file for each track and:

 

  1. Upsample to 24-bits
  2. Remove DC Offset, if present
  3. Reduce the volume until there are no clipped samples present
  4. Possibly do heuristic-based automated clip repair on those samples that were clipped
  5. (optional) Normalize the volume to a reasonable level in line with the other tracks on the CD

 

I perform functions 2-5 in the 24-bit domain, and I leave the result as a 24-bit (FLAC) audio file that is ready for listening. I do this with all of my music and stick with 24-bit, because I believe the steps above make more sense performed in the 24-bit domain. I leave the resultant file as a 24-bit file so that I can perform additional steps in the future that I deem will improve the track's sound quality (e.g., equalization, excitement, whatever).

 

Is my 24-bit workflow nonsense? Could all of this have been done at the 16-bit level with no possible way to tell the difference? Am I stupid in leaving the result at 24-bits and not downsampling it to 16, given that storage costs are negligible (e.g., 4 TB hard drive is $130)?

 

Let the thread experts speak!

I have a nit to pick here.

Quote:

Upsample to 24-bits

This makes zero sense.

 

Upsampling involves changing the sampling rate of the audio. All you are doing by converting 16-bit to 24-bit audio is zero-padding eight 0s onto the end of each 16-bit PCM word.
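
To make that concrete, here's a minimal sketch in Python/NumPy (my illustration; the sample values are arbitrary):

```python
import numpy as np

# "Converting" 16-bit PCM to 24-bit just appends eight zero LSBs,
# i.e. a left shift by 8. No new information is created.
samples_16 = np.array([1000, -2000, 32767, -32768], dtype=np.int16)

samples_24 = samples_16.astype(np.int32) << 8  # same audio in a 24-bit container
print(samples_24)  # [  256000  -512000  8388352 -8388608]
```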

 

My next question is this: why on earth do any of your CDs have a DC offset? Let's say you have a 60-second sample with 20 Hz content at 0 dB full scale. Your "DC offset" can never be greater than that of a single unpaired half-cycle of this 20 Hz content averaged over the length of the sample. In this example, the DC offset of the sample is 1/(60*20*pi) of full scale.

 

At 16-bit depth, that DC offset works out to about 8.7 counts. That's roughly 9 out of the 65536 possible values of 16-bit audio!
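
A quick back-of-the-envelope check of those numbers, as a sketch:

```python
import math

# Worst-case DC offset from one unpaired half-cycle of a full-scale
# 20 Hz tone, averaged over a 60 s clip (the scenario described above).
half_cycle_mean = 2 / math.pi      # mean of one half-cycle of a unit sine
half_cycle_len = 1 / (2 * 20)      # duration of a 20 Hz half-cycle: 0.025 s
clip_len = 60                      # seconds

offset = half_cycle_mean * half_cycle_len / clip_len
print(offset)            # ~2.65e-4 of full scale, i.e. 1/(60*20*pi)
print(offset * 32768)    # ~8.7 counts at 16-bit
```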

 

When you subtract the DC offset, do you ever have audio clips with a bigger DC offset than ~9 counts out of 65536? If so, what CDs are these and who mastered them? Because I would really like to avoid buying anything from them in the future!

 

I guess my point is this: you aren't worried about the "DC offset" during the first 1/40th of a second at the start of a 20 Hz signal, so why would you care otherwise? Do you really have tracks with enough extreme low-frequency content at significant amplitude that there is appreciable "DC offset" over timescales of a second or more? I guess I don't believe that any commercially released recordings contain an actual DC offset, nor do I believe that any competent audio chain passes true DC content.

 

Again, 

Quote:
 ... leaving the result at 24-bits and not downsampling it to 16

You aren't "downsampling" here. You are downcoverting 24 bit to 16 bit. In this case, you would want to consider dithering between the steps.

 

Cheers


Edited by ab initio - 8/22/14 at 2:17pm
post #1893 of 1923
Quote:
Originally Posted by SharpEars View Post
 

How does 24-bit better 16-bit when it comes to DC offset neutralization, as you mentioned in your reply?

 

DC offset is liable to be way down close to the noise floor of the recording. Clipping is up at the top, where 16 and 24 are identical.

post #1894 of 1923
Quote:
Originally Posted by ab initio View Post
 

I have a nit to pick here.

This makes zero sense.

 

Upsampling involves changing the sampling rate of the audio. All you are doing by converting 16-bit to 24-bit audio is zero-padding eight 0s onto the end of each 16-bit PCM word.

 

My next question is this: why on earth do any of your CDs have a DC offset? Let's say you have a 60-second sample with 20 Hz content at 0 dB full scale. Your "DC offset" can never be greater than that of a single unpaired half-cycle of this 20 Hz content averaged over the length of the sample. In this example, the DC offset of the sample is 1/(60*20*pi) of full scale.

 

At 16-bit depth, that DC offset works out to about 8.7 counts. That's roughly 9 out of the 65536 possible values of 16-bit audio!

 

When you subtract the DC offset, do you ever have audio clips with a bigger DC offset than ~9 counts out of 65536? If so, what CDs are these and who mastered them? Because I would really like to avoid buying anything from them in the future!

 

I guess my point is this: you aren't worried about the "DC offset" during the first 1/40th of a second at the start of a 20 Hz signal, so why would you care otherwise? Do you really have tracks with enough extreme low-frequency content at significant amplitude that there is appreciable "DC offset" over timescales of a second or more? I guess I don't believe that any commercially released recordings contain an actual DC offset, nor do I believe that any competent audio chain passes true DC content.

 

Again, 

You aren't "downsampling" here. You are downcoverting 24 bit to 16 bit. In this case, you would want to consider dithering between the steps.

 

Cheers

 

Before I start, I must apologize for using the term resample; I clearly meant up/down-convert the bit depth. There is no change in sample rate being discussed here.

 

First of all, on the subject of commercially released CDs, you would be surprised at how many releases have a measurable DC offset on their tracks - entire tracks, not portions thereof. I don't know who is mastering this stuff or encoding the content onto CDs, but they are leaving a DC offset on the music. I will not call out such CDs by name, but suffice it to say that pop and electronic music are rife with DC offsets. I will admit it is not high, perhaps within 0.25%, sometimes within 0.10%. However, as part of my workflow I remove it. Perhaps an offset this low makes no difference and no audible clicks will be heard at the start/end of the track, but I zero it out in any case.
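
For what it's worth, measuring that offset is trivial; a rough sketch, assuming the pysoundfile package and a hypothetical file name:

```python
import soundfile as sf  # assumes pysoundfile is installed; the track is made up

# A track's DC offset is just the per-channel mean, here expressed as a
# percentage of full scale (the 0.25% / 0.10% figures mentioned above).
data, rate = sf.read("some_track.flac")  # floats in [-1.0, 1.0]
print(100 * data.mean(axis=0))           # e.g. [ 0.08 -0.11 ] percent per channel
```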

 

Next, you should be far less surprised that many recordings these days have peak sample amplitudes that result in clipping. I choose to lower the volume on these tracks until there are no clipped samples. Certainly, no one will dispute that clipped samples (perhaps many in a row) may be audible, depending on how a particular DAC handles them during conversion to analog. Sometimes I lower the overall track volume further, especially if (perhaps due to compression) its overall loudness is obscene. It should come as no surprise that many popular recordings try to maximize perceived volume as much as possible to sound loud, even at the expense of allowing clipping to happen. I try to undo some of this damage, within reason and opportunity. Sometimes, as I mentioned previously, I try to bring tracks to a common, reasonable average perceived volume.
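
A rough sketch of the detection and gain side of that workflow (my own illustration, not my actual tool; the full-scale threshold and run length are assumptions). Note that the gain change only creates headroom; restoring the flattened peaks is what the separate heuristic repair step is for:

```python
import numpy as np

def count_clipped_runs(x: np.ndarray, run_len: int = 3) -> int:
    """Count runs of >= run_len consecutive samples at digital full scale,
    a crude flag for flat-topped clipping (run_len=3 is an assumption)."""
    at_full_scale = np.abs(x) >= 0.9999
    runs = current = 0
    for flag in at_full_scale:
        current = current + 1 if flag else 0
        if current == run_len:  # count each run once, when it reaches run_len
            runs += 1
    return runs

def lower_gain(x: np.ndarray, db: float) -> np.ndarray:
    """Pull the whole track down by `db` decibels to create headroom."""
    return x * 10 ** (-db / 20)
```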

 

Quote:
Originally Posted by bigshot View Post
 

 

DC offset is liable to be way down close to the noise floor of the recording. Clipping is up at the top, where 16 and 24 are identical.

 

 

I understand that clipping is up at the top, but all samples are being shifted by a fractional dB amount. There must be some rounding error in doing this. I have always thought that this error can be minimized by working at a higher bit depth. Perhaps I am wrong.
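
To put a number on that error, here's a quick sketch with an arbitrary fractional-dB gain (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(1)
gain = 10 ** (-0.37 / 20)                     # an arbitrary fractional-dB cut

x16 = rng.integers(-20000, 20000, 100_000)    # pretend 16-bit samples
err16 = np.round(x16 * gain) - x16 * gain     # rounding error, in 16-bit LSBs

x24 = x16 << 8                                # same audio, 24-bit container
err24 = (np.round(x24 * gain) - x24 * gain) / 256  # in 16-bit-LSB units

print(err16.std(), err24.std())  # ~0.29 LSB vs ~0.0011 LSB: both tiny,
                                 # and the 24-bit error is ~256x smaller
```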

 

Now on to the real question: is there any value in performing my entire process in 24-bit, which involves multiple successive operations on the data, all of which introduce potential errors due to integer arithmetic and rounding? I don't know; that is primarily my question. Does it matter, or can multiple changes, including possible equalization, be done in 16-bit with no perceivable disadvantage versus the 24-bit alternative? I am open to reasonable discussion on the topic that shows me the error of my ways.

 

I am an audiophile with very high-quality solid-state equipment and headphones, but I am not willing to bury my head in the sand when a reasonable argument shows me that some of the steps I take are of dubious value. I would just like to understand the logic and thought behind them.


Edited by SharpEars - 8/22/14 at 8:14pm
post #1895 of 1923
Quote:
Originally Posted by SharpEars View Post
 

Before I start, I must apologize for using the term resample; I clearly meant up/down-convert the bit depth.

Nope, too late. BURNNNNNN!!!! ^_^ We all understood what you meant; it wasn't much of a problem. Still, ab initio was right to point it out and was less lazy than I was.

 

 

 

Quote:
Originally Posted by SharpEars View Post
 

...can multiple changes, including possible equalization, be done in 16-bit with no perceivable disadvantage versus the 24-bit alternative?

Only you can tell us: do one track with your whole process in 16-bit, then in 24-bit, and then ABX the hell out of those.

post #1896 of 1923
Quote:
Originally Posted by castleofargh View Post
 

Nope, too late. BURNNNNNN!!!! ^_^ We all understood what you meant; it wasn't much of a problem. Still, ab initio was right to point it out and was less lazy than I was.

 

 

 

Only you can tell us: do one track with your whole process in 16-bit, then in 24-bit, and then ABX the hell out of those.

 

LOL! In all seriousness, I hate ABX tests, and the other problem is that just because I fail at ABXing one track, there's no guarantee I won't pass on another. I'd have to ABX every single track I process to see whether the process affected it, and I'd have to do it at the right volume level (i.e., 75 dB) to avoid fatigue, which is lower than my listening level. I was hoping for a technical explanation of why it doesn't matter, one that convinces me I am wasting my time up-converting.

post #1897 of 1923

Can somebody explain in basic terms what DC offset is? I've seen the option in software but never knew what it was.

post #1898 of 1923
Quote:
Originally Posted by kraken2109 View Post
 

Can somebody explain in basic terms what DC offset is? I've seen the option in software but never knew what it was.

It means the wave's center is not fixed at the 0 level. I would think it's a universal term in electronics and audio, since it has to do with the vertical shift of a wave, just like what is taught in trig: 1 + sin(x) is just the sine wave shifted up by 1 from 0, and the 1 is the offset. I found this online; the waveform on top has an offset, and below it's corrected.
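
Here's the same analogy as a tiny sketch (illustrative values):

```python
import numpy as np

# A 440 Hz sine riding on a DC offset of 1, corrected by subtracting
# the mean (which is what a "remove DC offset" option does).
t = np.linspace(0, 1, 44100, endpoint=False)
wave = 1.0 + np.sin(2 * np.pi * 440 * t)   # centered at 1 instead of 0

corrected = wave - wave.mean()
print(round(wave.mean(), 3), round(corrected.mean(), 3))  # 1.0 0.0
```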

 

post #1899 of 1923
Quote:
Originally Posted by SilverEars View Post
 

It means the wave's center is not fixed at the 0 level. I would think it's a universal term in electronics and audio, since it has to do with the vertical shift of a wave, just like what is taught in trig: 1 + sin(x) is just the sine wave shifted up by 1 from 0, and the 1 is the offset. I found this online; the waveform on top has an offset, and below it's corrected.

 


Ah that makes sense, thanks

post #1900 of 1923

OK, to ruffle some feathers, I want to ask a provocative question: why do we need 16-bit audio; won't 8-bit audio make do? After all, if most recordings utilize somewhere around 50 dB of dynamic range, and given the noisy environments that even the quietest listening rooms are, the 48 dB of 8-bit audio should practically suffice. So why do we need 16-bit at all? Isn't it overkill?

 

For example, take the 8-bit vs 16-bit test at: http://www.audiocheck.net/blindtests_16vs8bit.php

 

By the way, even with my high-end equipment I failed the test :mad: (4/10).

 

I also learned that I can only hear noise down to -54 dBFS using this test: http://www.audiocheck.net/blindtests_dynamic.php?dyna=54

 

Equipment used for the test: PC USB (MME) -> OPPO HA-1 (DAC/headphone amp) -> balanced headphone out to Sennheiser HD650 headphones via balanced cable

 

I set the volume level pretty high for the test.

 

I think I just ended my long career of being an audiophile...


Edited by SharpEars - 8/23/14 at 9:55am
post #1901 of 1923
This statement sums digital audio up for me.

"Remember, it has been 30 years since the introduction of the CD technology and we have yet to see credible evidence to demonstrate that well-digitized 16/44 isn't transparent beyond anecdotal opinions (just like there's no good evidence to demonstrate superiority of 24-bits or >44kHz sampling rates; assuming we're using a decent DAC playback system)."

Taken from this article.
http://archimago.blogspot.ca/2014/08/musings-pure-perfect-sound-forever.html?m=1
post #1902 of 1923
Quote:
Originally Posted by SharpEars View Post
 

I understand that clipping is up at the top, but all samples are being shifted by a fractional dB amount. There must be some rounding error in doing this. I have always thought that this error can be minimized by working at a higher bit depth.

 

The difference between 16-bit and 24-bit is way down near the depth of the noise floor. Up at higher volumes, the sound is identical. Working at 24-bit wouldn't give any benefit for fixing clipping, but it would involve more data processing, increasing the chance of error. The reason studios use 24-bit is that, in a mix, they often have to bring the volume of a particular element up, and when they do that, they drag the noise floor up along with it. It's better for them to keep the sound clean beyond the range of hearing. But for playing back music at normal listening volume, it doesn't make any difference at all.
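
That noise-floor effect is easy to demonstrate; a quick sketch with a made-up -40 dBFS element:

```python
import numpy as np

# Quantize a quiet (-40 dBFS) element, then boost it +40 dB in the mix;
# the quantization floor comes up along with it.
t = np.linspace(0, 1, 44100, endpoint=False)
quiet = 0.01 * np.sin(2 * np.pi * 1000 * t)    # -40 dBFS sine

def quantize(x, bits):
    scale = 2 ** (bits - 1)
    return np.round(x * scale) / scale

for bits in (16, 24):
    boosted = quantize(quiet, bits) * 100.0            # +40 dB of makeup gain
    residual = boosted - np.sin(2 * np.pi * 1000 * t)  # error vs the ideal
    print(bits, "bit:", round(20 * np.log10(residual.std()), 1), "dBFS")
    # roughly -61 dBFS of residual at 16-bit vs roughly -109 dBFS at 24-bit
```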

post #1903 of 1923
Quote:
Originally Posted by SharpEars View Post
 

OK, to ruffle some feathers, I want to ask a provocative question: why do we need 16-bit audio; won't 8-bit audio make do? After all, if most recordings utilize somewhere around 50 dB of dynamic range, and given the noisy environments that even the quietest listening rooms are, the 48 dB of 8-bit audio should practically suffice. So why do we need 16-bit at all? Isn't it overkill?

 

For example, take the 8-bit vs 16-bit test at: http://www.audiocheck.net/blindtests_16vs8bit.php

 

By the way, even with my high-end equipment I failed the test :mad: (4/10).

 

I also learned that I can only hear noise down to -54 dBFS using this test: http://www.audiocheck.net/blindtests_dynamic.php?dyna=54

 

Equipment used for the test: PC USB (MME) -> OPPO HA-1 (DAC/headphone amp) -> balanced headphone out to Sennheiser HD650 headphones via balanced cable

 

I set the volume level pretty high for the test.

 

I think I just ended my long career of being an audiophile...


8-bit would mean a quantization noise floor around -48 dB (8 * 6 dB), which, as you yourself have tested, is in the audible range (maybe with dither it can sound better). But otherwise, for a lot of modern songs, 8-bit would be more than enough ^_^.
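
For reference, the rule of thumb behind that arithmetic, as a quick sketch (the extra ~1.76 dB is the exact full-scale-sine term the "6 dB per bit" shorthand drops):

```python
# Each bit buys ~6.02 dB of dynamic range, plus ~1.76 dB for a
# full-scale sine measured against the quantization noise floor.
for bits in (8, 10, 16, 24):
    print(bits, "bit:", round(6.02 * bits + 1.76, 1), "dB")
# 8 bit: 49.9 dB | 10 bit: 62.0 dB | 16 bit: 98.1 dB | 24 bit: 146.2 dB
```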

You should always test at your actual listening levels; testing louder is often a way to hear less.

post #1904 of 1923

8 bit isn't too far from what an LP record would be.

post #1905 of 1923
Quote:
Originally Posted by castleofargh View Post
 


8-bit would mean a quantization noise floor around -48 dB (8 * 6 dB), which, as you yourself have tested, is in the audible range (maybe with dither it can sound better). But otherwise, for a lot of modern songs, 8-bit would be more than enough ^_^.

You should always test at your actual listening levels; testing louder is often a way to hear less.

 

Quote:
Originally Posted by bigshot View Post
 

8 bit isn't too far from what an LP record would be.

 

So, basically, 10-bit audio with 60 dB of dynamic range and proper dither would be indistinguishable from 16-bit in a normal listening environment, no matter how good one's audio gear is. I cannot hear noise at -60 dB with the volume cranked up higher than even my loudest listening levels. Even noise at -54 dB is, for all practical purposes, inaudible in a normal listening environment once it is noise-shaped out of the sensitive portions of the spectrum.


Edited by SharpEars - 8/23/14 at 12:31pm