
24bit vs 16bit, the myth exploded! - Page 101

post #1501 of 1510
Quote:
Originally Posted by bbmiller View Post
 

Irrespective of whether there is any technical reason for recordings with a 24-bit depth to be better than ones with a 16-bit depth, do you think they are better because the producers/audio engineers know that a 24-bit release is aimed at people who care about sound, and therefore put more care into the mixing and engineering?


Could be. Remasterings are much better at least half the time. Then again, I always did the best job I could in my career regardless of whom it was for. I was grateful for those times I was allowed to go all out with the resources I needed, and I understand that even if that is your desire, your employer may not allow it. Just to be clear, my career was not in the music industry.

post #1502 of 1510

I thought digital recording had historically always been done at 24-bit, and that after the mixing, mastering, and everything else are done, it is dithered down to 16-bit for CD release.

post #1503 of 1510

If I remember correctly, 24-bit started around 2000, maybe a bit before. It was pretty much all 16-bit before that.

post #1504 of 1510
Quote:
Originally Posted by bbmiller View Post
 

Irrespective of whether there is any technical reason for recordings with a 24-bit depth to be better than ones with a 16-bit depth, do you think they are better because the producers/audio engineers know that a 24-bit release is aimed at people who care about sound, and therefore put more care into the mixing and engineering?

 

24-bit allows for more flexibility in the mix. If you want to boost or compress something as you mix, you can do it without the noise floor coming up with it.

post #1505 of 1510

Lots of information on 16-bit vs 24-bit, and on whether a remastered album recorded on analog tape prior to the early 80's can be considered hi-res (hint: it can't), can be found here: http://www.realhd-audio.com/

post #1506 of 1510
Quote:
Originally Posted by bigshot View Post
 

24-bit allows for more flexibility in the mix. If you want to boost or compress something as you mix, you can do it without the noise floor coming up with it.

It also allows performances to be recorded farther below 0 dBFS, giving more headroom (while still maintaining an inaudible noise floor) in case the performer hits substantially higher levels in the performance than they did in the sound check.
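
To put rough numbers on both points, here is a minimal Python sketch using the standard 6.02N + 1.76 dB rule of thumb for quantization SNR; the 20 dB headroom figure is just an assumed example, not any industry standard:

# Rule-of-thumb SNR of an N-bit quantizer for a full-scale sine wave.
def dynamic_range_db(bits):
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB dynamic range")
# 16-bit: ~98 dB, 24-bit: ~146 dB

# Now track 20 dB below full scale to leave room for surprise peaks:
headroom_db = 20  # assumed safety margin, purely illustrative
for bits in (16, 24):
    remaining = dynamic_range_db(bits) - headroom_db
    print(f"{bits}-bit with {headroom_db} dB headroom: ~{remaining:.0f} dB above the noise floor")
# 16-bit: ~78 dB left, 24-bit: ~126 dB left

Even after giving away 20 dB of headroom, 24-bit still has more range left than 16-bit has in total.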

post #1507 of 1510

Ah, but there is a (small) problem. Good analog disk mastering equipment effectively has more than 96 dB of dynamic range.

 

First example of hardware: the Neumann disk mastering console of the 1980's had an S/N of about 114 dB. The sonic difference in a fade-out was demonstrated by Bob Ludwig at a NY AES chapter meeting in the late 80's. He played a fade done on the Neumann console and then one from the Sony PCM-1630. The digital playback got quieter (and grainier) and then POOF!! All gone. Suddenly. The analog fade continued until it was awash in the noise floor.

 

Next is the problem of how S/N is measured vs. the way we hear. Noise floor measurements are done either 'A'- or 'C'-weighted (or 'B' too), depending on whether a flat or LF-rolled-off response is desirable for the purpose. Either way, you are summing the noise across all 10 octaves. Music, especially quiet music, has a much narrower bandwidth. In analog, you can get a situation where the broadband noise power is higher than that of the instrument(s) playing, yet their output still sits above the noise floor within their narrower bandwidth.

 

This is what Mr. Ludwig so elegantly demonstrated.
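
To illustrate that narrowband point with numbers (a quick Python sketch; the -80 dBFS broadband noise, -90 dBFS tone, and 100 Hz bandwidth are assumptions for illustration, not measurements of any real console):

import math

total_bandwidth_hz = 24_000   # full audio band the broadband noise figure covers
noise_total_dbfs = -80        # broadband noise power
tone_dbfs = -90               # quiet instrument, 10 dB *below* the broadband noise
tone_band_hz = 100            # narrow band the instrument actually occupies

# For white-ish noise, power scales with bandwidth, so only the slice of
# noise inside the tone's own 100 Hz band competes with it:
in_band_noise_dbfs = noise_total_dbfs + 10 * math.log10(tone_band_hz / total_bandwidth_hz)

print(f"broadband noise:        {noise_total_dbfs} dBFS")
print(f"noise in a 100 Hz band: {in_band_noise_dbfs:.1f} dBFS")   # ~ -103.8 dBFS
print(f"tone margin:            {tone_dbfs - in_band_noise_dbfs:.1f} dB above the in-band noise")

The tone measures 10 dB below the broadband noise figure, yet sits almost 14 dB above the noise it actually competes with.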

post #1508 of 1510
Quote:
Originally Posted by Captain Duck View Post
 

Ah, but there is a (small) problem. Good analog disk mastering equipment effectively has more than 96 dB of dynamic range.

 

First example of hardware: the Neumann disk mastering console of the 1980's had an S/N of about 114 dB. The sonic difference in a fade-out was demonstrated by Bob Ludwig at a NY AES chapter meeting in the late 80's. He played a fade done on the Neumann console and then one from the Sony PCM-1630. The digital playback got quieter (and grainier) and then POOF!! All gone. Suddenly. The analog fade continued until it was awash in the noise floor.

 

Next is the problem of how S/N is measured vs. the way we hear. Noise floor measurements are done either 'A'- or 'C'-weighted (or 'B' too), depending on whether a flat or LF-rolled-off response is desirable for the purpose. Either way, you are summing the noise across all 10 octaves. Music, especially quiet music, has a much narrower bandwidth. In analog, you can get a situation where the broadband noise power is higher than that of the instrument(s) playing, yet their output still sits above the noise floor within their narrower bandwidth.

 

This is what Mr. Ludwig so elegantly demonstrated.

16-bit digital audio (interestingly enough) also has an S/N in narrow frequency bands of about 110-120 dB (or even more) with proper dither. You can also encode a waveform (with dithering) whose amplitude is less than half of your least significant bit (LSB). Because of this, with dither, you would not get that sudden dropout in the fade. Instead, you'd get the exact effect you described with the analog system: it would continue to get quieter until it is lost in the noise.
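
Here is a minimal NumPy sketch of that sub-LSB encoding (the 1 kHz tone and TPDF dither are illustrative choices; a 1/4-LSB sine sits around -102 dBFS in 16-bit):

import numpy as np

fs = 44_100
t = np.arange(2 * fs) / fs                       # 2 seconds
lsb = 1 / 32768                                  # 16-bit LSB, full scale = 1.0
x = 0.25 * lsb * np.sin(2 * np.pi * 1000 * t)    # 1/4-LSB sine, ~ -102 dBFS

rng = np.random.default_rng(0)
# TPDF dither: the sum of two uniform variables of +/- 0.5 LSB each
tpdf = (rng.uniform(-0.5, 0.5, t.size) + rng.uniform(-0.5, 0.5, t.size)) * lsb

undithered = np.round(x / lsb) * lsb             # rounds to all zeros: the tone is gone
dithered = np.round((x + tpdf) / lsb) * lsb      # noisy, but the tone is preserved

# Correlate against the original 1 kHz sine to estimate the surviving amplitude:
ref = np.sin(2 * np.pi * 1000 * t)
for name, sig in (("undithered", undithered), ("dithered", dithered)):
    amp = 2 * np.dot(sig, ref) / ref.size
    print(f"{name}: recovered amplitude {amp:.3e} (target {0.25 * lsb:.3e})")

The undithered version comes back as exactly zero; the dithered one recovers the 1/4-LSB tone, buried in well-behaved noise.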

 

In addition, if you can hear the "dropout point" at all on a non-dithered 16-bit digital system, a signal with an amplitude of 1 LSB would have to be playing back at around 20-30 dB SPL in any normal room. That means a full-amplitude signal (0 dBFS) would be upwards of 110-120 dB SPL, which is painfully, eardrum-damagingly loud. If you have the volume turned up that loud on your system, chances are you won't hear the dropout at the end of the fade anyway, since your ears will still be ringing from the volume of the peaks. Sure, you can intentionally record something 40 dB down from full scale, and then you do have a problem, but the fix is simple: record closer to full scale.
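
Spelling out that playback-level arithmetic (assuming, purely for illustration, that 1 LSB is just audible at 25 dB SPL):

import math

lsb_dbfs = 20 * math.log10(1 / 32768)   # 1 LSB in 16-bit: ~ -90.3 dBFS
audible_lsb_spl = 25                    # assumption: 1 LSB just audible at 25 dB SPL
full_scale_spl = audible_lsb_spl - lsb_dbfs

print(f"1 LSB = {lsb_dbfs:.1f} dBFS")
print(f"0 dBFS peaks would hit ~{full_scale_spl:.0f} dB SPL")   # ~115 dB SPL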

 

I know this video has been posted a lot here recently, but it is worth watching, especially the part about dither starting around 12 or 13 minutes in. Notice how the noise floor is around -120 dB when he enables dither with a 16-bit signal, and he can encode a 1/4-LSB-amplitude sine wave perfectly fine (-103 dBFS or so): http://xiph.org/video/vid2.shtml


Edited by cjl - 4/17/14 at 8:36am
post #1509 of 1510

Hadn't looked at it that way (narrow band, encoding with dither). I doubt that the Sony PCM-1630 did anything more than straight encoding. Good video, too.

post #1510 of 1510

First I have to say that I am a noob in everything related to audio, at least high-end audio. I have never understood the reasoning behind 24-bit / 96 kHz for audio listening.

I read at some point that CD supports only 16-bit / 44.1 kHz, and that it covers roughly 20 Hz to 22 kHz, which brackets the limits of human hearing. As far as I know, 99% of all music in the world is CD quality.

I think there are physical limitations to the existing recordings, though. Would resampling the masters do anything to them sound-wise?

 

I have always felt that the faults in sound quality are not because of the digital (CD-quality) limitations, but because of the mixing and mastering of the actual music, and also the hardware used to record it. I have heard a lot of music in my life; the majority of it is poorly recorded and poorly mixed, and I don't think increasing the sample rates will fix that. At least that's how I feel.

I have some great recordings, and they sound great even as MP3s, but poorly recorded, mixed, or mastered ones sound bad no matter what you do to them. I don't think any amount of pixie dust is going to make them better. This is a huge thread, and I managed to read only the first 22 pages, barely understanding what people were saying. It's hard to follow when you don't have a deeper understanding of the subject.

 

Let's say that, in theory, 24-bit / 96 kHz would have better sound quality. OK, it becomes the "standard" and we start to demand music in that form. All the old music would gain nothing from this, as it has those physical limitations. Only newly produced and recorded music would "gain" in sound quality. 99% of our music would still be physically limited to "16-bit / 44.1 kHz" quality - I mean the physical limitations of the hardware and of the existing recordings we currently have. Vinyl is below CD in quality, at least according to some.

It's possible to remaster the older recordings into 24-bit / 96 kHz form. But they were recorded with limited gear (frequency response limited to about 20 kHz)?

So what does the consumer gain from the higher sample rates?

 

The downsides are far greater: we would have to buy all our music on DVD all over again, with no evidence that it sounds any better (at least nothing anyone could prove empirically). Also, the file sizes would grow quite a lot. I am not sure by how much, but it's possible a single song would be near 500 MB or even higher?
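
A quick back-of-the-envelope check in Python (assuming uncompressed stereo PCM and a 5-minute track) puts it closer to 173 MB, a lot more than CD but well short of 500 MB:

# Uncompressed stereo PCM size for a 5-minute track.
def track_mb(rate_hz, bits, seconds=300, channels=2):
    return rate_hz * (bits // 8) * channels * seconds / 1_000_000

print(f"24/96 stereo: ~{track_mb(96_000, 24):.0f} MB")   # ~173 MB
print(f"16/44.1 (CD): ~{track_mb(44_100, 16):.0f} MB")   # ~53 MB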

 

Since the release of the CD and digital music, piracy and downloading songs from the internet have become a real bane for the music business.

When I write this, I get the feeling that the reason behind pushing 24-bit / 96 kHz onto the market is to try to kill piracy. At least that's what popped into my mind.

 

Sorry if this came out as just some noob ramble that makes no sense.
