
24bit vs 16bit, the myth exploded! - Page 85

post #1261 of 1845

For speech you can get away with a lot less. See http://en.wikipedia.org/wiki/G.711 for example.
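A rough illustration of why 8 bits go further for speech than straight linear quantization would suggest: G.711 compands the signal logarithmically before quantizing. A minimal sketch of the mu-law idea, assuming Python with NumPy (the sample values are arbitrary):

    import numpy as np

    MU = 255.0  # companding constant used by G.711's mu-law variant

    def mu_law_encode(x):
        """Logarithmically compress a signal in [-1, 1] before 8-bit quantization."""
        return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

    def mu_law_decode(y):
        """Invert the companding curve."""
        return np.sign(y) * ((1.0 + MU) ** np.abs(y) - 1.0) / MU

    x = np.array([0.001, 0.01, 0.1, 0.5, 1.0])     # quiet to loud sample values
    q = np.round(mu_law_encode(x) * 127) / 127     # quantize the companded value to 8 bits
    print(np.abs(mu_law_decode(q) - x) / x)        # relative error stays modest even for tiny inputs

Because the step size grows with level, quiet speech keeps its detail; a linear 8-bit quantizer would bury that 0.001 sample in quantization error.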

post #1262 of 1845

Mshenay -- see www.research.philips.com/technologies/projects/cd/technology.html. A big debate occurred in the UK when Philips used 14-bit converters to make 16-bit CDs; they used oversampling, at first 4x, later much more. Many "golden ears" argued about the sound quality in relation to the later true 16-bit Red Book recordings. But that's subjectivity, banned here, isn't it?

post #1263 of 1845

Modern converters have only a few bits of actual hardware resolution, and use even more oversampling and noise shaping to achieve ~20 bits or even better resolution in the audio band. Of course, "golden ears" still argue in favor of older-style R2R converters and no oversampling. :)
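For a back-of-the-envelope feel for how a few hardware bits plus oversampling and noise shaping reach ~20 bits in the audio band, the textbook formula for an ideal noise-shaped converter can be evaluated directly. The 4-bit core, 3rd-order, 64x figures below are purely illustrative, not any particular converter's spec:

    import math

    def noise_shaped_sqnr(bits, order, osr):
        """Ideal in-band peak SQNR (dB) for an N-th order noise-shaped converter
        with the given quantizer bit depth and oversampling ratio (textbook formula)."""
        return (6.02 * bits + 1.76
                + 10 * math.log10((2 * order + 1) / math.pi ** (2 * order))
                + (20 * order + 10) * math.log10(osr))

    sqnr = noise_shaped_sqnr(bits=4, order=3, osr=64)   # hypothetical modulator
    print(sqnr, (sqnr - 1.76) / 6.02)                   # ~131 dB, i.e. roughly 21 effective bits in-band

Real parts fall short of the ideal numbers, but the mechanism is the same.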

post #1264 of 1845
Quote:
Originally Posted by Mshenay View Post
 

move on to more LEARNING. I actually wouldn't mind knowing the history behind the advances in Digital Audio... was 16-bit the first commonly used format? Or did they start WAY up HIGH, in like 32-bit?

Original digital audio recording experiments were 8 bits, mono, to prove the concept, then moved up from there.  The greater the bit depth, the more challenging it was from a hardware standpoint.  It was known quite early that 16 bits would be the ultimate goal.

 

This guy was a pioneer:

http://en.wikipedia.org/wiki/Thomas_Stockham

 

The article doesn't mention it, but he did 8-bit digital recordings in the 1960s which were actually pretty astounding. A/D conversion at high bit depths and adequate sampling rates was a big challenge, as was getting that data on and off a storage medium.  Data tapes were one of the original storage media.  HDDs were not even around, and when they were, they initially had very low packing density.  Dr. Stockham's Soundstream system was 16 bits at 50 kHz.

 

The reason CDs ended up at 44.1 kHz is that the original recording system used a slightly modified 3/4" U-Matic video recorder with audio data formatted to fit within a video frame.  44.1 kHz at 16 bits in stereo was a rate that matched the horizontal and vertical scanning frequencies of monochrome NTSC video at 30 frames per second, 60 fields per second and a horizontal scan frequency of 15,750 Hz.  Consumer video-based digital audio recorders initially sampled at 44.056 kHz so they could utilize unmodified consumer video cassette recorders operating at the NTSC color frame rate of 29.97 frames per second and a horizontal scan of 15,734.26 Hz. That data could be placed on a Red Book CD with negligible upward pitch shift.  Sony made the PCM-F1 and a related family of digital audio recording adapters for video recorders.  They were all capable of 16 bits or 14 bits at user discretion.  Other manufacturers made similar devices, but stopped with 14 bits.
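The arithmetic behind those figures, assuming the commonly cited PCM-adaptor layout of three samples per channel per video line and 245 usable lines per field:

    # Sample rates implied by storing 3 samples per channel on each usable video line
    samples_per_line = 3
    usable_lines_per_field = 245       # commonly cited figure for these PCM adaptors

    mono_field_rate = 60.0             # monochrome NTSC: 60 fields/s, 15,750 Hz line rate
    color_field_rate = 60.0 / 1.001    # NTSC color: 59.94 fields/s, 15,734.27 Hz line rate

    print(samples_per_line * usable_lines_per_field * mono_field_rate)    # 44100.0
    print(samples_per_line * usable_lines_per_field * color_field_rate)   # ~44055.9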

post #1265 of 1845
Quote:
Originally Posted by ferday View Post
 

analog has limits which cannot be overcome

 

digital has limits which can theoretically be overcome

 

One way to think of it is that we live in a real world that is analog by nature. Digital (or, more accurately, quantized discrete-time arithmetic) is a mathematical abstraction of the real world. For most real-world signals, digital is only an approximation. Analogue recording is just one possible means of data storage; digital recording is another. The question that should be asked (but hardly ever is) is: when is the digital capture of a signal a good approximation of the real signal? At some point the resolution of the sampling process greatly exceeds the resolution of the useful information contained in the signal.

post #1266 of 1845
Quote:
Originally Posted by Digitalchkn View Post
 

 

One way to think of it is that we live in a real world that is analog by nature. Digital (or, more accurately, quantized discrete-time arithmetic) is a mathematical abstraction of the real world. For most real-world signals, digital is only an approximation.

Agreed.  But analog is also an abstraction of the real world, and is also only an approximation.

Quote:
Originally Posted by Digitalchkn View Post
 

The question that should be asked (but hardly ever is) is: when is the digital capture of a signal a good approximation of the real signal? At some point the resolution of the sampling process greatly exceeds the resolution of the useful information contained in the signal.

Oh, that has been asked, fundamentally, during the development of digital recording.  And we have our answer; we've been using it for some time.  The test is whether someone can discern a signal passed through A/D > D/A from an original (not recorded) live version.  It's been done many times. My own experience was mentioned here, and that was in the mid 1980s.

 

The question isn't asked much today, even informally, probably because the test is hard to do and, at this stage, the point is pretty much proven.

post #1267 of 1845
Quote:
Originally Posted by Digitalchkn View Post
 

 

One way to think of it is that we live in a real world that is analog by nature. Digital (or, more accurately, quantized discrete-time arithmetic) is a mathematical abstraction of the real world. For most real-world signals, digital is only an approximation. Analogue recording is just one possible means of data storage; digital recording is another. The question that should be asked (but hardly ever is) is: when is the digital capture of a signal a good approximation of the real signal? At some point the resolution of the sampling process greatly exceeds the resolution of the useful information contained in the signal.

 

It also depends on whether you are digitizing an already-captured analog signal (say, tape) or taking an all-digital capture. Even though the SNR of a decent digital capture exceeds the SNR of almost all analog tape, it is still imperfect, so it still adds a very, very small amount of noise. In the context of a medium with an SNR of, say, 75 dB, the noise added by a digitization with an SNR of, say, 96 dB will be utterly insignificant.
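Summing the two noise powers shows just how insignificant (a quick check in Python; the 75 dB and 96 dB figures are simply the ones above):

    import math

    def combine_noise_floors(*levels_db):
        """Combine uncorrelated noise sources given as dB levels (power addition)."""
        total_power = sum(10 ** (level / 10) for level in levels_db)
        return 10 * math.log10(total_power)

    # tape hiss at -75 dB plus a 96 dB-SNR digitization stage
    print(combine_noise_floors(-75, -96))   # ~ -74.97 dB: the digitization raises the floor by ~0.03 dB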

 

Taking a live source and a direct digital capture thereof will give you something which several members have opined is completely indistinguishable from the source. However, that is not the same as capturing all the information. You added the proviso "useful information", which we can arbitrarily place where we like in terms of dynamic range and frequency response.

post #1268 of 1845
Quote:
Originally Posted by jaddie View Post
 

Agreed.  But analog is also an abstraction of the real world, and is also only an approximation.

Oh, that has been asked, fundamentally, during the development of digital recording.  And we have our answer; we've been using it for some time.  The test is whether someone can discern a signal passed through A/D > D/A from an original (not recorded) live version.  It's been done many times. My own experience was mentioned here, and that was in the mid 1980s.

 

The question isn't asked much today, even informally, probably because the test is hard to do and, at this stage, the point is pretty much proven.

 

Sure. Analog recording is an approximation of the real analog signal captured on a magnetic medium such as tape, or mechanical medium such as vinyl.

By the same token, you can argue that the signal at the far end of a cable is an approximation of the signal at the output of, say, the source.

 

I also agree this test is hard to do, because you are introducing additional signal-processing elements in your A/D -> D/A chain and as such are not isolating the actual quantization processes. In your experiments you have to consider the source signal.  How noisy was the source? What are its spectral characteristics?

 

At some point, increased precision follows a law of diminishing returns.

post #1269 of 1845
Quote:
Originally Posted by nick_charles View Post

It also depends on whether you are digitizing an already-captured analog signal (say, tape) or taking an all-digital capture. Even though the SNR of a decent digital capture exceeds the SNR of almost all analog tape, it is still imperfect, so it still adds a very, very small amount of noise. In the context of a medium with an SNR of, say, 75 dB, the noise added by a digitization with an SNR of, say, 96 dB will be utterly insignificant.

Taking a live source and a direct digital capture thereof will give you something which several members have opined is completely indistinguishable from the source. However, that is not the same as capturing all the information. You added the proviso "useful information", which we can arbitrarily place where we like in terms of dynamic range and frequency response.

Digital capture of an analog tape is already a copy of a copy of the input signal.

The concept of "useful information" is not that vague. It is well understood in the context of Shannon's theorem as no loss of information. Anything added to the signal beyond that is considered "noise".

In practice, noise is unavoidable, correct? There is noise (you could argue distortion falls into this category) added at every processing stage, be it in the analog or digital domain. The amount of noise added determines whether information is lost or added in the process of recovering the original "desired" signal.

post #1270 of 1845
Quote:
Originally Posted by Digitalchkn View Post

Quote:
Originally Posted by nick_charles View Post

It also depends on whether you are digitizing an already-captured analog signal (say, tape) or taking an all-digital capture. Even though the SNR of a decent digital capture exceeds the SNR of almost all analog tape, it is still imperfect, so it still adds a very, very small amount of noise. In the context of a medium with an SNR of, say, 75 dB, the noise added by a digitization with an SNR of, say, 96 dB will be utterly insignificant.

Taking a live source and a direct digital capture thereof will give you something which several members have opined is completely indistinguishable from the source. However, that is not the same as capturing all the information. You added the proviso "useful information", which we can arbitrarily place where we like in terms of dynamic range and frequency response.

Digital capture of an analog tape is already a copy of a copy of the input signal.

The concept of "useful information" is not that vague. It is well understood in the context of Shannon's theorem as no loss of information. Anything added to the signal beyond that is considered "noise".

In practice, noise is unavoidable, correct? There is noise (you could argue distortion falls into this category) added at every processing stage, be it in the analog or digital domain. The amount of noise added determines whether information is lost or added in the process of recovering the original "desired" signal.

Yes, if the noise added is significantly below the noise floor of the original, it can be considered negligible. Not all processes in the digital domain add noise.  All processes in the analog domain, with the exception of complementary noise reduction systems, add noise.

post #1271 of 1845
Quote:
Originally Posted by jaddie View Post
 

Not all processes in the digital domain add noise

 

 

 

In practice, sure they do. Even a simple gain operation requires a requantization step back to integer arithmetic (the one exception being literally doubling the amplitude by a simple bit shift). Needless to say, more complex math operations such as filtering and rate conversion all add computational noise. It's simply that the amount of noise added is typically substantially lower than doing the same operation through physical/analog circuitry.
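A small sketch of that requantization penalty, assuming Python with NumPy and TPDF dither at the rounding step; the signal, gain and scale are arbitrary, the point is only that the added noise lands near the 16-bit floor rather than at zero:

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 48000
    t = np.arange(fs) / fs
    x = np.round(0.25 * np.sin(2 * np.pi * 1000 * t) * 32767)   # a 16-bit-style integer source signal

    gain = 0.8                                  # any gain that is not a power of two forces requantization
    y_exact = x * gain                          # full-precision result of the gain step
    dither = rng.uniform(-0.5, 0.5, fs) + rng.uniform(-0.5, 0.5, fs)   # TPDF dither, +/- 1 LSB
    y_int = np.round(y_exact + dither)          # back to integer sample values

    err = y_int - y_exact                       # everything the requantization added
    print(20 * np.log10(np.sqrt(np.mean(err ** 2)) / 32767))    # roughly -96 dB relative to full scale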

post #1272 of 1845
Quote:
Originally Posted by Digitalchkn View Post
 

In practice, sure they do. Even a simple gain operation requires a requantization step back to integer arithmetic (the one exception being literally doubling the amplitude by a simple bit shift). Needless to say, more complex math operations such as filtering and rate conversion all add computational noise. It's simply that the amount of noise added is typically substantially lower than doing the same operation through physical/analog circuitry.

 

Sure, but "in practice", it's mostly below the noise floor of the original analog source and the A/D.  We're doing that theoretical vs. practical thing now.

 

It's why a lot of original material is recorded at 24 bits, run through post at 24 bits, then reduced to 16 bits for release. 

post #1273 of 1845
Quote:
Originally Posted by jaddie View Post
 

 

Sure, but "in practice", it's mostly below the noise floor of the original analog source and the A/D.  We're doing that theoretical vs. practical thing now.

 

It's why a lot of original material is recorded at 24 bits, run through post at 24 bits, then reduced to 16 bits for release.

Or >24 bits, or floating-point math.  We are just about always adding noise with each further processing step we take. Just because it is below the noise floor of the source doesn't mean it's negligible; noise sources add. It's just that in the analog domain the contribution of noise sources is typically greater.

post #1274 of 1845
Quote:
Originally Posted by Digitalchkn View Post
 

Or >24 bits, or floating-point math.  We are just about always adding noise with each further processing step we take. Just because it is below the noise floor of the source doesn't mean it's negligible; noise sources add. It's just that in the analog domain the contribution of noise sources is typically greater.

 

Right.  So let's add three bits of dither to 24-bit data containing a digitized analog noise floor of -90 dBFS.  And what would be the result?  While we're at it, let's measure the result with that pesky A-weighting filter.

 
And, if you don't mind being in the real world, let's look at what we really get out of a real-world 24-bit ADC, which is more like 20-bit performance at best.  We're adding that processing noise to a noise floor already sitting at -120 dBFS, which is 24 dB above the theoretical floor.
 
What would the end result be?
post #1275 of 1845

One dithered quantization to 24 bits adds roughly -144 dBFS of A-weighted noise. That would need to be repeated more than 200 times to add up to -120 dB. Internal processing can easily have better than 24-bit PCM resolution; in fact, 64-bit floats are common in software.
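The arithmetic behind the "more than 200 times" figure, treating each dithered requantization as an uncorrelated noise source:

    stage_noise_db = -144.0   # one dithered requantization to 24 bits (A-weighted, as above)
    target_db = -120.0        # noise floor of a good real-world 24-bit ADC

    # uncorrelated noise adds as power, so count the stages needed to reach the target on their own
    stages = 10 ** (target_db / 10) / 10 ** (stage_noise_db / 10)
    print(stages)             # ~251 requantization steps before processing noise alone reaches -120 dB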
