xnor
For speech you can get away with a lot less. See http://en.wikipedia.org/wiki/G.711 for example.
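As a rough illustration of how far speech can be pushed down: G.711 carries telephone audio at 8 kHz with only 8 bits per sample, using logarithmic (mu-law or A-law) companding to spend those bits where the ear needs them. Here's a sketch of the textbook continuous mu-law curve in Python/NumPy; note the real G.711 codec uses a segmented piecewise-linear approximation of this curve, so treat this as an illustration, not the standard's exact tables:

```python
import numpy as np

MU = 255.0  # mu-law parameter used by G.711 in North America and Japan

def mulaw_encode(x):
    """Compress samples in [-1, 1] to 8-bit code values (0..255)."""
    y = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)
    return np.round((y + 1) / 2 * 255).astype(np.uint8)

def mulaw_decode(code):
    """Expand 8-bit codes back to samples in [-1, 1]."""
    y = code.astype(np.float64) / 255 * 2 - 1
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

# A quiet 200 Hz tone at the 8 kHz telephony rate
t = np.arange(8000) / 8000
x = 0.05 * np.sin(2 * np.pi * 200 * t)
err = mulaw_decode(mulaw_encode(x)) - x
print("peak round-trip error:", np.max(np.abs(err)))
```

The companding is the whole trick: quiet speech gets fine quantization steps and loud peaks get coarse ones, which is why 8 bits is enough for intelligible telephony.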
Moving on to more LEARNING: I actually wouldn't mind knowing the history behind the advances in Digital Audio... was 16-bit the first commonly used format? Or did they start WAY up HIGH, at something like 32-bit?
analog has limits which cannot be overcome
digital has limits which can theoretically be overcome
One way to think of it is that we live in a real world that is analog by nature. Digital (or, more accurately, quantized discrete-time arithmetic) is a mathematical abstraction of the real world. For most real-world signals, digital is only an approximation. Analogue recording is just one possible means of data storage; digital recording is another.
The question that should be asked (but hardly ever is) is: when is the digital capture of a signal a good approximation of the real signal? At some point the resolution of the sampling process greatly exceeds the resolution of the useful information contained in the signal.
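To put numbers on "the resolution of the sampling process": the usual rule of thumb for an ideal quantizer driven by a full-scale sine is SNR ≈ 6.02·N + 1.76 dB for N bits. A quick sketch of that textbook formula (purely illustrative):

```python
def ideal_quantization_snr_db(bits: int) -> float:
    # Ideal N-bit quantizer driven by a full-scale sine:
    # SNR ~= 6.02 * N + 1.76 dB
    return 6.02 * bits + 1.76

for n in (8, 16, 24):
    print(f"{n:2d} bits: {ideal_quantization_snr_db(n):6.2f} dB")
# 8 bits:  49.92 dB / 16 bits:  98.08 dB / 24 bits: 146.24 dB
```

So 16-bit already sits near 98 dB, which is past the point where most real signal chains have anything left but noise.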
Agreed. But analog is also an abstraction of the real world, and is also only an approximation.
Oh, that's been asked, fundamentally, during the development of digital recording. And we have our answer; we've been using it for some time. The test is whether someone can discern a signal passed through A/D > D/A from the original (not recorded) live version. It's been done many times. My own experience was mentioned here, and that was in the mid 1980s.
The question isn't asked much today, even informally, probably because the test is hard to do and, at this stage, the point is pretty much proven.
It also depends on whether you are digitizing an already-captured analog signal (say, tape) or taking an all-digital capture. Even though the SNR of a decent digital capture exceeds the SNR of almost all analog tape, it is still imperfect, so it still adds a very, very small amount of noise. In the context of a medium with an SNR of, say, 75 dB, the noise added by a digitization with an SNR of, say, 96 dB will be utterly insignificant.
Taking a live source and a direct digital capture thereof will give you something that several members have opined is completely indistinguishable from the source. However, that is not the same as capturing all the information. You added the proviso "useful information", which we can arbitrarily place where we like in terms of dynamic range and frequency response.
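For what it's worth, the 75 dB / 96 dB point can be put in numbers: uncorrelated noise sources add on a power basis. A small sketch, reusing the (assumed) figures from the post above:

```python
import math

def combined_noise_floor_db(*levels_db):
    """Sum uncorrelated noise sources given as levels in dB
    (power addition, i.e. root-sum-square in amplitude)."""
    total_power = sum(10 ** (lvl / 10) for lvl in levels_db)
    return 10 * math.log10(total_power)

tape_floor = -75.0  # assumed tape noise floor, dB
adc_floor = -96.0   # assumed digitizer noise floor, dB
print(combined_noise_floor_db(tape_floor, adc_floor))
# ~ -74.97 dB: the digitization worsens the floor by only ~0.03 dB
```

A 0.03 dB change in the noise floor is far below anything audible, which is the "utterly insignificant" point made above.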
Digital capture of an analog tape is already a copy of a copy of the input signal.
The concept of "useful information" is not that vague. It is well understood in the context of Shannon's theorem as no loss of information. Anything added to the signal beyond that is considered "noise".
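To make that framing concrete: the Shannon–Hartley theorem bounds the information rate a band-limited, noisy channel can carry, C = B·log2(1 + S/N). A back-of-envelope sketch with roughly CD-like figures (20 kHz bandwidth, 96 dB SNR; both numbers are just illustrative assumptions):

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    # Shannon-Hartley: C = B * log2(1 + S/N), SNR converted from dB
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

# Roughly CD-like figures: ~20 kHz audio bandwidth, ~96 dB SNR
print(shannon_capacity_bps(20_000, 96))  # ~6.4e5 bits/s per channel
```

Any real-world signal with a finite bandwidth and noise floor contains only a finite information rate, so a capture format that exceeds that rate loses nothing useful.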
In practice, noise is unavoidable, correct? There is noise (you could argue distortion falls into this category) added at every processing stage, be it in the analog or digital domain. The amount of noise added determines whether information is lost, or spurious information added, in the process of recovering the original "desired" signal.
Not all processes in the digital domain add noise
In practice, sure they do. Even a simple gain operation requires a requantization step back to integer arithmetic (the one exception being literally doubling the amplitude by a simple bit shift). Needless to say, more complex math operations such as filtering and rate conversion all add computational noise. It's simply that the amount of noise added is typically substantially lower than for the same operation done in physical/analog circuitry.
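A small sketch of both points, the requantization noise from a non-trivial gain and the exact bit-shift doubling (hypothetical 16-bit samples, NumPy for convenience):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(-2**15, 2**15, size=100_000, dtype=np.int32)  # 16-bit samples

# A non-trivial gain needs requantization: the ideal result is fractional,
# so rounding back to integers adds a (tiny) quantization error.
gain = 0.8
y = np.round(x * gain).astype(np.int32)
err = y - x * gain
print("max requantization error:", np.max(np.abs(err)))  # up to 0.5 LSB

# The exception mentioned above: doubling via a left shift is exact.
doubled = x << 1
print("shift doubling exact:", np.array_equal(doubled, x * 2))  # True
```

The rounding error is bounded by half a step of the output format, which is why each digital stage adds noise, but far less of it than a comparable analog stage.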
Sure, but "in practice", it's mostly below the noise floor of the original analog original and A/D. We're doing that theoretical vs practical thing now.
It's why a lot of original material is recorded at 24 bits, run through post-production at 24 bits, then taken down to 16 bits for release.
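That last step is a bit-depth reduction rather than a sample-rate change, and it is normally done with dither. A minimal sketch of TPDF-dithered requantization from 24 to 16 bits (the function name and test signal are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def reduce_24_to_16(x24):
    """Requantize 24-bit integer samples to 16 bits with TPDF dither.

    One 16-bit LSB spans 256 of the 24-bit steps; adding +/-1 LSB of
    triangular noise before rounding turns truncation distortion into
    benign, signal-independent noise.
    """
    lsb = 256  # 2**(24 - 16)
    tpdf = (rng.uniform(-0.5, 0.5, x24.shape) +
            rng.uniform(-0.5, 0.5, x24.shape)) * lsb
    x16 = np.round((x24 + tpdf) / lsb)
    return np.clip(x16, -32768, 32767).astype(np.int16)

# Example: a 24-bit sine of ~+/-20 steps, well below one 16-bit LSB (256)
t = np.arange(48_000) / 48_000
x24 = np.round(20 * np.sin(2 * np.pi * 1000 * t)).astype(np.int32)
print(reduce_24_to_16(x24)[:10])
```

The dither is what lets signals below the 16-bit step size survive the reduction as a statistical average instead of being truncated to silence.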
Or >24 bits, or floating-point math. We are just about always adding noise with every further processing step we take. Just because it is below the noise floor of the source doesn't mean it's negligible: noise sources add. It's just that in the analog domain the contribution of the noise sources is typically greater.