KeithEmo
Member of the Trade: Emotiva
Joined: Aug 13, 2014 · Posts: 1,698 · Likes: 868
There is a technical detail about the sample rate chosen for the Red Book CD standard that needs to be mentioned to put the choices made into historical context.
As has already been mentioned, in order to encode an analog signal without serious distortion, the analog signal MUST BE band-limited so that it contains no content above the Nyquist frequency (half the sample rate).
So, for example, if you're encoding audio at a 44.1k sample rate to put on a CD, you MUST pass that signal through a sharp low-pass filter that removes essentially all content above 22.05 kHz.
Likewise, when the signal is reconstructed, you MUST again pass the output through a sharp low-pass filter (the reconstruction filter) that removes the spectral images above 22.05 kHz.
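Here's a minimal numpy sketch of why that band-limiting matters: if a tone above the Nyquist frequency reaches the converter, it folds back ("aliases") into the audio band. The 25 kHz tone and the FFT peak-finding are just illustrative choices on my part, not anything taken from the standard.

```python
import numpy as np

fs = 44100.0           # CD sample rate
f_tone = 25000.0       # a tone above the 22.05 kHz Nyquist frequency
n = np.arange(4096)

# Sample the 25 kHz tone with no anti-aliasing filter in front of the "ADC".
x = np.sin(2 * np.pi * f_tone * n / fs)

# Find the strongest frequency present in the sampled data.
spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
print(f"Strongest component: {freqs[np.argmax(spectrum)]:.0f} Hz")
# Prints roughly 19100 Hz: the 25 kHz tone has aliased to fs - 25 kHz = 19.1 kHz.
```

Once that 19.1 kHz component exists in the samples, no later processing can tell it apart from a real 19.1 kHz tone... which is exactly why the filtering has to happen before the signal is sampled.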
If you wish to maintain a flat frequency response, with minimal phase shift and distortion, below 20 kHz, this calls for a filter that is flat up to 20 kHz yet provides 70 - 80 dB of attenuation at 22.05 kHz and above.
This poses a serious technical problem... because any filter with performance even approaching these requirements is very complex to design.
Even worse, building such a filter requires components with very precise values, and some of them are quite expensive.
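To get a feel for just how extreme the requirement is, here is a rough sketch using scipy's filter-order estimators, assuming a spec of 0.1 dB passband ripple out to 20 kHz and 80 dB of stopband attenuation at 22.05 kHz (my numbers for illustration, not anything from the Red Book standard):

```python
import numpy as np
from scipy.signal import buttord, ellipord

wp = 2 * np.pi * 20000          # passband edge: 20 kHz (rad/s)
ws = 2 * np.pi * 22050          # stopband edge: the Nyquist frequency (rad/s)
gpass, gstop = 0.1, 80          # assumed spec: 0.1 dB ripple, 80 dB attenuation

n_butter, _ = buttord(wp, ws, gpass, gstop, analog=True)
n_ellip, _ = ellipord(wp, ws, gpass, gstop, analog=True)
print("Minimum Butterworth order:", n_butter)   # well over 100
print("Minimum elliptic order:   ", n_ellip)    # around a dozen
```

Even the "cheap" option works out to roughly a dozen precision analog stages, each contributing its own component tolerances, ripple, and phase shift.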
Virtually all modern ADCs and DACs use oversampling...
Oversampling essentially uses a "trick" to allow the use of an analog filter that is far more gradual.
(This simplifies the process of designing a filter that is flat to 20 kHz, yet still provides excellent attenuation of aliases, and can be produced for a reasonable cost.)
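As a rough illustration of the "trick" (using the same assumed 0.1 dB / 80 dB spec as above): once the sharp filtering is done digitally at a higher rate, the first spectral image the analog filter has to deal with moves far above the audio band, so a very gentle analog filter is enough.

```python
import numpy as np
from scipy.signal import buttord

fs = 44100
gpass, gstop = 0.1, 80                 # assumed spec, same as before
wp = 2 * np.pi * 20000                 # analog filter must stay flat to 20 kHz

for ratio in (1, 2, 4, 8):
    # Without oversampling (ratio 1) the first image starts at the Nyquist
    # frequency; with N-times oversampling it starts near N*fs - 22.05 kHz.
    first_image = fs / 2 if ratio == 1 else ratio * fs - 22050
    n, _ = buttord(wp, 2 * np.pi * first_image, gpass, gstop, analog=True)
    print(f"{ratio}x oversampling -> analog Butterworth order about {n}")
# Goes from well over 100 with no oversampling down to just a few at 8x.
```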
HOWEVER, oversampling technology was NOT yet developed when the Red Book standard was created.
And, without oversampling, the design criteria for a proper filter are so extreme that most early equipment performed quite poorly (and equipment that performed well was extremely expensive).
Without oversampling, given the requirements for encoding and decoding signals "right up to the Nyquist frequency", there is a tradeoff:
- either use a somewhat gradual filter and accept a high-frequency roll-off that starts well below 20 kHz, along with significant high-frequency phase shift and significant aliasing distortion
- or design a very complex filter, which is difficult and expensive to produce, and which still introduces excessive phase ripple and other problems
Oversampling has essentially eliminated this issue... which is why it is so widely used.
However, since oversampling wasn't available when the standard was created, setting requirements for the standard that forced such a serious compromise was arguably a bad idea.
(Even raising the sample rate from 44.1k to 48k, as was recommended by some engineers at the time, would have significantly relaxed the tradeoff between cost, complexity, and performance.)
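For what it's worth, the same kind of back-of-the-envelope estimate (again with my assumed 0.1 dB / 80 dB spec) shows how much a wider transition band would have helped:

```python
import numpy as np
from scipy.signal import buttord

gpass, gstop = 0.1, 80                  # assumed spec, same as before

for fs in (44100, 48000):
    wp = 2 * np.pi * 20000              # flat out to 20 kHz
    ws = 2 * np.pi * (fs / 2)           # 80 dB down by the Nyquist frequency
    n, _ = buttord(wp, ws, gpass, gstop, analog=True)
    print(f"fs = {fs} Hz -> minimum analog Butterworth order about {n}")
# The transition band widens from about 2 kHz to 4 kHz, and the required
# order roughly halves in this estimate -- still a brutal filter, but
# noticeably less so.
```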
ADC, not DAC. In DACs you have a reconstruction filter, which is a different thing.
Actually, I said this in my #8. What I meant in #9 is that it would be stupid to bypass the filters.