Bonus Content: A Short and Irreverent History of Consumer Digital Audio
A long time ago, Sony and Philips got together to rewrite the whole audio reproduction chain. They created the Red Book CD standard, based on what they considered the minimum specs capable of producing good audio: 16-bit, 44.1 kHz sampling.
Back then, if you were using a 16-bit/44.1 kHz analog-to-digital converter and a 16-bit/44.1 kHz DAC, you *could* get bitperfect reproduction of the original samples. There were no sample rate converters, digital filters, or delta-sigma DACs "guessing" at the content. That's not to say it was all rosy, though:
1. In the early 1980s, 16/44.1 was pretty much beyond the limit of manufacturability for ADCs and DACs. The early players had 14-bit DACs with some hackwork applied, and might even have used a single DAC chip for both channels, multiplexed through a sample-and-hold. Yuck.
2. Linearity of many of these early DACs was pretty scary.
3. Non-oversampled 16/44.1 content requires a "brick wall" analog filter to eliminate out-of-band "images" (look up the Nyquist theorem if you're curious). In practice, this means an 8th- to 10th-order filter, which is expensive, hard to implement consistently with production-quality parts, and generally very scary.
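To put rough numbers on "brick wall," here's a back-of-the-envelope sketch in Python. The 96 dB target below is just an illustrative stand-in for the 16-bit noise floor, and the 6 dB/octave/pole figure is the textbook rolloff for a simple filter; real designs use steep elliptic filters to get away with far fewer poles, but the arithmetic shows why the problem is nasty:

```python
import math

FS = 44_100          # Red Book sample rate, Hz
PASSBAND = 20_000    # top of the audio band, Hz

# The lowest image of a 20 kHz tone sits at fs - f:
image = FS - PASSBAND            # 24,100 Hz

# Transition band available to the analog filter, in octaves:
octaves = math.log2(image / PASSBAND)   # only ~0.27 octave

# To knock images down ~96 dB (illustrative 16-bit noise floor) in that
# span, a plain 6 dB/octave-per-pole rolloff would need:
poles = 96 / (6 * octaves)

print(f"image at {image} Hz, {octaves:.2f} octaves above the passband")
print(f"~{poles:.0f} poles for a plain rolloff; hence the steep elliptic designs")
```

A simple rolloff would need a comically high order, which is why the practical answer was high-order elliptic filters right at the edge of the audio band, with all the phase and tolerance headaches that implies.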
But in the beginning, you could do bitperfect.
The rest of the story in consumer digital audio is a story of cost-cutting.
First cost cut: digital filtering (oversampling). That allowed manufacturers to throw out the analog brick wall filter, which was wwayyyyy cheaper. Unfortunately, all digital filter algorithms (except one) throw out the original samples in the process of upsampling. At that moment, the concept of "bitperfect" went out the window. Ah, well. It was cheaper.
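To see what "throwing out the original samples" means concretely, here's a toy 2x oversampler in Python: zero-stuff, then filter with a generic windowed-sinc lowpass. The 31-tap, 0.22-cutoff design is made up for illustration and is nothing like any real DAC's filter, but it makes the point: compare what lands at the original sample instants with what went in.

```python
import math

def lowpass_fir(num_taps, cutoff):
    """Windowed-sinc lowpass; cutoff as a fraction of the (new) sample rate."""
    mid = (num_taps - 1) / 2
    taps = []
    for n in range(num_taps):
        x = n - mid
        h = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / (num_taps - 1))  # Hamming window
        taps.append(h * w)
    return taps

def oversample_2x(samples, taps):
    """Zero-stuff by 2, then convolve with the interpolation filter."""
    stuffed = []
    for s in samples:
        stuffed.extend([s, 0.0])
    out = []
    for i in range(len(stuffed)):
        acc = 0.0
        for k, t in enumerate(taps):
            j = i - k
            if 0 <= j < len(stuffed):
                acc += t * stuffed[j]
        out.append(2 * acc)  # gain of 2 compensates for the stuffed zeros
    return out

# A short burst of "original samples":
orig = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5, 0.0, 0.0, 0.0, 0.0]
taps = lowpass_fir(31, 0.22)   # illustrative design, cutoff just under fs/4
up = oversample_2x(orig, taps)

# Look at what the filter put back at the original sample times
# (accounting for the filter's group delay of 15 output samples):
delay = 15
recovered = [up[2 * n + delay] for n in range(len(orig)) if 2 * n + delay < len(up)]
print("original:", [round(s, 3) for s in orig[:len(recovered)]])
print("filtered:", [round(s, 3) for s in recovered])
```

The values at the original sample instants come back close, but not equal, to what went in: bitperfect is gone. (A perfect half-band filter would preserve them exactly, which is one reason the "except one" exception exists at all.)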
Second cost cut: delta-sigma converters. Multibit converters are expensive. Delta-sigma is cheap. Simple as that. Add to that the fact that you can claim "24 bit" or whatever on the chip, based on the data it will accept, but not on actual resolution, and yep, delta-sigma took over. Unfortunately, delta-sigma, even in its "multibit modulator" variants, still does not retain the original samples. The modulator is an approximation engine by design: it chases the signal rather than reproducing each sample. However, delta-sigma has one huge benefit: it's so astoundingly cheap that you can have a very decent DAC in a throwaway laptop or cellphone.
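For the curious, here's a toy first-order delta-sigma modulator in Python. Real converters are far more elaborate (higher-order modulators, multibit quantizers, dithering), but the principle is the same: the output is a stream of crude guesses whose *average* tracks the signal, and no individual output value is the original sample.

```python
def delta_sigma_first_order(samples):
    """Toy first-order delta-sigma modulator with a 1-bit quantizer.
    Each input sample becomes +1.0 or -1.0; the quantization error is
    fed back so it cancels out over time (noise shaping)."""
    integrator = 0.0
    feedback = 0.0
    bits = []
    for x in samples:
        integrator += x - feedback     # accumulate signal minus last output
        bit = 1.0 if integrator >= 0.0 else -1.0
        bits.append(bit)
        feedback = bit                 # fed back on the next sample
    return bits

# Feed it a constant 0.4 "signal":
bits = delta_sigma_first_order([0.4] * 1000)
average = sum(bits) / len(bits)

print("first outputs:", bits[:10])           # all +/-1 -- none equal 0.4
print(f"average of the bitstream: {average:.3f}")  # ...but the mean tracks it
```

Every individual output is a full-scale +1 or -1; only the long-run density of ones encodes the 0.4. That's the sense in which the original samples never survive the trip.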
Third cost cut: asynchronous sample rate conversion. Once high-res came onto the scene, manufacturers quickly realized that managing a whole bunch of sample rates and bit depths is a royal pain in the butt. Much cheaper to just upsample everything to one rate. Plus, you can put even bigger numbers on the box, like 192 kHz.
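The arithmetic behind that is simple enough to sketch. Even assuming an ideal, perfectly synchronous resampler (real ASRCs, with their drifting clocks, are messier), 44.1 kHz to 192 kHz is a ratio of 640/147, so almost none of the input sample instants land on the output grid at all:

```python
from math import gcd

FS_IN, FS_OUT = 44_100, 192_000
g = gcd(FS_IN, FS_OUT)                 # 300
up, down = FS_OUT // g, FS_IN // g     # 640 / 147

# Input sample k (at time k/44100) falls on the 192 kHz grid only when
# k/44100 == m/192000 for some integer m, i.e. 640*k == 147*m.
# Since gcd(640, 147) == 1, that requires k to be a multiple of 147:
kept = sum(1 for k in range(down) if (up * k) % down == 0)

print(f"ratio {up}/{down}: {kept} of every {down} input samples lands on the output grid")
```

146 out of every 147 output-relevant instants have no corresponding input sample, so every one of them has to be interpolated. Guessed, in other words.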
That brings us to today. Most every DAC out there uses digital filtering and delta-sigma conversion. Everybody is guessing--the original samples are gone. Now, that isn't to say this can't sound very good. Hell, we make some of them, and they sound very good. But it is a mathematical fact that the original samples are lost in the playback process.
What we're trying to do, and what we will shortly introduce, is a DAC that is bitperfect from input to output, using the only digital filter with a closed-form solution, plus true multibit DACs. This may be an insane goal in today's world, but in our philosophy, "as true to the source as possible" starts with retaining the original samples. And that's what we'll do. It's up to you to decide if it's sonically meaningful, though...