That was helpful! Are you also able to explain what the megaburrito filter does?
I'll take a stab at this too. Again, I will probably get some technical details wrong, or leave some out, but the general idea should be correct. Someone will probably tell me if I'm way off base.
So we've got our multi-bit DAC and we're ready to feed CD audio to it at a 44.1 kHz sampling rate, meaning 44,100 samples per second go to the DAC to be transformed into an analog output waveform, which we can then hear through our audio equipment. But there's a problem before we even get started. It's called aliasing. Aliasing in digital audio shows up as high-frequency content, above the normal cutoff frequency, which isn't part of the music or the original signal. This aliasing is "noise" for all intents and purposes; it's not helpful. In a 44.1 kHz system, this noise shows up above 44.1 kHz / 2 = 22.05 kHz.
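If you want to see the arithmetic, here's a quick numeric sketch (my own illustration, nothing official) of where the unwanted images of a tone land. The tone frequency is just an example I picked:

```python
# Sketch: where the spectral images of a tone land at a DAC's raw output.
# An unfiltered DAC running at sample rate fs reproduces a tone at f, but
# the raw output also contains images at n*fs - f and n*fs + f.
fs = 44_100        # CD sample rate, Hz
f = 10_000         # example tone, Hz
nyquist = fs / 2   # 22,050 Hz: the highest frequency the format can carry

print(f"Nyquist frequency: {nyquist:.0f} Hz")
for n in (1, 2):
    print(f"images around {n}*fs: {n * fs - f} Hz and {n * fs + f} Hz")
# -> images at 34,100 / 54,100 / 78,100 / 98,100 Hz, all above 22,050 Hz,
#    which is why a filter starting above Nyquist can remove them.
```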
Early DACs (in CD players) used a very steep analog filter on the output, which would dramatically reduce the output of the DAC at 22.05 kHz (and up from there). These filters are very, very steep, so they were nicknamed "brick wall filters," because that's what the graph looks like: a wall starting at the cutoff frequency. The problem with brick wall filters is that they introduce phase shifts. These phase shifts reach back below the cutoff point, into the audible band from 20 kHz on down. Phase is related to time, so these phase shifts can also be thought of as time shifts, or time errors. Some call this "smearing" the time or phase. Whatever you call it, it's not what you want if you want a "true" signal coming out of your DAC.
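Here's a little sketch of that time smearing, using a steep digital elliptic filter as a stand-in for the analog brick wall (the order, ripple, and corner frequency are just my guesses at something representative, not any real player's filter):

```python
import numpy as np
from scipy import signal

fs = 44_100
# Stand-in for an analog "brick wall": a steep elliptic lowpass with its
# corner just above 20 kHz (8th order, 0.5 dB ripple, 80 dB stopband).
b, a = signal.ellip(8, 0.5, 80, 20_500, btype='low', fs=fs)

# Group delay = how long each frequency is delayed by the filter.
# A flat curve means all frequencies arrive together; this one isn't flat.
w, gd = signal.group_delay((b, a), w=2048, fs=fs)

for f in (1_000, 10_000, 18_000, 20_000):
    idx = np.argmin(np.abs(w - f))
    print(f"{f:>6} Hz: group delay ~ {gd[idx]:.1f} samples")
# The delay climbs steeply toward the corner: frequencies near 20 kHz come
# out later than the midrange. That's the "smearing" described above.
```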
So how do you fix this? One way is upsampling, or oversampling. There's a technical difference between the two terms which I honestly don't completely understand. Here's what you need to know: in this context, it means changing the sampling rate from 44.1 kHz UP to a higher frequency like 88.2 kHz, or even 176.4 kHz. Once you have your audio at this new, higher sample rate, the aliasing noise gets shifted up to a higher frequency, and now we can use a filter at a high enough frequency that no phase shift reaches back into the band we can hear. By the time you get down to 20 kHz, there is essentially no phase shift at all.
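Here's a sketch of that, using scipy's stock resampler. This just illustrates the idea, it's not any particular DAC's filter:

```python
import numpy as np
from scipy import signal

fs, up = 44_100, 4
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 19_000 * t)       # a 19 kHz tone sampled at 44.1 kHz

# 4x oversampling: the data is now at 176.4 kHz.
y = signal.resample_poly(x, up, 1)

# Spectrum of the oversampled signal (windowed to reduce leakage).
Y = np.abs(np.fft.rfft(y * np.hanning(len(y))))
freqs = np.fft.rfftfreq(len(y), d=1 / (fs * up))

tone = Y.max()
residue = Y[freqs > 25_000].max()
print(f"19 kHz tone:            {tone:.1f}")
print(f"worst residue > 25 kHz: {residue:.4f}")   # tiny by comparison
# Digitally, everything above the audio band is now quiet; at the DAC
# output the first image sits near 176.4 - 19 = 157.4 kHz, so a gentle
# analog filter rolling off between 20 kHz and ~157 kHz finishes the job.
```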
But how do we do this oversampling? By adding in more samples. If you go from 44.1 kHz to 88.2 kHz, you have to double the number of samples. If you go from 44.1 to 176.4, you need four times as many.
Well, how do you do *that*? You make up the samples by estimating, or "guessing." You can't just repeat each sample; that wouldn't work. You have to interpolate between the existing samples and make up new data points that fit in correctly. Simple averaging won't work either; that would make the waveforms start to look like triangle waves.
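Here's a little comparison of those three approaches on a test tone. The exact numbers depend on the tone and the filter, but the ranking is the point:

```python
import numpy as np
from scipy import signal

fs, up = 44_100, 2
n = np.arange(256)
x = np.sin(2 * np.pi * 5_000 * n / fs)                    # 5 kHz test tone
true = np.sin(2 * np.pi * 5_000 * np.arange(up * len(n)) / (up * fs))

# 1. Repeat each sample (zero-order hold): stair-steps, not a sine.
repeat = np.repeat(x, up)

# 2. Average neighbors (linear interpolation): straight-line segments,
#    the "triangle wave" tendency mentioned above.
linear = np.interp(np.arange(up * len(n)) / up, n, x)

# 3. Windowed-sinc interpolation: weights many neighbors at once and
#    lands much closer to the waveform the samples actually describe.
sinc = signal.resample_poly(x, up, 1)

for name, y in (("repeat", repeat), ("linear", linear), ("sinc", sinc)):
    err = np.max(np.abs(y[32:-32] - true[32:-32]))        # skip edge effects
    print(f"{name:>6}: worst-case error = {err:.5f}")
```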
There are several ways of doing this, and frankly I don't understand the math. I *do* know that (almost) every method of upsampling (oversampling) digital data involves successive approximation, which means multiplying each sample by a set of values that transform those samples into new ones. Pay attention here, because this is the important part: as I understand it, this process throws away every single sample that is fed into it. Let's say you feed in 1 second of data, which is 44,100 samples. At the output you get 88,200 samples. How many of the 44,100 from the input make it to the output untouched? I would have expected all of them. But I would be WRONG about that. NONE of them make it through. All 88,200 samples that come out of this process are brand new. Unless I'm wrong about this (and I don't think I am), this is mind-blowing: upsampling (oversampling) destroys all of the original data!
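Here's a sketch of the textbook version of this process: insert zeros between the samples, then lowpass-filter to fill them in. The filter here is a generic design I picked for illustration, not anyone's actual product, but you can see the original samples getting replaced:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000)          # stand-in for a stretch of audio

# Classic 2x interpolation: insert a zero between every sample ("zero
# stuffing"), then lowpass-filter to fill in the gaps.
up = 2
stuffed = np.zeros(up * len(x))
stuffed[::up] = x

# A generic anti-image lowpass (cutoff a bit below the old Nyquist to
# leave room for a transition band: a typical illustrative design).
h = signal.firwin(101, 0.45) * up       # gain of `up` restores amplitude
y = signal.lfilter(h, 1.0, stuffed)

# Compare the output at the original sample positions with the input.
delay = 50                              # filter group delay (taps // 2)
recovered = y[delay::up][:900]
print(np.allclose(recovered, x[:900]))          # False
print(np.max(np.abs(recovered - x[:900])))      # a clearly nonzero error
# Every output sample is a weighted sum of many inputs; with a filter like
# this one, none of the original values survive exactly.
```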
This is how nearly every multi-bit DAC works. Very early DACs used an analog brick wall filter; all of the later ones used oversampling as I described above. Remember up above when I said that *almost* every method of oversampling throws away the original samples? I said that because ONE oversampling method does not throw away the samples. That method is Schiit's proprietary "megacombo burrito filter." The Mega filter uses math that is able to do these interpolations, to make up the new samples we need for our higher sampling rate, while keeping all of the original samples intact. So we get our upsampled data, but we also KEEP all of the original data. When we feed 44,100 samples in and get (for example) 176,400 out, all 44,100 of the original samples are included in the output. Intact. Unaltered.
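Schiit's actual math is proprietary, so I can't show it. But purely to illustrate the *idea* of keeping the originals, here's a toy interpolator that copies every input sample straight through and only computes the new midpoint samples. To be clear, this is NOT Schiit's method, just a demonstration that sample-preserving upsampling is possible:

```python
import numpy as np

def interpolate_2x_keep_originals(x, half_taps=16):
    """2x interpolation that leaves every input sample untouched.

    Even output samples are copied straight from the input; only the odd
    (midpoint) samples are computed, using a symmetric windowed-sinc
    kernel. Just an illustration, not Schiit's proprietary math.
    """
    # Symmetric kernel for the midpoints: sinc at half-integer offsets.
    k = np.arange(-half_taps, half_taps)
    kernel = np.sinc(k + 0.5) * np.hamming(2 * half_taps)
    kernel /= kernel.sum()                        # unity gain at DC

    xp = np.pad(x, half_taps)                     # zero-pad the edges
    y = np.empty(2 * len(x))
    y[0::2] = x                                   # originals: untouched
    for i in range(len(x)):
        window = xp[i + 1 : i + 2 * half_taps + 1]
        y[2 * i + 1] = window @ kernel            # new midpoint samples
    return y

x = np.sin(2 * np.pi * 0.05 * np.arange(200))
y = interpolate_2x_keep_originals(x)
print(np.array_equal(y[0::2], x))                 # True: originals intact
```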
The benefits of this are beyond my current technical understanding. Mike Moffat says that keeping the original samples preserves the timing information in the music, timing information that would otherwise be altered by a conventional successive approximation upsampling method. I'm inclined to believe Mr. Moffat, as his technical knowledge on this subject dwarfs mine. He's also devoted a huge chunk of his time (and his life) to developing this method and this math, so it's obviously rather important to him.
So that's it: the Megacombo burrito filter preserves all of the original samples when oversampling, so that we can use our multi-bit DAC (at a high sample rate) with a gentle (non-brick-wall) filter to remove aliasing noise without affecting the audible band of frequencies.
Brian.