So my question is what’s the point of upsampling a source encoded in lower quality?
Upsampling/Oversampling has been used since digital audio was first released to consumers (~1984). The basic condition/requirement for digital audio is that all frequencies above half the sample rate (called the Nyquist Frequency or Nyquist Point) must be removed. So in the case of say CD, with a sampling rate of 44.1kS/s, all freqs above 22.05kHz need to be filtered out. It’s easy to create an analogue filter to remove freqs above 22.05kHz, but only if you start the filter far lower and have a large transition band. For example, start the filter at say 10kHz and have a transition band 12.05kHz wide, so the stop band is at 22.05kHz. The obvious problem with that is you’re reducing the level of the content above 10kHz, which is within the audible band and therefore not ideal for audible fidelity/transparency. The obvious solution is a filter starting above the audible band (20kHz) with a much narrower transition band, ~2kHz. The problem is, it’s very difficult and expensive to create a good analogue filter with such a narrow transition band.

The other way to tackle the problem is to oversample. For example, if we double the sample rate, so it’s now 88.2kS/s, then the Nyquist Point is at 44.1kHz and we can design a filter with a relatively wide transition band, say a transition bandwidth of 24.1kHz, but still starting outside the audible band at 20kHz. Problem solved. Or rather, it’s solved as far as the two required analogue filters are concerned (one in the ADC and one in the DAC when converting back), but we now have the problem of a sample rate of 88.2kS/s when we needed 44.1. So we still need a filter in the ADC (called a decimation filter) with a stop band at 22.05kHz, and a filter in the DAC (called an anti-image filter), also with a stop band at 22.05kHz, applied when we oversample, before handing over to the analogue reconstruction filter. The big difference is that these narrow transition band filters are now in the digital domain, while the analogue filters have relatively wide transition bands.

This was difficult in the early 1980’s because consumer digital hadn’t taken off and the available processing chips had relatively little computing power, so the digital filters were somewhat compromised. Of course, by the 1990’s consumer digital technology had exploded, the power of processing chips had increased by many orders of magnitude and it was no longer a problem. In fact, to relax the requirement on the analogue filters to even more trivial levels, 64x oversampling was introduced around 1987 and, as castleofargh stated, with just one bit. Then around the mid 1990’s came multi-bit oversampling at even higher rates; 512x is common these days, or even 1024x. Anyway, that’s the point of over/upsampling.
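For anyone who wants to see the digital half of that chain in practice, here’s a minimal sketch, assuming Python with NumPy/SciPy (neither of which is part of the discussion above). scipy.signal.resample_poly designs and applies the required sharp digital lowpass itself (a Kaiser-windowed FIR by default), so down=2 plays the role of the decimation filter and up=2 plays the role of the oversampling/anti-image filter:

```python
# Minimal sketch of the digital filters described above, assuming NumPy and SciPy.
import numpy as np
from scipy.signal import resample_poly

fs = 88200                               # 2x oversampled capture rate, in S/s
t = np.arange(fs) / fs
capture = np.sin(2 * np.pi * 1000 * t)   # stand-in for the ADC's 88.2kS/s output

# ADC side: decimation filter (stop band at 22.05kHz) then keep every other sample -> 44.1kS/s
cd_rate = resample_poly(capture, up=1, down=2)

# DAC side: oversample back to 88.2kS/s; the same digital lowpass now acts as the
# anti-image filter, leaving only a gentle analogue reconstruction filter to do.
oversampled = resample_poly(cd_rate, up=2, down=1)

print(len(capture), len(cd_rate), len(oversampled))  # 88200, 44100, 88200
```

Real converter chips do this in dedicated hardware and at much higher ratios (64x and up, as above), but the principle is the same.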
Upsampling can be used to apply "external" (e.g. in a computer) mathematically "better" reconstruction filters when going from digital to analog with a 16bit/44.1kHz source. This only "makes sense" when you use a DAC which does not apply its own internal reconstruction filters after your nice external ("better") reconstruction filters.
Upsampling has nothing to do with external or internal processing, and external processing in a computer isn’t mathematically “better”; it’s mathematically “worse”. Increasing the mathematical difficulty/processing power required to create a filter with the required properties (starting outside the audible band, with a stop band near the Nyquist Point) is worse, not better. And it does not “make sense” to buy a faulty DAC in the first place. A reconstruction filter is a required part of the digital to analogue conversion process, so a DAC that doesn’t have one is a faulty digital to analogue converter (DAC).
The crackpots state that the external filters are better than the internal ones of a DAC, due to there being more calculating power in an external computer than internally in a DAC.
Yep, that’s why they’re crackpots!
“More calculating power” solves the issue that existed 40 years ago but, as that issue was already solved 25-30 years ago (in the chips inside DACs), it’s now just snake oil. If I need to calculate 1 + 1, I can do that with the cheapest calculator; giving it to the world’s most powerful supercomputer won’t make any difference to the outcome. You could probably devise some hugely convoluted alternative way of calculating 1 + 1 that actually requires a supercomputer, but the result is still going to be 2, unless it’s broken!
Is this audible? Some say yes and some say no.
Again, the crackpots say yes; the reliable evidence, and therefore the rational people, say no.
Is that why oversampling is now called upsampling?
That’s a little complicated. Oversampling is technically a subset of upsampling. Upsampling is just increasing the sample rate, while oversampling is increasing the sample rate by an integer multiple of the original. So going from say 44.1kS/s to 48kS/s is upsampling, while going from 44.1kS/s to 88.2kS/s is both oversampling and upsampling, although oversampling should really be the correct term in this case, because it’s more precise. While that’s relatively simple, the whole thing was complicated by the fact that so many people didn’t understand or know this, so today the two terms have largely become interchangeable.
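To put numbers on that distinction, here’s a tiny hypothetical sketch (again assuming Python with NumPy/SciPy, which are not part of the discussion above): going to 48kS/s needs the non-integer ratio 160/147, while going to 88.2kS/s is a plain 2x.

```python
# Hypothetical sketch: upsampling (non-integer ratio) vs. oversampling (integer multiple).
import numpy as np
from scipy.signal import resample_poly

x = np.random.randn(44100)                        # one second at 44.1kS/s

upsampled   = resample_poly(x, up=160, down=147)  # 44.1 -> 48kS/s, not an integer multiple
oversampled = resample_poly(x, up=2,   down=1)    # 44.1 -> 88.2kS/s, 2x, so also oversampling

print(len(upsampled), len(oversampled))           # 48000, 88200
```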
G