macaque
New Head-Fier
- Joined
- Apr 23, 2010
- Posts
- 5
- Likes
- 0
Quote:
Originally Posted by leeperry
simple link: Upsampling vs. Oversampling for Digital Audio
oversampling in the DAC: you want it as high as possible to decrease aliasing and increase the conversion resolution. the AK4396 does it at 128X rate for all sampling rates.
upsampling in the source: terrible idea IMO (and that link's conclusion agrees), all it does is feed worthless interpolated data and increase THD+N dramatically (the sound will appear brighter and more distorted):
I don't think you understood my post, or you're not relating it to the quoted paper correctly. Also, please explain your graphs and what they are actually from.
The ADC oversampling/aliasing portion is irrelevant to what I'm talking about, since it happens before you ever get the music. The point is that you obtain the music encoded at 16/44.1, and the assumption with good recordings is that the work has been done to ensure a properly noise-shaped data set.
Now you have to play back the 16/44.1 data. You need a reconstruction filter with a sharp cutoff in the 20-22.05 kHz range. We have already concluded that doing this with analog components is impossible, and as good as you can make it, it is still impossible to match 2 (or more) channels exactly with physical components. By oversampling or upsampling, you get to implement this part as a digital filter and use an analog reconstruction filter for the new sample rate, with a cutoff so far away from any actual signal that it doesn't matter how badly it behaves near the cutoff, or whether the analog filters for each channel even match closely. (And the digital filters match perfectly across multiple channels.)
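To make this concrete, here is a minimal sketch (assuming numpy/scipy; the tap count, cutoff, and 4x rate are hypothetical, not from any particular DAC) of the kind of sharp linear-phase digital lowpass that does this job at an oversampled rate. The same coefficient set serves every channel, which is exactly why the channels match perfectly:

```python
import numpy as np
from scipy.signal import firwin, freqz

fs = 44100 * 4            # hypothetical 4x oversampled rate
numtaps = 2047            # long, odd-length FIR -> linear phase
cutoff = 21000            # Hz, in the 20-22.05 kHz transition region

# Design a sharp linear-phase lowpass. Applying these exact
# coefficients to every channel means the channels match perfectly,
# unlike matched analog components.
taps = firwin(numtaps, cutoff, fs=fs)

# Inspect the response: near-unity in the passband (1 kHz), heavily
# attenuated just above the 22.05 kHz Nyquist of the original data.
w, h = freqz(taps, worN=[1000, 24000], fs=fs)
passband_db = 20 * np.log10(abs(h[0]))
stopband_db = 20 * np.log10(abs(h[1]))
```

The point of the sketch is only that a digital filter of this type can be made arbitrarily sharp and is bit-identical across channels, which no pair of physical analog filters can be.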
So oversampling DACs have such a digital filter built in. How good is it? That depends on the DAC: how large a data window it works with, the precision of its calculations, etc. Once all this is set, it is fixed in the DAC forever. Look at any oversampling DAC data sheet and it will describe this digital filter (often there is more than one, selectable between sharp and slow cutoffs).
There are really four choices for implementing the 20-22.05 kHz filter:
1) non-oversampling DAC and analog filter (with the accuracy and matching problems above)
2) non-oversampling DAC and no filter (the paper describes the potential problems) - this can be simulated by upsampling with no filter and playing back through an oversampling DAC. I've tried it: it sounds pleasant until you compare it to proper filtering (on my system). I suspect this is what your graphs are showing.
3) oversampling DAC with its built-in filter
4) upsampling before an oversampling DAC. You get to make the filter; the DAC's filter is still present, but its cutoff is far away from the audio band, so you are essentially moving the 20-22.05 kHz filter from the DAC to yourself.
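Option 4 can be sketched in a few lines (assuming numpy/scipy as the offline tool; the 4x ratio and test tone are illustrative, not a claim about any specific player or DAC):

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs) / fs                  # 1 second of audio at 44.1 kHz
x = 0.5 * np.sin(2 * np.pi * 1000 * t)  # 1 kHz test tone

# Upsample 4x offline, applying our own anti-imaging filter in the
# process (resample_poly uses a Kaiser-windowed FIR internally). The
# DAC downstream then only sees content far below its own cutoff.
y = resample_poly(x, up=4, down=1)
```

After this step the DAC's built-in filter still runs, but everything it must reject lies far above the audio band, so its behavior near its cutoff no longer matters.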
#4 is what I am doing, because I believe I can make a better offline filter than the one built into any DAC - mainly because I have the benefit of far more information than the DAC could possibly have in real time, and I can use higher-precision math (64-bit floating point, 80-bit internal in the FPU).
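The precision point is easy to demonstrate. A rough sketch (hypothetical filter and signal; real DAC datapaths are typically fixed-point, so float32 here only stands in for "lower-precision hardware arithmetic"): run the same long convolution in 32-bit and 64-bit floating point and compare.

```python
import numpy as np

rng = np.random.default_rng(0)
taps = rng.standard_normal(4095)     # a long hypothetical filter
x = rng.standard_normal(1 << 15)     # white-noise stand-in for audio

y64 = np.convolve(x, taps)                      # 64-bit reference
y32 = np.convolve(x.astype(np.float32),
                  taps.astype(np.float32))      # lower-precision path

# Rounding error accumulated across thousands of multiply-adds:
rel_err = np.max(np.abs(y64 - y32)) / np.max(np.abs(y64))
```

The 32-bit result differs measurably from the 64-bit one; an offline filter gets to keep every intermediate at full precision.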
This is not about finding "worthless interpolated data"; it is about using the math to calculate the correct values of this data, much the same way the DAC will try to do it, only _better_. We know what the filter should be to get the right values for the new samples; we can simply implement a more accurate version of it. It is not curve fitting or any other kind of "connect the dots" interpolation, it is "apply the correct filter" interpolation. This is why you will likely see peaks above those in your original data set.
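The inter-sample-peak effect is easy to show (numpy/scipy sketch; the fs/4 tone with a 45-degree phase offset is a deliberately constructed worst case, not real program material): sample a full-scale tone exactly between its peaks, and correct-filter interpolation recovers peaks above every stored sample value.

```python
import numpy as np
from scipy.signal import resample_poly

n = np.arange(4096)
# A full-scale fs/4 tone sampled exactly between its peaks: every
# stored sample is +/-0.7071, yet the underlying waveform hits 1.0.
x = np.sin(2 * np.pi * 0.25 * n + np.pi / 4)

# "Apply the correct filter" interpolation at 8x recovers the
# inter-sample peaks - values above anything in the original data.
y = resample_poly(x, up=8, down=1)
```

No information was added; those peaks were always implied by the 44.1 kHz samples, and the filter merely computes them.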
This whole exercise has nothing to do with adding or manipulating information. It is 100% about extracting the 16/44.1 data to an analog waveform as accurately as possible, and one of the biggest impediments to doing that is building the 20-22.05 kHz filter.