22906
100+ Head-Fier
- Joined
- Feb 24, 2005
- Posts
- 385
- Likes
- 12
In the immortal words of Ice-T, "Don't hate the playa, hate the game"
castleofargh:
Let me start off by thanking you deeply for your criticism, without the likes of which scientific progress would not be possible. My response is as follows.
You are confusing "what the waveform looks like" with audio fidelity. No one has the right to say "what the waveform should look like," whether smooth, rough, staircased, sinusoid, or whatever. Visual representation in a graph is but one way of representing the real signal. The basis of the S-N theorem is that there exists something in reality called a "signal" which is independent of the way we choose to represent it (e.g. time domain voltage graph, FFT, sound pressure meter, what have you). My goal is simply to point out that the information contained in the signal is best preserved by a PCM encoding and R-2R conversion process, as sigma-delta conversion "smooths over" details that are present in the PCM encoded signal.
Because this allows for a lot of possible signals, we need an additional criterion to figure out which waveform we want, and that's where Nyquist comes in. The S-N theorem states that for a signal that was originally bandlimited such that it contains no content above 0.5Fs, the sampled representation is unique. In other words, for a given discrete time signal sampled at sample rate Fs, there is only one possible waveform that both passes through every sample point perfectly and contains no content above a frequency of 0.5Fs. So, this means that the only correct reconstruction* will meet the following 2 criteria:
1) It must pass through each sampled point
2) It must contain no frequency content above 0.5Fs**
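Those two criteria are exactly what textbook Whittaker–Shannon (sinc) reconstruction satisfies: each sample contributes one shifted sinc, every sinc is zero at all the other sample instants (criterion 1), and each sinc is bandlimited to 0.5Fs (criterion 2). A minimal numpy sketch, with the sample rate and tone frequency made up purely for illustration:

```python
import numpy as np

fs = 8.0                        # made-up sample rate
f = 1.0                         # made-up tone, well below 0.5*fs
n = np.arange(-64, 64)          # sample indices
x = np.sin(2 * np.pi * f * n / fs)

def sinc_reconstruct(t, samples, indices, fs):
    """Whittaker-Shannon interpolation: one shifted sinc per sample."""
    return np.sum(samples * np.sinc(fs * t - indices))

# Criterion 1: the reconstruction hits every sample point exactly,
# because sinc is 1 at its own sample instant and 0 at all others.
for k in [0, 5, 10]:
    i = np.where(n == k)[0][0]
    assert abs(sinc_reconstruct(k / fs, x, n, fs) - x[i]) < 1e-9

# Between samples it tracks the original bandlimited tone closely
# (the small residual is truncation: we only summed 128 sincs).
mid = sinc_reconstruct(0.5 / fs, x, n, fs)
assert abs(mid - np.sin(2 * np.pi * f * 0.5 / fs)) < 0.05
```

Criterion 2 holds by construction here: each sinc term has a flat spectrum up to 0.5Fs and nothing above it, so the sum cannot either.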
Saying a problem is old and that you're focusing on it isn't a personal attack, but since you want to play petty, I'll just drop you.
Indeed, that seems to be the best decision, I do not think m3_arun will ever be convinced by any counter-arguments, and it is just one of those "you cannot prove I am wrong under the rules I created" debates that keep going in circles.
What the signal is and isn't depends on our definition. The idea that every sampled signal should have one and only one "shape," as measured by a voltmeter at the DAC output, is an arbitrary standard, and one that, I am arguing, causes people to ignore the inferior resolution of sigma-delta designs, because no, they don't pass through every sample point perfectly. That is what a dynamic resolution test would show, if we could design one. Audio signals are band-limited by convention to 20 kHz, which most people agree is the limit of human hearing. That means any audio signal sampled above 40 kHz contains all the information necessary to reconstruct the original AUDIO signal perfectly, without extra filtering. Smoothness, and other visual criteria that look nice on oscilloscopes, are not required for proper digital-to-analog conversion.
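The band limit is doing real work in that claim: any content above 0.5Fs is irrecoverably folded down at sampling time, which is why the 20 kHz convention matters before we ever talk about reconstruction. A quick numpy check, with two hypothetical tones chosen for illustration (25 kHz and its fold-down at a 44.1 kHz sample rate):

```python
import numpy as np

fs = 44100.0                    # CD sample rate
n = np.arange(32)               # a handful of sample instants
f_hi = 25000.0                  # hypothetical tone above 0.5*fs (22050 Hz)
f_lo = fs - f_hi                # 19100 Hz: where the 25 kHz tone folds to

hi = np.sin(2 * np.pi * f_hi * n / fs)
lo = np.sin(2 * np.pi * f_lo * n / fs)

# Once sampled, the 25 kHz tone is indistinguishable from an inverted
# 19.1 kHz tone -- the information above 0.5*fs is gone for good.
assert np.allclose(hi, -lo)
```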
People are focusing their criticism on my lack of support for low-pass filtering, because the low-pass filter is the most important part of the sigma-delta design.
Your second point is, I believe, incorrect. While the original sampled signal (what we define as the audio signal) should not contain frequency components above 0.5Fs, the reconstructed signal will always contain aliasing, by the S-N theorem. But these aliases are not enemies; they are just unavoidable artifacts that do not affect the sound at all, because they sit above the band limit we have defined for our signal. In other words, the aliasing is not even part of the signal as we have defined it. If we suddenly decided that human hearing extends to 24 kHz, then yes, 44.1 kHz-sampled audio would be introducing a lot of frequency noise, but until then, we are safe.
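Those above-band artifacts are easy to see numerically. A sketch that models a DAC's staircase output as a zero-order hold and looks for the first image at Fs − f; all frequencies here are made up for illustration:

```python
import numpy as np

fs = 48000.0                  # hypothetical sample rate
f = 1000.0                    # hypothetical test tone
N = 4800                      # exactly 100 cycles, so FFT bins line up
x = np.sin(2 * np.pi * f * np.arange(N) / fs)

# Model the staircase output: hold each sample for 16 sub-steps
up = 16
zoh = np.repeat(x, up)

spec = np.abs(np.fft.rfft(zoh)) / len(zoh)
freqs = np.fft.rfftfreq(len(zoh), d=1.0 / (fs * up))

def peak_near(target, tol=200.0):
    """Largest spectral magnitude within +/- tol Hz of target."""
    band = (freqs > target - tol) & (freqs < target + tol)
    return spec[band].max()

# The intended tone sits at f; images appear at Fs - f, Fs + f, ...
assert peak_near(f) > 0.4            # base tone, ~0.5
assert peak_near(fs - f) > 0.005     # first image, well above 0.5*Fs
```

The image at 47 kHz is real energy in the staircase waveform, but it lies entirely above the 0.5Fs band limit, which is the point of the paragraph above.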
Thanks for not making science a personal issue.
- m3_arun
I'll let you find it yourself; hint: it's under the heading "Aliasing"