The Optimal Sample Rate for Quality Audio
May 18, 2012 at 5:13 AM Post #16 of 32
I have the qualification of being able to read. From the top of the sampling theory whitepaper: "Nyquist Sampling Theory: A sampled waveforms contains ALL the information without any distortions, when the sampling rate exceeds twice the highest frequency contained by the sampled waveform." This is simply incorrect, as mentioned in a couple of other posts. I don't dispute that most of a signal can be reproduced with a reasonable number of samples, but saying that all of it is reproduced (which is repeated, again in caps, later on the same page) is misleading.

The Nyquist theorem assumes infinite accuracy as well. And it's a mathematical theorem, with a thorough mathematical proof behind it.
In the real world that would be impossible, of course.

Dismissing the Nyquist theorem, which is fundamental to signal processing, as false is like saying the Pythagorean theorem can't be true because you can't have a perfect right angle in the real world.
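For reference, the textbook statement is usually given via the Whittaker–Shannon reconstruction formula: a signal bandlimited below half the sampling rate f_s = 1/T is recovered exactly from its samples by

x(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\operatorname{sinc}\!\left(\frac{t - nT}{T}\right),
\qquad \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}

The infinite sum over n is exactly where the "infinite time" and "infinite accuracy" caveats live.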
 
May 18, 2012 at 9:28 AM Post #17 of 32
Quote:
I contacted Dan and asked him to take a look at the ongoing discussion. He is busy with new designs, but took a minute to address the question of how many samples are required for good results:......snip.....

Thanks for getting back to me.
 
Perhaps I should rephrase the question.
 
In the paper, it is implied that temporally shifting the sampling times for the same waveform will still reconstruct the exact same waveform. If the reconstruction is a simple one-sample S/H followed by a brickwall filter, I question the equivalence in both amplitude and phase. A simple thought exercise should confirm or deny part of this.
 
Sample a one-volt 1 kHz sine at 4 kHz. Sample the sine at 0, 90, 180, and 270 degrees. The data series will be 0, 1, 0, -1. Now sample it at 45, 135, 225, and 315 degrees. The data series is now .7071, .7071, -.7071, -.7071. Using a simple S/H, these two data streams will clearly produce different patterns going into the brickwall. The first is quarter-millisecond-wide +/-1 rectangles; the second is a half-millisecond square wave of +/-.7071 amplitude. Clearly the energy in the signal remains the same (1 squared (1) times .25 = .25, and .7071 squared (.5) times .5 = .25), so the amplitude will be identical through a brickwall despite a clearly different input waveform... allaying my first concern of amplitude...
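As a quick numerical check of the thought exercise (a sketch only: truncated sinc interpolation standing in for the ideal S/H-plus-brickwall chain), both sample streams reconstruct to the same waveform in amplitude and phase:

import numpy as np

fs, f0 = 4000.0, 1000.0
n = np.arange(-1000, 1000)                 # half a second of samples
t1 = n / fs                                # 0/90/180/270-degree sampling instants
t2 = (n + 0.5) / fs                        # 45/135/225/315-degree sampling instants
x1 = np.sin(2 * np.pi * f0 * t1)           # data: 0, 1, 0, -1, ...
x2 = np.sin(2 * np.pi * f0 * t2)           # data: .7071, .7071, -.7071, -.7071, ...

t = np.linspace(-0.005, 0.005, 801)        # fine time grid, mid-window

def reconstruct(x, tx, t):
    # truncated Whittaker-Shannon (sinc) interpolation
    return x @ np.sinc(fs * (t[None, :] - tx[:, None]))

y1 = reconstruct(x1, t1, t)
y2 = reconstruct(x2, t2, t)
print(np.max(np.abs(y1 - y2)))                          # tiny: window truncation only
print(np.max(np.abs(y1 - np.sin(2 * np.pi * f0 * t))))  # both match the original sine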
 
My question is phase. Will this system preserve phase, more specifically interchannel timing relationships, if two 5 or 10 kHz sines are simultaneously A/D'd, then DAC'd and filtered, with the two signals temporally shifted from 0 to, say, 250 µs in steps of 5 µs?
 
I would scope the input and measure the zero-crossing difference, then do the same at the output. Graphing source delay vs. output delay should produce a straight line. The question is, does it?
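In simulation, at least, the line comes out straight. A sketch of the test (assumed parameters, software only, so it says nothing about actual converter hardware; and since a single 10 kHz tone is ambiguous beyond one 100 µs period, the sweep here stops at 45 µs):

import numpy as np

fs, f0 = 44100.0, 10000.0
tn = np.arange(1323) / fs                  # 30 ms of samples per channel
tf = np.arange(0.014, 0.016, 1e-6)         # 2 ms fine grid, mid-window

def reconstruct(x, t):
    return x @ np.sinc(fs * (t[None, :] - tn[:, None]))

def demod(y, t):
    # complex amplitude of the f0 tone; relative angles give the delay
    return np.sum(y * np.exp(-2j * np.pi * f0 * t))

z0 = demod(reconstruct(np.sin(2 * np.pi * f0 * tn), tf), tf)
for d in np.arange(0.0, 50e-6, 5e-6):      # channel B delayed 0..45 us in 5 us steps
    zd = demod(reconstruct(np.sin(2 * np.pi * f0 * (tn - d)), tf), tf)
    rec = np.angle(z0 * np.conj(zd)) / (2 * np.pi * f0)
    print(f"{d * 1e6:5.1f} us in -> {rec * 1e6:8.3f} us out")

Whether the hardware matches the math is exactly the question for the bench.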
 
j
 
May 18, 2012 at 9:41 AM Post #18 of 32
Quote:
If reconstruction occurs one sample at a time with no regard to past or future, then higher than Nyquist rates may be required to retain signal information.
 
If one were to sample a 1 kHz sine at a 2 ksps rate, the outcome is phase dependent: you might catch only the zero crossings, or you might catch only the peaks. If you reduce the measured sine to 999 Hz, the digital stream will vary at a 1 Hz rate, and the one-sample reconstruction will change amplitude from zero to full scale at a 1 Hz rate. Brickwall filtering of that output will not recover from that modulation artifact.

 
Actually, it would, if the brickwall filter has properties sufficiently close to ideal. But the filter obviously needs to know what the signal was in the past (knowing future input would also be needed for a linear-phase filter). The "one sample reconstruction" just creates ultrasonic mirror images of the audio spectrum, and adds high-frequency roll-off depending on how the samples are connected. The modulation effect is the result of the presence of the 1001 Hz (and 2999, 3001, 4999, 5001, and so on) mirror frequencies; below 1 kHz there is still only a 999 Hz tone.
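A numerical sketch of this (an idealized toy on a fine grid, not a real DAC): the staircase from sampling 999 Hz at 2 ksps beats at 1 Hz, but an ideal 1 kHz brickwall leaves a steady 999 Hz tone:

import numpy as np

fs, f0, ovs = 2000, 999.0, 64              # sample rate, tone, "analog" oversampling
N = 2 * fs                                 # 2 s of samples: two full beats, exact FFT bins
x = np.sin(2 * np.pi * f0 * np.arange(N) / fs)
zoh = np.repeat(x, ovs)                    # the one-sample S/H staircase on the fine grid

Z = np.fft.rfft(zoh)
f = np.fft.rfftfreq(zoh.size, d=1.0 / (fs * ovs))
Z[f > fs / 2] = 0.0                        # ideal brickwall at 1 kHz
y = np.fft.irfft(Z, zoh.size)

def peak_per_block(s):
    return np.abs(s).reshape(200, -1).max(axis=1)   # peak in each 10 ms block

print(peak_per_block(zoh).min(), peak_per_block(zoh).max())  # beats: ~0.06 to ~1.0
print(peak_per_block(y).min(), peak_per_block(y).max())      # steady ~0.64, no beat

The constant ~0.64 is the S/H droop, sinc(999/2000); a real DAC compensates for that, but either way the 1 Hz beat is entirely carried by the mirror frequencies and vanishes with them.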
 
May 18, 2012 at 10:01 AM Post #19 of 32
Quote:
 
Actually, it would, if the brickwall filter has properties sufficiently close to ideal. But the filter obviously needs to know what the signal was in the past (knowing future input would also be needed for a linear-phase filter). The "one sample reconstruction" just creates ultrasonic mirror images of the audio spectrum, and adds high-frequency roll-off depending on how the samples are connected.

Yep. However, it is not intuitive (at least to me) that simply removing the aliasing above the band retains temporal information at the 5 µs level. I've assumed the brickwall response removes the distinction between the two data streams I mentioned (not the signal you just responded to, but the post just prior to yours).
 
The 7220 (?) brickwall has a lot of depth, but I forget its internal organization. Since it uses 60 12-bit coefficients folded, I assume the back half of the fold is the past and the unfolded 60 is the future, the output being the center of the widget...
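I don't recall the internals either; generically, "folding" in a linear-phase FIR exploits the coefficient symmetry h[k] = h[N-1-k], so 60 stored coefficients can serve 120 taps, each multiply pairing one "past" and one "future" sample about the center. A generic sketch (not that chip's actual architecture):

import numpy as np

def folded_fir(x, c):
    # c: the 60 stored coefficients; the full response is h = [c, reversed(c)],
    # 120 taps, symmetric, hence linear phase (constant group delay, mid-window)
    N = 2 * len(c)
    y = np.zeros(len(x) - N + 1)
    for i in range(len(y)):
        w = x[i:i + N]
        y[i] = np.dot(c, w[:len(c)] + w[:len(c) - 1:-1])  # fold: pair x[k] with x[N-1-k]
    return y

# identical to convolving with the full symmetric response,
#   np.convolve(x, np.concatenate([c, c[::-1]]), mode="valid")
# but with half the multiplies.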
 
 
It would be comforting to see the test I mentioned carried out. Perhaps Dan has already done that?
 
j
 
May 18, 2012 at 10:12 AM Post #20 of 32
Quote:
Yep. However, it is not intuitive (at least to me) that simply removing the aliasing above the band retains temporal information at the 5 µs level.

 
It does. If each sample of a digital signal is converted to an "infinitely" short pulse, with zero between the pulses, then the result is the "correct" analog signal (albeit attenuated to an "infinitely" small amplitude), plus infinite high-frequency mirror images above the Nyquist frequency. Convolving this with an impulse response of a one-sample-long pulse will produce a crude "stair step" reconstruction, a two-sample-long triangle will result in linear interpolation, and convolving with an infinitely long sinc window creates the "ideal" reconstructed analog waveform; the relationship between any two of these methods of reconstruction is linear time invariant, i.e., a filter.
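A toy sketch of those three kernels side by side (assumed numbers: a 19 Hz tone sampled at 100 sps, reconstructed on a 32x fine grid); the stair step is crudest, the triangle is better, and lengthening the windowed sinc drives the error toward zero, which is the "infinitely long sinc" limit:

import numpy as np

fs, ovs = 100, 32                          # sample rate and fine-grid oversampling
tn = np.arange(64) / fs
x = np.sin(2 * np.pi * 19 * tn)            # a 19 Hz tone, below the 50 Hz Nyquist

train = np.zeros(tn.size * ovs)            # "infinitely" short pulses, zero in between
train[::ovs] = x

u = np.arange(-4 * ovs, 4 * ovs + 1) / ovs # kernel support, in units of one sample
kernels = {
    "stair step": ((u >= -0.5) & (u < 0.5)).astype(float),  # one-sample rectangle
    "linear":     np.maximum(1.0 - np.abs(u), 0.0),         # two-sample triangle
    "sinc":       np.sinc(u),                               # sinc truncated to +/-4 samples
}
tf = np.arange(train.size) / (fs * ovs)
ref = np.sin(2 * np.pi * 19 * tf)
for name, h in kernels.items():
    y = np.convolve(train, h, mode="same")                  # reconstruction = convolution
    err = np.max(np.abs(y - ref)[8 * ovs:-8 * ovs])         # ignore the window edges
    print(f"{name:10s} max deviation from the analog waveform: {err:.3f}")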
 
May 18, 2012 at 10:32 AM Post #21 of 32
Quote:
 
It does. If each sample of a digital signal is converted to an "infinitely" short pulse, with zero between the pulses, then the result is the "correct" analog signal (albeit attenuated to an "infinitely" small amplitude), plus infinite high-frequency mirror images above the Nyquist frequency. Convolving this with an impulse response of a one-sample-long pulse will produce a crude "stair step" reconstruction, a two-sample-long triangle will result in linear interpolation, and convolving with an infinitely long sinc window creates the "ideal" reconstructed analog waveform; the relationship between any two of these methods of reconstruction is linear time invariant, i.e., a filter.

I used sample-wide pulses to calculate the energy; Dirac impulses of course do the same, assuming I scale for energy.
 
However, I'd still like to see how well the hardware holds interchannel temporal relations at the 5 µs level, so I asked lavrytech to pass it on. Theory and hardware don't necessarily converge...
 
Thanks, it's been a long while... I think I need an aspirin.
 
j
 
May 18, 2012 at 11:30 AM Post #22 of 32
Perhaps you misunderstood me. Short version: what Lavry stated as the Nyquist theorem in the paper is not the actual theorem.
Quote:
Dismissing the Nyquist theorem, which is fundamental to signal processing, as false is like saying the Pythagorean theorem can't be true because you can't have a perfect right angle in the real world.

 
May 18, 2012 at 11:34 AM Post #23 of 32
Quote:
Perhaps you misunderstood me. Short version: what Lavry stated as the Nyquist theorem in the paper is not the actual theorem.

Then what is? (just curious)

I always thought it was something along the lines of:
"A signal whose bandwidth is limited to f0 can be perfectly reconstructed if sampled at a frequency of 2f0, given that the sampling is of infinite length."
 
May 18, 2012 at 4:16 PM Post #24 of 32
Quote:
Perhaps you misunderstood me. Short version: what Lavry stated as the Nyquist theorem in the paper is not the actual theorem.

 
The Nyquist theorem is correct and it is proven. It is based on the assumption that sampling is ongoing forever: it started at minus infinity and will go on forever. From a practical standpoint, one needs to deal with the real world, so sampling cannot start at the "beginning of time." The Nyquist theorem is of great practical value because it enables us to approach great accuracies. The good news is that one can approach those accuracies within a small fraction of a second's worth of samples. I am talking about a part-in-a-million accuracy in such a short time.
 
In my paper Sampling Theory, I provided both plots and text showing the distortions due to truncation (a sudden, abrupt start and stop) of the sampling process. The plots also show how the reconstruction gets more accurate as one gets further from those "end points."
 
I wrote the paper with the purpose of trying to help those who are not educated in DSP, and people who shy away from math. Some find it too complicated to comprehend, and others (such as yourself) take issue with the fact that I used a common, everyday description instead of quoting Nyquist in his own words. A more formal presentation would be much less comprehensible for most readers.
 
The theorem is perfect. I agree that the implementation of the Nyquist theorem is not absolutely perfect. Is anything perfect in the real world? But we can and do reach accuracies far beyond what is needed, such as fractions of parts per million! From an engineering standpoint, the theory is perfect and it teaches us what it takes to approach perfection. And the good news is that getting to 1 part per million inaccuracy does not take infinite time; it does not take a year, or even a second. It takes a few milliseconds.
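To put rough numbers on that (an illustrative filter design with assumed specs: passband to 20 kHz, one part per million of leakage above 22.05 kHz, at a 2x oversampled rate; not a design from the paper):

import numpy as np
from scipy import signal

fs = 88200.0                               # a 2x oversampled reconstruction rate
# Kaiser-window FIR: pass to 20 kHz, ~120 dB (1 ppm) down above 22.05 kHz
numtaps, beta = signal.kaiserord(ripple=120.0, width=2050.0 / (fs / 2))
numtaps += 1 - numtaps % 2                 # force odd length (type I linear phase)
taps = signal.firwin(numtaps, cutoff=21025.0, window=("kaiser", beta), fs=fs)

w, h = signal.freqz(taps, worN=1 << 15, fs=fs)
print(f"{numtaps} taps = {1e3 * numtaps / fs:.1f} ms of signal")
print(f"worst leakage above 22.05 kHz: {np.abs(h[w >= 22050.0]).max():.1e}")

The design comes out around 340 taps, i.e. a few milliseconds of samples, for roughly part-per-million stopband leakage.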
 
Dan Lavry
 
May 18, 2012 at 4:40 PM Post #25 of 32
I'm curious.
Do DACs in the real world also use sinc functions to convert the signal? Or was it only used as an explanatory tool in this paper?

Do you perhaps have a reference to a good book or paper that explains how the signal is reproduced by the DAC in a more technical way? Or is this very complicated, meaning I should pick up a book on signal processing and Fourier analysis?
 
May 18, 2012 at 6:39 PM Post #26 of 32
Quote:
I'm curious.
Do DACs in the real world also use sinc functions to convert the signal? Or was it only used as an explanatory tool in this paper?
Do you perhaps have a reference to a good book or paper that explains how the signal is reproduced by the DAC in a more technical way? Or is this very complicated, meaning I should pick up a book on signal processing and Fourier analysis?

I used the sinc function for much of my explanation of sampling because it represents an ideal theoretical filter. That presentation is closer to Nyquist's work, with the purpose of showing that it is possible to reconstruct signals with sinc functions (ideal filters). The sinc function is the ideal filter, and it is also the ideal interpolating function!
 
But further in the paper I talked about (and showed plots of) the "error signal," which is the difference between the original signal and the sampled signal. I pointed out that when sampling according to Nyquist (sampling at a rate above twice the bandwidth), the difference between the sampled and original signals (the error signal) is made up of frequencies above the Nyquist frequency (half the sample rate). So when you remove all the frequencies above Nyquist (frequencies above audio), you remove the difference between the original signal and the sampled signal, and that means no difference between the input and the outcome. Removing the high frequencies (above Nyquist) is done by filtering. One can use a practical filter to do just that.
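A small numerical sketch of that error-signal argument (an idealized toy: the sampled signal represented as a scaled impulse train on a fine grid):

import numpy as np

fs, f0, ovs = 2000, 999.0, 64
N = 2 * fs                                # 2 s, so everything lands on exact FFT bins
tf = np.arange(N * ovs) / (fs * ovs)      # the fine "analog" time grid
ref = np.sin(2 * np.pi * f0 * tf)         # the original, bandlimited signal

train = np.zeros(N * ovs)                 # the sampled signal as an impulse train
train[::ovs] = ovs * np.sin(2 * np.pi * f0 * np.arange(N) / fs)

E = np.abs(np.fft.rfft(train - ref))      # spectrum of the error signal
f = np.fft.rfftfreq(train.size, d=1.0 / (fs * ovs))
print(E[f <= fs / 2].max() / E.max())     # ~0: no error below Nyquist (1 kHz)
print(f[E > E.max() / 2][:4])             # the error: images at 1001, 2999, 3001, 4999 Hz

Filtering out everything above 1 kHz therefore removes the error exactly, which is the point of the argument.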
 
Imposing a sinc function on each individual sample actually shows graphically how the math works out with theoretical brickwall filters, which is good for proving the theory. That is the common presentation in DSP literature. I carried it further by pointing out that the errors reside at frequencies above Nyquist, thus filtering out those high frequencies (removing the error) "brings the original back." I showed some plots of how it looks before and after filtering. I included some examples of such filters, and later talked about filter issues, including oversampling.
 
I hope that clears it up.
 
The DSP books I know are written by Oppenheim, Rabiner, Schafer, Gold, and a few others. They do require math at an EE level. I don't know if you should pick up a book; it depends on your level of interest and your background. For the most part, DSP is a very math-based discipline. Also, different books come with different emphases. Sampling is not an intuitive process. I tried my best to simplify it. I also wrote some articles trying to simplify FIR and IIR filtering. It is difficult to make those subjects intuitive to nonprofessionals. It goes against their "common sense." So many people still think of sampling as analogous to pixels (for video), while this is just flat-out wrong.
 
Regards
Dan Lavry
 
May 18, 2012 at 11:49 PM Post #27 of 32
Quote:
The DSP books I know are written by Oppenheim, Rabiner, Schafer, Gold, and a few others. They do require math at an EE level. I don't know if you should pick up a book; it depends on your level of interest and your background. For the most part, DSP is a very math-based discipline. Also, different books come with different emphases. Sampling is not an intuitive process. I tried my best to simplify it. I also wrote some articles trying to simplify FIR and IIR filtering. It is difficult to make those subjects intuitive to nonprofessionals. It goes against their "common sense." So many people still think of sampling as analogous to pixels (for video), while this is just flat-out wrong.
 

 
At my school, at the technical training or diploma level, DSP is a subject that my friends constantly have problems with. It is a really tough subject without a basic background in engineering mathematics. Generally I would recommend that beginners pick up a book on calculus before they jump into reading DSP; at least that is how it works for me. Otherwise you need to find a streamlined textbook for EE.
 
May 19, 2012 at 4:26 AM Post #28 of 32
Quote:
The DSP books I know are written by Oppenheim, Rabiner, Schafer, Gold, and a few others. They do require math at an EE level. I don't know if you should pick up a book; it depends on your level of interest and your background. For the most part, DSP is a very math-based discipline. Also, different books come with different emphases. Sampling is not an intuitive process. I tried my best to simplify it. I also wrote some articles trying to simplify FIR and IIR filtering. It is difficult to make those subjects intuitive to nonprofessionals. It goes against their "common sense." So many people still think of sampling as analogous to pixels (for video), while this is just flat-out wrong.

Regards
Dan Lavry


Thanks for the explanation. I love math, which is exactly why I'm drawn to this.

I'll be starting as a first-year math student at university this September, so it may be better to just wait half a year and come back to this when I have more knowledge. I'll try to check out the books by the authors you suggested anyway, to see if I can follow what they are doing.
 
May 19, 2012 at 6:14 PM Post #30 of 32
Quote:
I'm curious.
Do DACs in the real world also use sinc functions to convert the signal? Or was it only used as an explanatory tool in this paper?
Do you perhaps have a reference to a good book or paper that explains how the signal is reproduced by the DAC in a more technical way? Or is this very complicated, meaning I should pick up a book on signal processing and Fourier analysis?

http://en.wikipedia.org/wiki/Delta-sigma_modulation#Digital_to_analog_conversion
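The linked article covers it properly; purely as a toy illustration (my sketch, not from the article), a first-order delta-sigma modulator fits in a few lines: a 1-bit stream whose short-time average tracks the input, with the quantization error pushed up in frequency:

import numpy as np

def delta_sigma_1bit(x):
    # first-order error feedback: quantize to one bit, carry the error forward
    out = np.empty_like(x)
    e = 0.0
    for i, v in enumerate(x):
        q = v + e                           # input plus accumulated error
        out[i] = 1.0 if q >= 0.0 else -1.0  # the 1-bit quantizer
        e = q - out[i]                      # quantization error, fed back
    return out

t = np.arange(8192)
x = 0.5 * np.sin(2 * np.pi * 8 * t / t.size)   # a slow tone, far below the bit rate
B = np.fft.rfft(delta_sigma_1bit(x))
B[200:] = 0.0                                  # crude brickwall well below the bit rate
print(np.max(np.abs(np.fft.irfft(B, t.size) - x)))  # small: the average tracks the tone

The residual is the shaped quantization noise that leaks into the band; real converters use higher-order modulators and much higher oversampling to push it below audibility.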
 
