The Optimal Sample Rate for Quality Audio
May 8, 2012 at 5:51 PM Thread Starter Post #1 of 32

Lavry Tech

May 17, 2012 at 12:22 PM Post #4 of 32
Well Scud80,
Seeing as the forum is the "Science of Sound," it would be appropriate to back up your assertion with some factual evidence, as well as some mention of your qualifications.
 
Until then, it would only be logical to assume that you are the one who does not understand the Nyquist theorem.
 
 
May 17, 2012 at 1:06 PM Post #5 of 32
http://lavryengineering.com/pdfs/lavry-sampling-theory.pdf

I find it amusing that you think we can perceive 48kHz. 

I loved that paper.

Although the mathematics seemed a bit irrelevant at times, it was very interesting overall.


The Nyquist theorem should be enough evidence to support the claim that we don't need sample rates higher than 44.1kHz in terms of frequency extension.
But isn't it true that operating at 96kHz usually provides better performance than both 44.1kHz and 192kHz? At least, that is what can readily be noticed in the measurements of both DACs and DAC chips.
 
May 17, 2012 at 1:26 PM Post #6 of 32
Quote:
Well Scud80,
Seeing as the forum is the "Science of Sound," it would be appropriate to back up your assertion with some factual evidence, as well as some mention of your qualifications.
 
Until then, it would only be logical to assume that you are the one who does not understand the Nyquist theorem.
 

Most people believe that the analog signal can be reconstructed one sample at a time; for example, by just using a sample-and-hold at the output and then brickwall filtering it.
 
Unfortunately, perfect reconstruction requires knowledge of the stream both before and after the instant in question. If insufficient information is available, the absolute phase and amplitude may not reconstruct correctly.
 
The same basic problem occurs when doing an FFT: window size and type are important to the outcome.
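A minimal numpy sketch of the point (illustrative only; the rate, tone, and tap count are arbitrary choices): estimate the waveform at an instant between samples, first from the single nearest sample, then with a truncated-sinc interpolator that uses neighbors on both sides.

```
import numpy as np

fs = 8000.0                                  # sample rate (arbitrary for the demo)
f = 3500.0                                   # test tone near Nyquist (fs/2 = 4000 Hz)
n = np.arange(256)
x = np.sin(2 * np.pi * f * n / fs)           # the sampled stream

def sinc_interp(x, t, taps=32):
    """Estimate x at fractional index t from 'taps' neighbors per side."""
    k = np.arange(int(t) - taps, int(t) + taps + 1)
    k = k[(k >= 0) & (k < len(x))]
    return np.sum(x[k] * np.sinc(t - k))

t = 100.37                                   # an instant between two samples
true_val = np.sin(2 * np.pi * f * t / fs)    # what the analog signal does there
one_sample = x[int(t)]                       # "one sample at a time"
with_neighbors = sinc_interp(x, t)
print(f"true {true_val:+.4f}  one-sample {one_sample:+.4f}  sinc {with_neighbors:+.4f}")
# Near Nyquist the one-sample guess can be badly wrong in amplitude and
# phase; the estimate using samples before and after the instant is close.
```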
 
j
 
May 17, 2012 at 1:33 PM Post #7 of 32
Quote:
Most people believe that the analog signal can be reconstructed one sample at a time; for example, by just using a sample-and-hold at the output and then brickwall filtering it.

Unfortunately, perfect reconstruction requires knowledge of the stream both before and after the instant in question. If insufficient information is available, the absolute phase and amplitude may not reconstruct correctly.

The same basic problem occurs when doing an FFT: window size and type are important to the outcome.

j

Isn't that pretty much part of the Nyquist theorem, though?

One of the conditions for perfect reproduction is a record of infinite length. Of course this is not possible in real life, but an incredibly close approximation is possible with far fewer samples. I do not know how many samples that would be, though.
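Here is a rough numpy experiment along those lines (tone, rate, and test point are arbitrary choices): measure how the error of a plain truncated-sinc reconstruction shrinks as more surrounding samples are used.

```
import numpy as np

fs, f = 48000.0, 10000.0
n = np.arange(20001)
x = np.sin(2 * np.pi * f * n / fs)

def err_at_midpoint(taps):
    t = 10000.5                              # halfway between two samples
    k = np.arange(int(t) - taps, int(t) + taps + 1)
    est = np.sum(x[k] * np.sinc(t - k))      # truncated-sinc reconstruction
    return abs(est - np.sin(2 * np.pi * f * t / fs))

for taps in (4, 16, 64, 256):
    print(f"{taps:3d} samples per side -> error {20 * np.log10(err_at_midpoint(taps)):6.1f} dB")
# Plain truncation converges slowly; properly windowed/designed filters
# hit a given accuracy target with far fewer coefficients.
```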
 
May 17, 2012 at 1:46 PM Post #8 of 32
Quote:
Isn't that pretty much part of the Nyquist theorem, though?
One of the conditions for perfect reproduction is a record of infinite length. Of course this is not possible in real life, but an incredibly close approximation is possible with far fewer samples. I do not know how many samples that would be, though.

Yes.
 
If reconstruction occurs one sample at a time with no regard to past or future, then higher-than-Nyquist rates may be required to retain signal information.
 
If one were to sample a 1kHz sine at a 2ksps rate, the outcome is phase dependent: you might catch only the zero crossings, or you might catch only the peaks. If you reduce the measured sine to 999Hz, the digital stream will vary at a 1Hz rate, and the one-sample reconstruction will change amplitude from zero to full scale at a 1Hz rate. Brickwall filtering of that output will not recover from that modulation artifact.
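A quick numpy check of the 999Hz case (the 50ms block size is chosen just for display): the raw sample magnitudes sweep between zero and full scale at the 1Hz difference rate, even though the underlying sine has constant amplitude.

```
import numpy as np

fs, f = 2000.0, 999.0
n = np.arange(int(2 * fs))                   # two seconds of samples
x = np.sin(2 * np.pi * f * n / fs)

# Largest |sample| in each 50 ms block: it sweeps 0 -> 1 -> 0, which is
# the amplitude modulation a one-sample reconstruction would inherit.
blocks = np.abs(x).reshape(-1, 100)
print(np.round(blocks.max(axis=1)[:10], 2))
```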
 
j
 
May 17, 2012 at 1:49 PM Post #9 of 32
Just throwing this out here, but is the finite sample rate the cause of the high-frequency roll-off of most DACs?
Would these same DACs have a brickwall frequency response when using an infinite sampling rate? Or is the roll-off caused by analog parts?
 
May 17, 2012 at 2:02 PM Post #10 of 32
Quote:
Just throwing this out here, but is the finite sample rate the cause of the high-frequency roll-off of most DACs?
Would these same DACs have a brickwall frequency response when using an infinite sampling rate? Or is the roll-off caused by analog parts?

The brickwall gets rid of all the nasty high frequency stuff that is a consequence of the sampling/reconstruction.  Without it, there would probably be as much aliasing noise as music.
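A small numpy illustration of that "nasty high frequency stuff" (rates and tone are arbitrary choices): a crude sample-repeating hold puts images of a 1kHz tone at multiples of the sample rate plus or minus 1kHz, exactly what a brickwall filter below fs/2 is there to remove.

```
import numpy as np

fs, f, up = 8000, 1000, 8
n = np.arange(1024)
x = np.sin(2 * np.pi * f * n / fs)

zoh = np.repeat(x, up)                       # crude zero-order hold at 8x rate
spec = np.abs(np.fft.rfft(zoh))
freqs = np.fft.rfftfreq(len(zoh), d=1.0 / (fs * up))

# Strongest spectral lines: the 1 kHz tone plus images at k*fs +/- 1 kHz.
top = np.argsort(spec)[-6:]
print(np.sort(np.round(freqs[top]).astype(int)))   # [ 1000  7000  9000 15000 17000 23000]
```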
 
An infinite sampling rate is just another word for analog...
 
Perhaps lavrytech can elaborate for us on the tradeoffs between sampling rate, reconstruction word depth, aliasing, and brickwalls.
 
j
 
May 17, 2012 at 2:07 PM Post #11 of 32
Quote:
The brickwall gets rid of all the nasty high frequency stuff that is a consequence of the sampling/reconstruction.  Without it, there would probably be as much aliasing noise as music.

An infinite sampling rate is just another word for analog...

Perhaps lavrytech can elaborate for us on the tradeoffs between sampling rate, reconstruction word depth, aliasing, and brickwalls.

j

I didn't mean sample rate, I meant sample size. As in, how many samples the DAC is looking at in order to do its mathemagic and reconstruct the analog waveform.

Given that the Nyquist theorem assumes an infinite number of samples to reconstruct the waveform perfectly, I wonder what the effect would be of changing the number of samples at a constant sample rate.

The effects of word length are obvious, though: it defines the noise floor. At 16 bits this is about -96dBFS; at 24 bits it's about -144dBFS (which is impossible to achieve in the real world). Or did you mean something else by 'reconstruction word depth'?
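For reference, the arithmetic behind those figures (a worked check, nothing more): each bit of word length is worth about 6.02dB, and a full-scale sine gains another 1.76dB of ideal SNR over the round 96/144dB numbers.

```
import math

for bits in (16, 24):
    floor_db = 20 * math.log10(2.0 ** -bits)    # ~ -6.02 dB per bit
    sine_snr = 6.02 * bits + 1.76               # ideal SNR for a full-scale sine
    print(f"{bits}-bit: quantization floor {floor_db:.1f} dBFS, ideal sine SNR {sine_snr:.1f} dB")
# 16-bit: -96.3 dBFS / 98.1 dB; 24-bit: -144.5 dBFS / 146.2 dB.
```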
 
May 17, 2012 at 4:39 PM Post #12 of 32
Quote:
The brickwall gets rid of all the nasty high frequency stuff that is a consequence of the sampling/reconstruction.  Without it, there would probably be as much aliasing noise as music.
 
An infinite sampling rate is just another word for analog...
 
Perhaps lavrytech can elaborate for us on the tradeoffs between sampling rate, reconstruction word depth, aliasing, and brickwalls.
 
j


I contacted Dan and asked him to take a look at the ongoing discussion. He is busy with new designs, but took a minute to address the question of how many samples are required for good results:
 
"At 44.1KHz, 1 second contains 44100 samples, so .1sec contains 4410 samples. There are 441 samples in 10 milliseconds.
At 96KHz, there are 960 samples in 10 milliseconds…
 
Clearly to reach perfection we need infinite time. But to stay practical one needs to define an acceptable performance goal. The answer depends on what one may want a filter to do. One may wish for good enough characteristics to satisfy say 24 bits accuracy (144db which is around 0.00000596%). One may need say 120dB (around 20 bits thus 0.0001%), or what not.
 
There are as many answers as there are possible filter designs. But in all cases, a good DSP designer can accomplish even a very demanding task with a few hundred samples, or even less.
 
Dan Lavry"
 
May 17, 2012 at 5:19 PM Post #13 of 32
I have the qualification of being able to read. From the top of the sampling theory whitepaper: "Nyquist Sampling Theory: A sampled waveforms contains ALL the information without any distortions, when the sampling rate exceeds twice the highest frequency contained by the sampled waveform." This is simply incorrect, as mentioned in a couple of other posts. I'm not going to argue that most of a signal can be reproduced with a reasonable number of samples, but saying that all of it is reproduced (which is repeated, again in caps, later on the same page) is misleading.
Quote:
Well Scud80,
Seeing as the forum is the "Science of Sound," it would be appropriate to back up your assertion with some factual evidence, as well as some mention of your qualifications.
 
Until then, it would only be logical to assume that you are the one who does not understand the Nyquist theorem.
 

 
May 17, 2012 at 10:51 PM Post #14 of 32
Quote:
Just throwing this out here, but is the finite sample rate the cause of the high-frequency roll-off of most DACs?
Would these same DACs have a brickwall frequency response when using an infinite sampling rate? Or is the roll-off caused by analog parts?

 
The roll-off you see in both amps and DACs is the result of filtering. Letting ultrasonics through into the DAC or amp causes increased distortion and noise, so filtering them out is standard design practice.
Quote:
Isn't that pretty much part of the Nyquist theorem, though?
One of the conditions for perfect reproduction is a record of infinite length. Of course this is not possible in real life, but an incredibly close approximation is possible with far fewer samples. I do not know how many samples that would be, though.

I have never tried 192kHz, but 96kHz does indeed give noticeably better measurements than 44.1kHz (on graphs, and for me audibly, due to my DAC), notably in terms of THD and IMD. I think 96kHz is the optimal sampling frequency; at 192kHz, physics kicks the DACs in their balls, giving worse measurements. Of course, in audio the pseudo-scientific "more is better!" mentality kicks in together with other factors, giving the misconception that 192kHz is audibly better, when people might just be hearing increased distortion or expectation bias.  
 
May 17, 2012 at 11:00 PM Post #15 of 32
I would be pretty surprised if any human can actually hear a difference between 96 and 192kHz sampling rates. Even if somebody can, I doubt the difference is worth doubling the storage and processing requirements (or more; I'm not really familiar with whether decoding/processing cost scales linearly with sample rate).
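For scale, a back-of-envelope storage calculation (assuming raw stereo 24-bit PCM; compression would change the absolute numbers but not the ratio):

```
# channels * bytes per sample * samples per second * seconds per minute
for rate in (44100, 96000, 192000):
    mb_per_min = 2 * 3 * rate * 60 / 1e6
    print(f"{rate} Hz: {mb_per_min:.0f} MB/min of raw PCM")
# 44100 Hz: 16 MB/min, 96000 Hz: 35 MB/min, 192000 Hz: 69 MB/min.
```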
 
