Is 192 kHz better or worse sounding than 96 kHz? Benchmark Media Systems answered.
Jul 30, 2016 at 11:31 AM Post #31 of 36
Isn’t “overly pedantic” redundant?
(Now I’m being overly pedantic and probably wrong) As long as we’re both having fun and useful info is transferred, I don’t mind - I like it. “Learning is fun”, my mom used to say… when I try that on my son, he literally punches me.
I’d have to look up the exact wording/equations, but I’m certain that rather than bandwidth, one needs the highest frequency, i.e. always start from zero. Let’s consider your example moved down into the audible range, so no pedant comes back with “inaudible to old men”. Let’s say 1600-2000Hz, inclusive. So you claim that we would need >800Hz, for which 1000Hz would work. Think it through: can I sample my 2000Hz pure tone at 1000Hz? No. I need >4000Hz. Is that clear? Perhaps you are thinking of the amount of data. Yes, after I have sampled correctly and done a Fourier transform, I can discard all the zeroes and make a much smaller file. True. But I have to sample at the higher rate first. I bet there is a way to create a special sampling process with a non-uniform sampling rate that achieves a lower average rate, but I assumed we are talking about a uniform sampling rate.
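If anyone wants to try that 2000Hz case for themselves, here is a rough numpy sketch (the rates, the 0.3 phase offset and the FFT-peak trick are just illustrative, not from any real converter): at a 1000Hz rate the tone aliases away completely, while a rate comfortably above 4000Hz captures it.

```python
import numpy as np

def apparent_frequency(fs, f, n_samples=4096):
    """Sample a sine at rate fs and report the frequency of the strongest FFT bin."""
    n = np.arange(n_samples)
    x = np.sin(2 * np.pi * f * n / fs + 0.3)        # arbitrary phase offset
    spectrum = np.abs(np.fft.rfft(x * np.hanning(n_samples)))
    return np.fft.rfftfreq(n_samples, d=1.0 / fs)[np.argmax(spectrum)]

print(apparent_frequency(1000.0, 2000.0))   # ~0Hz: the 2000Hz tone has aliased away
print(apparent_frequency(4400.0, 2000.0))   # ~2000Hz: captured unambiguously
```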
Yes, when I wrote “slowly turn it up and down”, beating, as in your equation, comes to mind. It would have been a better example to say “quickly”, such that the amplitude varies, not sinusoidally, but as a square wave. Turn it up for a second, then all the way down for a second, and repeat. You will have LOTS of very low frequency content. Even easier, I think, would be to just take some music you have and FT it. Don’t you see non-zero components all the way down to, but perhaps not including, the first value (DC)?

 
Well a bit of pedantry might sometimes be called for :wink:
 
What matters is that each frequency in the range has a unique representation. If you ONLY have frequencies in 1600-2000Hz, then at a rate of 800Hz all the frequencies DO in fact have unique representations (except 1600 and 2000, which alias, but this is the same for DC and 22050Hz at 44100Hz rate). If you wanted to analyze this via a Fourier transform, you would need to map 1600-2000 to 0-400Hz, but you can do this by f1 = f0 - 1600.
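Here is a quick numpy check of that idea (the particular tones are arbitrary, this is just a sketch): sampled at only 800Hz, each tone inside 1600-2000Hz lands on its own frequency below 400Hz and maps straight back via f0 = f1 + 1600.

```python
import numpy as np

fs = 800.0
n = np.arange(8192)
for f0 in (1650.0, 1750.0, 1950.0):               # all inside the 1600-2000Hz band
    x = np.sin(2 * np.pi * f0 * n / fs)
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(n))))
    f1 = np.fft.rfftfreq(len(n), d=1.0 / fs)[np.argmax(spectrum)]
    print(f0, "->", f1, "-> recovered:", f1 + 1600.0)
# 1650 -> 50Hz, 1750 -> 150Hz, 1950 -> 350Hz: each tone keeps a unique, recoverable identity.
```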
 
Take a 1kHz sine wave and apply, say, a 100Hz amplitude modulation, then highpass the result somewhere between 100Hz and 900Hz. The amplitude modulation won't suddenly disappear, because in terms of frequency it doesn't add a 100Hz sine, it spreads the energy of the 1kHz sine into 900Hz and 1100Hz. Those frequencies will indeed appear in the FFT after you do the modulation, but you won't see any energy at 100Hz (apart from the usual windowing leakage).
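A small numpy sketch of that modulation example, for anyone who wants to see the numbers (the 8kHz rate and one-second window are only chosen so the FFT bins fall on whole Hz):

```python
import numpy as np

fs = 8000.0
t = np.arange(int(fs)) / fs                        # one second, so FFT bins are 1Hz wide
x = np.sin(2 * np.pi * 1000 * t) * np.sin(2 * np.pi * 100 * t)   # 1kHz sine, 100Hz modulation
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)

for f in (100, 900, 1100):
    print(freqs[f], "Hz level:", round(spectrum[f], 1))
# 100Hz is essentially zero; 900Hz and 1100Hz each hold half the modulated energy.
```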
 
Jul 30, 2016 at 6:21 PM Post #32 of 36
   
Sure, that is what audiophilia is all about.  As a veteran of hundreds of bias-controlled listening tests, I can tell you that it is a fussy and boring business.  But comparing HD tracks to the same tracks downsampled to Red Book is an exception.  It is really easy to do and requires buying nothing.  It is more than religion.  It is stubbornness.

 
While I certainly can't hear any difference, I'm nearly positive that nobody else can reliably hear a difference either, otherwise there would be some definitive scientific proof.  It has been more than a decade and nothing has come about yet. If there is some difference, I'd have to say it is trivial to the general masses and completely insignificant to practically everyone.
 
Aug 1, 2016 at 7:32 AM Post #35 of 36
   
Last time I looked, this was an audio forum. Undersampling is irrelevant in this context.

 
It was a discussion about the exact meaning of the theory, and S&M seems interested in the details. Frankly, we spend too much time beating each other up about what "Nyquist" means in this subforum, when we're all pretty darn sure that most of these people out there hearing the "superiority" of hi-res aren't hearing less aliasing or ringing.
 
Aug 3, 2016 at 3:04 AM Post #36 of 36
B and D. Again, in the sampling/information theory context, it does not depend on the amount/speed. But in the bandwidth-limited case of audio, using current-technology ADCs, there is a speed/accuracy tradeoff. So for currently available ADCs and DACs, does 192kHz reduce accuracy to below 24 bits? If so, your point is good and current. If not, the amount/speed of data is not suffering from the speed/accuracy tradeoff.

 
I'm not quite sure what you mean by "reduce accuracy lower than 24 bits"? We can't achieve 24 bits at any commercial sample rate; in any real-world recording scenario, around 15 bits is the theoretical maximum. Notwithstanding that fact, 192/24 does reduce accuracy. At that speed/bit depth, reconstruction filters can't reduce alias images to less than about -80dB. Again, the Lavry paper explains the insurmountable speed/processing/bandwidth engineering issues. Benchmark and Digidesign/Avid have both independently made public statements along the same lines. Admittedly, 192kHz is unlikely to result in any audible deterioration of the signal (unless we consider the downstream possibility of ultrasonic content causing IMD) but as we're talking pedantic levels of accuracy here, 192kHz is well above the optimum and does reduce accuracy, and obviously 384kHz would be even worse. In practice, no educated pro sound engineer would choose to use 192kHz, unless instructed to do so by their clients (for marketing purposes).
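To illustrate that parenthetical IMD point, here's a small numpy sketch; the weak second-order nonlinearity is purely a toy model, not a measurement of any real converter or amplifier: two ultrasonic tones that the listener can't hear still produce an audible difference tone once something downstream misbehaves.

```python
import numpy as np

fs = 192000.0
t = np.arange(int(fs)) / fs                        # one second at 192kHz, 1Hz FFT bins
ultrasonic = np.sin(2 * np.pi * 30000 * t) + np.sin(2 * np.pi * 33000 * t)
output = ultrasonic + 0.01 * ultrasonic ** 2       # toy model: weak 2nd-order distortion

spectrum = np.abs(np.fft.rfft(output))
print("3kHz difference tone:", round(spectrum[3000], 1))    # clearly non-zero, in the audible band
print("30kHz input tone:    ", round(spectrum[30000], 1))
```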
 
C. I should not have used a 20kHz example, because of the question of audibility. You can hear 1kHz and it is not rare or extreme. But when you wrote “Two data points per audio frequency cycle allows for PERFECT reconstruction”, that means a 2kHz sampling rate would be sufficient, and it is not. By the way, you can say something about phase if you know the zero crossings, as I mentioned.
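Here is a tiny numpy sketch of why exactly two samples per cycle fails (the phases are arbitrary): at a 2kHz rate, a 1kHz sine can come out anywhere between full amplitude and dead silence, depending purely on its phase.

```python
import numpy as np

fs = 2000.0                                        # exactly two samples per 1kHz cycle
n = np.arange(64)
for phase in (0.0, np.pi / 4, np.pi / 2):
    x = np.sin(2 * np.pi * 1000 * n / fs + phase)
    print(round(phase, 3), "-> largest sample:", round(np.max(np.abs(x)), 3))
# phase 0    -> 0.0    (every sample lands on a zero crossing: the tone vanishes)
# phase pi/4 -> 0.707
# phase pi/2 -> 1.0
```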

 
But I don't know the zero crossings just by listening to a signal out of a speaker. Yes, we can talk about what will theoretically happen when trying to encode a 1kHz signal with a (theoretical) 2kHz sample rate but in practice, for any commercial audio material, 44.1kHz is the lowest sample rate a consumer will encounter. My original statement, "Two data points per audio frequency cycle", was a simplification; "two point two data points" (for example) would have been more exact. The point I was making is that 4, 8, 256 or an infinite number of data points, in order to capture more (or all) of the original waveform, does NOT provide MORE accuracy. At absolute best, more data points would produce exactly the same accuracy and, once beyond a certain amount/speed, more data can only reduce accuracy. Which is pretty much the exact opposite of what MatsP and Baxide stated.
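For anyone who wants to see that "extra data points add nothing" claim in numbers, here's a minimal numpy sketch; the 8kHz/48kHz rates are arbitrary stand-ins, and the tone is chosen to fit the one-second window exactly so there are no edge effects. Every 48kHz sample of a band-limited 1kHz tone can be reconstructed from its 8kHz samples, so the extra points carry no additional information or accuracy.

```python
import numpy as np

fs_low, fs_high, f = 8000, 48000, 1000
t_low = np.arange(fs_low) / fs_low                 # one second at 8kHz
t_high = np.arange(fs_high) / fs_high              # the same second at 48kHz
x_low = np.sin(2 * np.pi * f * t_low)
x_high = np.sin(2 * np.pi * f * t_high)            # the "extra" data points

# Ideal band-limited interpolation: zero-pad the spectrum of the 8kHz capture.
X = np.fft.rfft(x_low)
X_padded = np.zeros(fs_high // 2 + 1, dtype=complex)
X_padded[: len(X)] = X
x_rebuilt = np.fft.irfft(X_padded, n=fs_high) * (fs_high / fs_low)

print(np.max(np.abs(x_rebuilt - x_high)))          # effectively zero: only floating-point rounding
```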
 
G
 
