Subjective Tests Indicate High-Resolution Audio Offers No Benefits
Mar 19, 2016 at 9:39 AM Post #16 of 42
   
It's got relatively little to do with paying attention to what one is doing. Many plugin processors available today up-sample, process and then down-sample again. Some plugins provide a selectable option to maintain the sample rate or up/re-sample; most do not. In many cases, exactly what's going on "under the hood" of a plugin is a trade secret and there's simply no way of knowing whether resampling is taking place.
 
G

Yup, and every once in a while I come across a CD release that is just completely ****ed up. These people need to pay the mastering guys to check their work out...
 
Mar 19, 2016 at 7:54 PM Post #17 of 42
Nope, Nyquist doesn't mention bit depth because Nyquist is a mathematical theorem that deals with exact mathematical functions, not inexact (bit-limited) observations. It says NOTHING about the precision of information obtained from a certain precision of measurement.
 
Take the transformation y=x^2: if you know x, then you know y "perfectly", right? Nope. If you know x=1 to within 1%, you know y=1 only to within about 2%. That's effectively a reduction in bit depth. G, you're right that Nyquist says nothing about it, just as most people wouldn't say anything about it for y=x^2 (certainly a theoretical mathematician wouldn't), but that doesn't mean it isn't an issue for actual measurements.
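To put a number on that, here's a quick sketch (my own illustration, not from the thread) of how a relative error in x propagates through y = x^2:

```python
# Relative error propagation through y = x^2:
# dy/y = 2 * dx/x, so a 1% uncertainty in x becomes roughly 2% in y.
def rel_error_in_y(x, rel_err_x):
    """Worst-case relative error of y = x**2 given a relative error in x."""
    y_nominal = x ** 2
    y_perturbed = (x * (1 + rel_err_x)) ** 2
    return abs(y_perturbed - y_nominal) / y_nominal

err = rel_error_in_y(1.0, 0.01)  # x = 1 known to 1%
# err comes out just over 2%: about one "bit" of precision lost
```

The factor of two is exact in the limit of small errors; the slight excess over 2% here is the second-order (1%)^2 term.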
 
Mar 19, 2016 at 8:18 PM Post #18 of 42
Now I didn't say that there definitely isn't perfect retention of 80 kBytes/second of information between 0 and 20kHz with 2-byte depth at 40kHz sampling.
In fact, averaged over all frequencies and time, **IF** the original signal was band-pass filtered, then clearly 100% of the bits contain information about the 0 to 20kHz range, because there is nothing else. If it wasn't band-pass filtered first, then it's a whole different story and, quite obviously, there is at the very least headroom lost. So the question is how the bits are distributed. I think that, averaged over enough cycles, they are probably distributed evenly over frequencies. I am CERTAIN that averaged over a few cycles this is not always the case, where "a few" approaches infinity as the frequency approaches, but stays just below, 20kHz. It's a minor exception, but enough to prove the point that one should be careful about overly absolute statements.
 
However, I've certainly never seen Nyquist's theorem state anything about this, and not stating it certainly doesn't imply it. But maybe, at least given enough cycles, it's true in the long run that the bits are distributed evenly, and probably short of 19.9kHz (even in the 40k example) we can't notice time intervals short enough to matter, nor really the loss of a bit or two, for that matter. That's all fine.
 
But the question is still there: why should resampling be a problem? It shouldn't be, and if it is, then something is wrong with our sampling and analog conversion. And if something is wrong at the level where resampling matters, then something is probably wrong at the level where bit depth also matters, not theoretically, but because of our inability to fully extract the information and render it correctly. Now, it probably doesn't matter at the level of the mistakes we make, and I'd be surprised if one round of mistakes doesn't matter but two (resampling) does. If things are really that close, then probably 16 bits isn't enough, but I'm not saying that either. There's something wrong with this picture, though, if good resampling once or twice or even 10 times (100 would be a different issue) is ever audible.
 
Mar 19, 2016 at 10:21 PM Post #19 of 42
Seems like we have fallen into "resampling is audible" a little too fast here.
 
Mar 19, 2016 at 11:17 PM Post #20 of 42
That may be true. I think it should not be audible, but that doesn't mean it isn't.
 
Still there are some interesting points here.
 
The fact is that a higher sampling rate DOES add precision of information. It effectively adds bit depth (I'm not saying that helps, or that we don't have enough bit depth), and this is a simple example of something Nyquist doesn't address.
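This "extra precision" claim does have a standard quantitative form for plain oversampling: fixed quantization noise gets spread over a wider band, so the in-band noise drops by 10·log10(OSR) dB, and each ~6.02 dB is worth one bit (from the usual SNR ≈ 6.02N + 1.76 dB rule). A rough sketch, my numbers rather than the thread's:

```python
import math

def effective_extra_bits(oversampling_ratio):
    """In-band resolution gained from plain oversampling (no noise shaping):
    SNR improves by 10*log10(OSR) dB; each ~6.02 dB equals one bit."""
    snr_gain_db = 10 * math.log10(oversampling_ratio)
    return snr_gain_db / 6.02

extra = effective_extra_bits(96000 / 48000)  # doubling the sample rate
# about half a bit of extra in-band resolution per doubling
```

So the effect is real but modest: doubling the rate buys roughly 0.5 bit in-band, absent noise shaping.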
 
The other fact is that neither representation is perfect. They are both limited by bit depth. So it's actually reasonable that information can be lost going back and forth.
 
However, if you know the phase of the up-sampling, even (especially) if you just do a simple re-sampling of the step function, you quite obviously haven't lost anything. And you can figure out that phase, too, by looking for identical repeated samples, with such pairs spaced at the difference frequency. It is then actually possible to reverse an upsampling perfectly. The problem is that playback algorithms aren't built on the assumption that the data was resampled from 44.1 to 48, obviously, as there wouldn't be much point in upsampling if they were. The whole reason to upsample is to put the data into a processing path that's designed for native 48kHz data.
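A toy sketch of that reversal (my own illustration; the 147:160 ratio is just 44100:48000 reduced, and the phase is taken as zero, i.e. known):

```python
def zoh_upsample(x, src_rate, dst_rate):
    """'Dumb' sample-and-hold upsampling: output sample n repeats the most
    recent input sample, so some input samples appear twice in the output."""
    n_out = len(x) * dst_rate // src_rate
    return [x[n * src_rate // dst_rate] for n in range(n_out)]

def zoh_reverse(y, up_rate, orig_rate):
    """Exact inverse, assuming the resampling phase is known (zero here):
    for each original index k, take the first upsampled sample holding x[k]."""
    n_in = len(y) * orig_rate // up_rate
    # (k * up_rate + orig_rate - 1) // orig_rate is ceil(k * up_rate / orig_rate)
    return [y[(k * up_rate + orig_rate - 1) // orig_rate] for k in range(n_in)]

x = list(range(147))                   # one "period" of the 147:160 pattern
y = zoh_upsample(x, 44100, 48000)      # 160 samples, some repeated
x_back = zoh_reverse(y, 48000, 44100)  # recovers x exactly
```

With the phase unknown, the repeated-sample pairs in `y` are exactly what would let you find it, as described above; without that clairvoyance a generic 48kHz playback path has no reason to look for them.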
 
Even so, if everything is done as perfectly as possible except for clairvoyant re-interpretation of the upsampled data (and "as perfectly as possible" without that doesn't mean the dumb upsampling I described, which ironically isn't dumb at all if you do use the clairvoyance), and if this really does cause audible problems, then it seems to me maybe higher bit depth should be used in the first place. I don't think that's the case, though. I think any audible problems can probably be removed just by better resampling algorithms, because 16 bits should already provide enough overhead for a couple of rounds of processing error.
 
It may turn out that such transparent algorithms aren't practical and don't exist in efficient forms. I agree that's not established, but IF it is, then it still means we should probably be using higher bit depth to compensate and guard against this (possibly even if no resampling is done).
 
 
 

 
Mar 19, 2016 at 11:33 PM Post #21 of 42
.. oh and subjective tests can of course never prove that a perceptual difference doesn't exist.  They can only say exactly what they say... that if it does exist, it's not identifiable, in the test material chosen, by the people tested, with more than x% accuracy. But I guess this is understood.
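That "x% accuracy" framing maps directly onto a one-sided binomial test; a sketch (the 14-of-16 trial count is made up for illustration):

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided binomial p-value: probability of getting at least `correct`
    right answers in `trials` forced-choice ABX trials by pure guessing (p=1/2)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

p = abx_p_value(14, 16)  # e.g. 14 of 16 correct
# p is about 0.002: very unlikely to be pure guessing
```

Note this only ever bounds the chance of the observed score under guessing; as the post says, a high p-value never proves the difference doesn't exist, only that this listener, on this material, didn't demonstrate it.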
 
I've participated in some ABX tests where the average result over many, many people "confirmed" the null hypothesis in this sense (an mp3 bitrate ABX), with a pretty small value of x at that, and yet where I personally was able to pass the ABX every time. I won't lie, I had the benefit of comments about what to listen for at specific instants in the pieces, but that doesn't change the point, in fact it only enhances it: without them maybe I would even have failed, and yet I COULD hear the difference. I've also heard effects where with some music I could never detect them, but with other music they were obvious.
 
It's very hard to prove what doesn't impact things, of course, but we can be pretty sure nothing is there if we're way past any effect that anyone has been able to make a positive ABX for, repeatably. Well, I'm open minded, but I'm not an alien conspiracy guy.
 
Mar 20, 2016 at 5:40 AM Post #22 of 42

Just to remind everyone that multiple positive ABX results have been posted on this forum of the difference between redbook & the upsampled version of the same file - see here

AFAIK, these positive ABX results have never been shown to be anything other than the audibility of the resampling process.
 
Mar 20, 2016 at 10:44 AM Post #24 of 42
Well, the question "are all resamplers inaudible?" is very different from "why would you use one that is audible after just one generation?" I mean, help me out here: what do you guys hear if you take 44.1 material and convert it to 48 using something like SoX's VHQ filter?
 
Mar 20, 2016 at 11:24 AM Post #25 of 42

One would hear a 44.1kHz recording in a 48kHz container. Resampling as it has been used so far in this thread is really down sampling, e.g. going from 96kHz to 48kHz or even 44.1kHz. Going from 44.1kHz to 48kHz would be up sampling and up sampling does nothing to improve the original's fidelity.
 
Mar 20, 2016 at 11:30 AM Post #26 of 42

 
44.1 to 48 was mentioned earlier, which is why I asked. But the same question holds for decimation and interpolation: what horrendous things are people hearing after one stage of resampling (or one extra stage to return to the original rate)?
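As a rough sanity check on that question, here is a toy round trip using my own deliberately crude linear-interpolation resampler (nothing like the windowed-sinc filters real tools such as SoX use), measuring the worst-case sample error after 44.1 → 48 → 44.1 kHz on a 1 kHz tone:

```python
import math

def resample_linear(x, src_rate, dst_rate):
    """Crude linear-interpolation resampler, for illustration only:
    each output sample is interpolated between the two nearest inputs."""
    n_out = int((len(x) - 1) * dst_rate / src_rate)
    out = []
    for n in range(n_out):
        pos = n * src_rate / dst_rate   # fractional position in the input
        i = int(pos)
        frac = pos - i
        out.append(x[i] * (1 - frac) + x[i + 1] * frac)
    return out

rate_a, rate_b = 44100, 48000
tone = [math.sin(2 * math.pi * 1000 * t / rate_a) for t in range(2000)]
round_trip = resample_linear(resample_linear(tone, rate_a, rate_b), rate_b, rate_a)
residual = max(abs(a - b) for a, b in zip(tone, round_trip))
# residual stays well under 1% of full scale even with this crude method
```

Even this naive resampler leaves a round-trip error under -40 dBFS on a mid-band tone; a proper windowed-sinc design pushes the error far lower still, which is why one clean generation of resampling is hard to hear.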
 
Mar 20, 2016 at 12:15 PM Post #27 of 42
   


I don't know since present day computers running up to date resampling programs don't leave audible traces.
 
Mar 21, 2016 at 8:08 AM Post #28 of 42
Nope, Nyquist doesn't mention bit depth because Nyquist is a mathematical theorem that deals with exact mathematical functions, not inexact (bit-limited) observations. It says NOTHING about the precision of information obtained from a certain precision of measurement.

 
Huh? That's exactly what the Nyquist/Shannon Theorem states. It states that 100% of the original information is retained, period. Bit depth does not come into this equation because bit depth does not have any effect. Whether it's one bit or a trillion bits, the Nyquist/Shannon Theorem is still true (provided, of course, one could actually process a trillion bits). There is no addendum to the Nyquist/Shannon Theorem which states that it's only true given exact (bit-unlimited) observations; you've just invented that addendum yourself! If you want to prove the Nyquist/Shannon Theorem is incorrect, be my guest, but just inventing irrelevant conditions isn't going to cut it!
 
G
 
Mar 21, 2016 at 9:47 AM Post #29 of 42
Originally Posted by gregorio
 
This is not only incorrect but pretty much the exact opposite of what the Nyquist-Shannon Theorem states! The Nyquist-Shannon Theorem does not mention bit depth because bit depth is irrelevant, the Theorem states that ALL information is retained and perfect reconstruction is possible provided the signal is bandlimited to at least half the sample rate. This is true at ANY bit depth, even with just 1 bit and the Nyquist-Shannon Theorem does not mention, imply and definitely does not require infinite bit depth! In other words, the whole point of the Nyquist-Shannon Theorem is to guarantee that perfect reconstruction is possible of a 19kHz signal with a 40kHz sampling rate (using 16 or any other number of bits)!

 
   
Huh? That's exactly what the Nyquist/Shannon Theorem states. It states that 100% of the original information is retained, period. Bit depth does not come into this equation because bit depth does not have any effect. Whether it's one bit or a trillion bits, the Nyquist/Shannon Theorem is still true (provided, of course, one could actually process a trillion bits). There is no addendum to the Nyquist/Shannon Theorem which states that it's only true given exact (bit-unlimited) observations; you've just invented that addendum yourself! If you want to prove the Nyquist/Shannon Theorem is incorrect, be my guest, but just inventing irrelevant conditions isn't going to cut it!

 
Gregorio,
I believe you have made an assumption somewhere that I have missed. If you can clear your cache of assumptions and take a few moments to think carefully about what you have written (the bold above), you'll see that, without sharing that assumption, we can only see the mistake. Not only does a review of the proofs of Nyquist-Shannon verify that we are talking about exact transform pairs x(t) <-> X(omega), with infinite resolution, but just apply common sense to what you have written: "100% of the original information is retained... Whether it's one bit or..." If quantisation error (noise) damages the information, you don't have 100% anymore. Were you assuming the data is already digitized? But Nyquist-Shannon deals with requirements before digitizing (sampling).
 
What were you assuming?
 
Mar 21, 2016 at 11:34 AM Post #30 of 42
Good to see your sensible input again S&M
I would add to your post the Whittaker–Shannon interpolation formula, which is the necessary adjunct to Nyquist-Shannon: it's the method for constructing a continuous-time bandlimited function from a sequence of real numbers, and it gives us the analog waveform.

This requires infinite time for complete accuracy when you look at the equations, but obviously approximations are acceptable as long as they remain below audibility.

Actual physical realisation of mathematical exactness is fraught with compromises!
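For reference, the standard form of the interpolation formula being discussed, with T = 1/f_s the sampling interval:

```latex
% Whittaker–Shannon interpolation: reconstructs the bandlimited signal
% x(t) from its samples x[n] taken every T seconds.
x(t) = \sum_{n=-\infty}^{\infty} x[n]\,
       \operatorname{sinc}\!\left(\frac{t - nT}{T}\right)
% The sum runs over all integers n, which is why exact reconstruction
% formally requires the complete (infinite) sample sequence; practical
% reconstruction truncates and windows it.
```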
 
