24bit vs 16bit, the myth exploded!
Mar 30, 2015 at 1:55 PM Post #3,001 of 7,175
Sorry wrong thread!
 
Mar 30, 2015 at 2:23 PM Post #3,003 of 7,175
here we go again ^_^.
"IZ better so I must hear it". when your flute is recorded, each sample is still only one value, and what's recorded is the flute plus all the noises in the room. compared to the values we would get for the flute alone, that's a difference, and a much bigger one than anything 16 vs 24 bit does. still, you don't tell symphony orchestras to record their instruments one by one in an anechoic chamber, right? but if you really believed in your stuff you would demand it, because the "improvement" would be massive compared to 16 bit noise/error (call it what you like).
so if a human is perfectly OK with pretty loud noises unrelated to the music, why should he be concerned by a lower noise that has been dithered to be even less noticeable? when you record a band you still get only one measurement per sample, so is each instrument creating errors in the sound of the others? that's the logic you're using, efeuvete, when you say that quantization error changes the sound of one instrument.
just accept that we're dealing with sound waves and that we record the mess that is the total of all sounds. in that mess, 16 bit affects the sound only at about -96 dB on the record. that's a fact, and thinking it's bigger than that is just not understanding sound waves.
 
now does that noise matter? should we improve it? sure, why not, improvement is always OK. but first remove the noise in the studio from electrical equipment, the noises from the humans playing, the remastering process that changes the signal a good deal more than quantization noise does, then the noise of your sound system, which is not always below -96 dB (and even when the measurements say it is, those measurements are made at a specific output level to get the best reading), then the noise in your room, then the noise of your own body. if we could reduce all of those, then of course going to 24 bit would become cool. a sense of proportion helps in finding out what matters and what doesn't. a light rain isn't my priority when I'm swimming.
 
Mar 30, 2015 at 2:49 PM Post #3,004 of 7,175
  I insist: I'm not talking about discerning distortion or noise against clear sound, but about different wave forms. An E-flat fifth-octave note on a recorder and that very same note on a metal flute are just different; neither of them is more distorted than the other.

 
By distortion we just mean "content that was not in the original signal." When you round sampled values to 24- or 16-bit values, you are adding distortion to the signal (255.9728 becomes 256; there's a distortion). What matters is what this rounding "sounds like", and as I said above, it sounds either like white noise when your signal amplitude is high, or like harmonic distortion when your signal values are low. Dither gets rid of the harmonic distortion at low levels, but adds broadband noise. Noise shaping fixes this by moving that noise up to the high frequencies. This has all been worked out.
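To put a number on the scale of that rounding error, here is a minimal sketch (assuming only NumPy; the tone frequency and level are arbitrary illustrative choices):

```python
import numpy as np

# Quantize a sine to 16-bit steps by plain rounding and measure the
# error level. All numbers here are illustrative, not from any toolchain.
fs = 48000
t = np.arange(fs) / fs
signal = 0.5 * np.sin(2 * np.pi * 997 * t)    # -6 dBFS test tone

step = 1.0 / 32768                            # one 16-bit quantization step
quantized = np.round(signal / step) * step    # plain rounding, no dither
error = quantized - signal                    # "content not in the original"

rms_error_db = 20 * np.log10(np.sqrt(np.mean(error**2)))
print(f"RMS quantization error: {rms_error_db:.1f} dBFS")
```

The error comes out around a hundred dB below full scale, which is the scale of difference being argued about here.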
 
Mar 30, 2015 at 3:15 PM Post #3,005 of 7,175
  Bigshot..."no one has been able to hear a -110dB noise floor when listening to music at normal volume levels"...
It's you who talk about noise, not me. I'm talking about two different wave forms. I mean, is a male voice a distorted female voice, or vice versa?


We talk about noise because most 16 bit content is dithered, which removes the rounding error and replaces it with a constant, low-level noise floor. If you don't dither, there is no inherent noise floor, but you get quantization distortion instead (which will also be hugely below the signal and completely inaudible). It's covered pretty well in this video (the whole video is worth watching, but the part specifically addressing bit depth is around 8 minutes in): http://xiph.org/video/vid2.shtml
 
Mar 30, 2015 at 3:54 PM Post #3,006 of 7,175
I'm afraid I'm going to be a solitary voice on this forum (or at least, on this, my one and only thread), but I must tell you this:
The misunderstanding about my opinion seems to be related to the use of the word "noise", because in real life there are as many kinds of "noises" as there are sounds, musical or not. Please consider that you would surely be disappointed with a flute without its characteristic noise, because that noise gives the flute a good part of its personality, and of course it has nothing to do with white or pink noise.
I think all you say regarding "dither" and the processing of the "noise inherent to digitization" is perfect (and useful to my understanding of these things too), but I'm not talking about that noise (nor about any noise, for that matter); I'm talking about the difference between the original sound and the sampled sound. Whatever you do with sampled sounds, as sophisticated as it can be, you do it with... a sampled sound, which means, with a sound different from the original as your starting point; it can be no other way. And the more different, the lower the sampling frequency and the lower the bit depth (here comes "my" 15,6 ppm difference).
For instance, when RRod says "content that was not in the original signal", he forgets that you never know "the original signal", for the obvious reason that you can only know and manipulate a "sampled signal".
And please do not consider me some kind of sound maniac, because I'm more than happy with my 128 kbps MP3 music (which, by the way, I somehow found difficult to differentiate from 320 kbps MP3). All these ideas and tests came to me while trying to keep the best digitized sound for one or two musical pieces which are special to me.
 
Mar 30, 2015 at 4:03 PM Post #3,007 of 7,175
  I'm talking about the difference between the original sound and the sampled sound. Whatever you do with sampled sounds, as sophisticated as it can be, you do it with... a sampled sound, which means, with a sound different from the original as your starting point; it can be no other way. And the more different, the lower the sampling frequency and the lower the bit depth (here comes "my" 15,6 ppm difference).

 

Nyquist–Shannon sampling theorem <-- (Link)

 
Mar 30, 2015 at 4:15 PM Post #3,008 of 7,175
Yes, the sampled sound is technically different from the original. The part you're missing is that you can't hear the difference. Even if you could (you can't), it would be a moot point in practice, because the noise floor of your amp (probably) and the ambient noise in your house (definitely) are going to be louder than the noise floor introduced by dithering.
 
Mar 30, 2015 at 4:15 PM Post #3,009 of 7,175
Mar 30, 2015 at 4:28 PM Post #3,010 of 7,175
Yes, the sampled sound is technically different from the original. The part you're missing is that you can't hear the difference. Even if you could (you can't), it would be a moot point in practice, because the noise floor of your amp (probably) and the ambient noise in your house (definitely) are going to be louder than the noise floor introduced by dithering.


"...The part you're missing is that you can't hear the difference..."
So then you go back to the beginning; I mean, you go to the "Seemingly Universal Law" of "if you hear differences, you're bias dreaming".
Well, I don't follow that law any more than I follow any other opinion; mine, for instance.
(And please, I repeat, I'm not talking about noise, I'm talking about wave differences)
 
Mar 30, 2015 at 4:34 PM Post #3,011 of 7,175
  I'm afraid I'm going to be a solitary voice on this forum

 
Welcome to Sound Science! We operate on different principles here: we try to understand the acoustic and electrical principles, and we try to avoid flowery descriptions and subjective error.
 
  you go to the "Seemingly Universal Law" of "if you hear differences, you're bias dreaming"
Well, I don't follow that law any more than I follow any other opinion; mine, for instance.
(And please, I repeat, I'm not talking about noise, I'm talking about wave differences)

 
There are ways to prove whether a difference exists. You can analyze the waveform or do controlled listening tests. Both of those methods come to the same result: the differences between high-resolution formats and Red Book all lie outside the range of human hearing.
 
Mar 30, 2015 at 4:36 PM Post #3,012 of 7,175
I think all you say regarding "dither" and the processing of the "noise inherent to digitization" is perfect (and useful to my understanding of these things too), but I'm not talking about that noise (nor about any noise, for that matter); I'm talking about the difference between the original sound and the sampled sound. Whatever you do with sampled sounds, as sophisticated as it can be, you do it with... a sampled sound, which means, with a sound different from the original as your starting point; it can be no other way. And the more different, the lower the sampling frequency and the lower the bit depth (here comes "my" 15,6 ppm difference).

And what you are failing to notice is that the sampled signal can be viewed as the original signal with a couple of modifications. Specifically, the sampled signal is the original signal with:
 
1) Bandlimiting. This removes frequency content above ~20 kHz and is due to the sample rate alone. It has nothing to do with bit depth.
and
2) Quantization, with either:
   a) Dither. This adds a low-level, uncorrelated noise floor (very much like tape hiss). The result is the original signal plus a low-level hiss roughly a hundred dB below the main signal level (for 16 bit).
   or
   b) Quantization distortion. If dither isn't used, you do get those slight rounding errors you mentioned before. This is called quantization distortion, since it is caused by quantizing the signal level into discrete bins. It tells you exactly how much difference there is between the original signal and the quantized version. This is worse (from an audibility perspective) than dither, but even so, the level of distortion added is well below what has ever been shown to be audible (at least when 16 bits are used).
 
You don't have to sit here guessing about it - all of this has been extensively mathematically studied, quantified, and characterized. We don't need to wonder what it would sound like, nor do we have to wonder about how it will affect the signal.
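A small numeric sketch of the two quantization cases above (assuming only NumPy; the quiet 480 Hz tone and the `peak_to_mean` measure are illustrative choices, not anyone's standard tool):

```python
import numpy as np

# Quantize a very quiet tone with and without TPDF dither and compare
# how tonal the resulting error is. Names and numbers are illustrative.
rng = np.random.default_rng(0)
fs = 48000
n = np.arange(fs)
step = 1.0 / 32768                                       # one 16-bit step
signal = 1.5 * step * np.sin(2 * np.pi * 480 * n / fs)   # tone ~1.5 LSB peak

# (b) plain rounding: the error repeats with the tone -> harmonic distortion
plain_err = np.round(signal / step) * step - signal

# (a) TPDF dither (sum of two uniforms, +/-1 LSB peak) added before rounding:
# the error decorrelates from the tone and becomes noise-like
tpdf = (rng.uniform(-0.5, 0.5, fs) + rng.uniform(-0.5, 0.5, fs)) * step
dith_err = np.round((signal + tpdf) / step) * step - signal

def peak_to_mean(err):
    """Peakiness of the error spectrum: high = tonal, low = noise-like."""
    mag = np.abs(np.fft.rfft(err))
    return mag.max() / mag.mean()

ratio_plain = peak_to_mean(plain_err)   # large: energy piled on harmonics
ratio_dith = peak_to_mean(dith_err)     # small: energy spread as flat noise
print(ratio_plain, ratio_dith)
```

The triangular (TPDF) dither is the textbook choice here because it makes the error statistics independent of the signal, which is why the dithered error spectrum comes out flat.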
 
Mar 30, 2015 at 4:40 PM Post #3,013 of 7,175
  2) Quantization, with either:
   a) Dither. This adds a low-level, uncorrelated noise floor (very much like tape hiss). The result is the original signal plus a low-level hiss roughly a hundred dB below the main signal level (for 16 bit).
   or
   b) Quantization distortion. If dither isn't used, you do get those slight rounding errors you mentioned before. This is called quantization distortion, since it is caused by quantizing the signal level into discrete bins. It tells you exactly how much difference there is between the original signal and the quantized version. This is worse (from an audibility perspective) than dither, but even so, the level of distortion added is well below what has ever been shown to be audible (at least when 16 bits are used).

 
And if he is interested in how those things work, he can click through the top link in my sig file and get the straight dope and downloadable examples.
 
Mar 30, 2015 at 4:41 PM Post #3,014 of 7,175
"...The part you're missing is that you can't hear the difference..."
So then you go back to the beginning; I mean, you go to the "Seemingly Universal Law" of "if you hear differences, you're bias dreaming".
Well, I don't follow that law any more than I follow any other opinion; mine, for instance.
(And please, I repeat, I'm not talking about noise, I'm talking about wave differences)


I'm not invoking any universal law, and I'm less quick to call bias than most people here. We are talking about a specific sound you claim to hear, and it's physically impossible for you to do so. Imagine if I said I could hear the heartbeat of a person across the street: it's just too quiet, below the limits of human hearing.

The only wave differences are in the form of inaudibly quiet noise. You clearly don't understand the technical concepts here. Learn how the Nyquist theorem and the process of dither apply to your question.
 
Mar 30, 2015 at 4:46 PM Post #3,015 of 7,175
I do not like to be boring so, I think this one will be my last post on this thread:
 
"In the field of digital signal processing, the sampling theorem is a fundamental bridge between continuous signals (analog domain) and discrete signals (digital domain). Strictly speaking, it only applies to a class of mathematical functions whose Fourier transforms are zero outside of a finite region of frequencies"
(Nyquist–Shannon sampling theorem, Wikipedia; emphasis mine)
 
... And natural musical sounds do not satisfy that condition to the exactness the theorem requires.
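For reference, the bandlimited case the quote describes can be checked numerically: the anti-alias filter before the converter enforces exactly that condition, and once the captured signal is bandlimited, the Whittaker-Shannon formula recovers it between the sample points. A minimal sketch, assuming only NumPy, with arbitrary test frequencies below Nyquist:

```python
import numpy as np

# Reconstruct a bandlimited signal halfway between its samples using
# sinc interpolation and compare against the true values. The sample
# rate and test frequencies are illustrative.
fs = 100.0                        # sample rate in Hz, Nyquist = 50 Hz
n = np.arange(2000)

def x(t):                         # bandlimited test signal (13.7 + 31.1 Hz)
    return np.sin(2*np.pi*13.7*t) + 0.5*np.sin(2*np.pi*31.1*t)

samples = x(n / fs)

# Evaluate between samples, away from the edges of the finite window
t_eval = (np.arange(900, 1100) + 0.5) / fs
recon = np.array([np.sum(samples * np.sinc(fs*t - n)) for t in t_eval])

err = np.max(np.abs(recon - x(t_eval)))
print(err)    # small; limited only by the finite number of samples
```

The residual error here comes from truncating the infinite sinc sum to a finite window, not from sampling itself; with more samples it shrinks further.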
 
