24bit vs 16bit, the myth exploded!
Jul 8, 2018 at 6:08 PM Post #4,891 of 7,175
Thanks all, got it now I think. My problem was that I was still looking at amplitude sampling as being recorded and reproduced as some fixed temporal event (sigh, stairsteps I guess, even though I thought I knew that wasn't the case) when, just as in the frequency domain, this isn't actually the case. The actual process is non-intuitive and difficult to visualize without a clear explanation, and as a result is probably understood by only a tiny fraction of those professing opinions about 'hi-res' audio. This is a truly great thread, thanks.

My next question: why does copper wire sound warmer than silver? :wink:
 
Jul 8, 2018 at 6:22 PM Post #4,892 of 7,175
Thanks all, got it now I think. My problem was that I was still looking at amplitude sampling as being recorded and reproduced as some fixed temporal event (sigh, stairsteps I guess, even though I thought I knew that wasn't the case) when, just as in the frequency domain, this isn't actually the case. The actual process is non-intuitive and difficult to visualize without a clear explanation, and as a result is probably understood by only a tiny fraction of those professing opinions about 'hi-res' audio. This is a truly great thread, thanks.

My next question: why does copper wire sound warmer than silver? :wink:

Sure does - if you add a couple of dB between 200-300Hz at a low Q and gently roll off the top, in the master!
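
Joking aside, that master-bus recipe is easy to sketch. A minimal example (peaking-filter coefficients from the RBJ Audio EQ Cookbook; the Q of 0.7 and the 10 kHz roll-off corner are my illustrative choices, not from the post):

```python
# A rough sketch of the "copper sound" EQ described above: a gentle
# +2 dB low-Q peak around 250 Hz plus a soft top-end roll-off.
import numpy as np
from scipy.signal import lfilter, butter

def peaking_eq(fs, f0, gain_db, q):
    """RBJ cookbook peaking EQ biquad coefficients (b, a)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
x = np.random.randn(fs)                          # one second of test noise
b, a = peaking_eq(fs, f0=250, gain_db=2.0, q=0.7)  # broad low-Q "warmth" bump
warm = lfilter(b, a, x)
b_lp, a_lp = butter(1, 10000 / (fs / 2))         # gentle first-order roll-off
warm = lfilter(b_lp, a_lp, warm)
```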
 
Last edited:
Jul 9, 2018 at 6:09 AM Post #4,893 of 7,175
[1] The 'perfect waveform' is the part I'm missing, at least in the amplitude domain... Thanks all, got it now I think. My problem was that I was still looking at amplitude sampling as being recorded and reproduced as some fixed temporal event (sigh, stairsteps I guess, even though I thought I knew that wasn't the case) when, just as in the frequency domain, this isn't actually the case.
[2] The actual process is non-intuitive and difficult to visualize without a clear explanation, and as a result is probably understood by only a tiny fraction of those professing opinions about 'hi-res' audio.

1. Maybe the difficulty you were/are having is that you were trying to separate the issue into two different "domains" (amplitude and frequency)? While it's sometimes useful to do this for the sake of "visualization", in reality they are not separate/different domains, they're exactly the same thing. A sine wave (for example) is effectively defined as: an increasing amplitude until a "peak" is reached, then a decreasing amplitude until the "trough" is reached, and then an increasing amplitude again until the starting point is reached. We call this a "cycle" and frequency is simply the number of cycles per second. In other words, frequency = amplitude (over time).

2. As I mentioned in my first response to you, it's all a question of getting your head around it (or "visualizing" it, as you put it) and that requires an explanation which works for you personally. I'm not sure that "the actual process is non-intuitive", I think it depends on the "visualization" you have to start with. For example, if you take someone who's never thought about how digital audio works, who's effectively a "blank canvas" with no preconceived "visualization", then I don't think the process is counter-intuitive, but it's more difficult and more counter-intuitive if you do have preconceived notions of how it works (the "stairstep" notion for instance).

Many audiophiles, for example, seem to have the notion that digital audio is effectively analogue audio but with digital data, i.e. analogue audio creates an "analogy" of actual sound waves using an electrical current and digital audio creates an "analogy" of actual sound waves using digital data. This view/"visualization" is incorrect and leads to a bunch of further incorrect assumptions, such as: more data (bits or sample points) results in digital audio data which is a closer/higher-resolution "analogy" of the actual sound waves. In reality though, digital audio is effectively just a sequence of data points which allows the sound waves to be "reconstructed" through the application of some mathematical processes; digital audio data is NOT analogous to the sound waves.
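
To make that last point concrete, here's a minimal sketch of such a "mathematical process": Whittaker-Shannon (sinc) reconstruction, which rebuilds the one band-limited waveform passing through every stored sample - no stairsteps involved. The 1 kHz tone and 8 kHz sample rate are illustrative values:

```python
import numpy as np

fs = 8000                                  # sample rate
n = np.arange(32)
x = np.sin(2 * np.pi * 1000 * n / fs)      # the stored sample values

t = np.linspace(0, 31 / fs, 1000)          # a fine grid "between" the samples
# Each sample contributes one scaled, shifted sinc pulse; their sum is the
# unique band-limited waveform that passes through every sample point.
recon = sum(x[k] * np.sinc(fs * t - k) for k in n)
```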

[1] I believe, and I'm no expert here so others may chime in, that dither just adds white noise to the signal, which decorrelates the amplitude errors.
[1a] So the errors are still there, but they're now randomised at the cost of a higher noise floor, which sounds like tape hiss ...

1. You're not exactly wrong, but not exactly correct either. The way you appear to be looking at dither leads to some incorrect conclusions/assumptions. Instead of thinking about dither in terms of actual white noise, try thinking more in terms of what it actually is: a mathematical function. Standard dither is usually abbreviated to TPDF, a Triangular Probability Density Function, which is a rather off-putting term for the layman but can be thought of as a sort of mathematical equation which randomises errors, the end result effectively being that ALL the error is converted into white noise.
1a. By looking at dither this way, you can hopefully see that your statement is incorrect: firstly, the errors are not "still there", the errors are completely gone, they've been converted into white noise, and secondly, the "noise" is not higher, it's the same. The difference is that with dither we end up with a constant, low-level amount of white noise, while without dither we end up with a non-constant amount of signal distortion, but in both cases we end up with the same overall "amount". This is of course the logical conclusion using this view, as all we're doing is converting the error into white noise. ... Dither is a prerequisite of digital audio; without it the conditions required to achieve the "Sampling Theorem" cannot be met, in much the same way that not applying an anti-alias filter at half the sampling rate fails to meet the required conditions. Dither is therefore always automatically applied during the quantisation process, as is an anti-alias filter; both are intrinsic to the process of digital audio. In other words, dither does not raise the noise floor, it's what defines the noise floor in the first place!
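
You can see this numerically with a minimal sketch (the 8bit depth and quiet 500 Hz tone are illustrative choices, picked so the effect is obvious): quantise the tone with and without TPDF dither and compare the error signals.

```python
import numpy as np

fs, bits = 48000, 8
lsb = 2.0 / (2 ** bits)                    # step size for a +/-1.0 signal
t = np.arange(fs) / fs
x = 0.01 * np.sin(2 * np.pi * 500 * t)     # quiet 500 Hz tone

plain = np.round(x / lsb) * lsb            # quantisation only
tpdf = (np.random.rand(fs) - np.random.rand(fs)) * lsb  # triangular PDF, +/-1 LSB
dithered = np.round((x + tpdf) / lsb) * lsb

err_plain = plain - x     # FFT shows spikes harmonically related to the signal
err_dither = dithered - x # spectrally flat noise, uncorrelated with the signal
```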

We've also now got what's called noise-shaped dither, which became commercially available in the early 1990s in response to the growing requirement for "re-quantisation". Re-quantisation became a requirement when high-end digital recording and mixing moved beyond 16bit (initially to 20bit) and the result therefore needed to be re-quantised down to 16bit for consumer distribution. Without another round of dither, the re-quantisation process would introduce "truncation error", which is effectively a slightly more severe form of quantisation error. Noise-shaped dither was introduced to effectively maintain the 20bit dynamic range but in a 16bit file format. As pinnahertz effectively stated, our resultant noise is "shaped", it's no longer "white": it's concentrated in areas where our hearing is least sensitive and is therefore inaudible.
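
A first-order sketch of the idea, assuming simple error feedback (commercial noise shapers use higher-order, psychoacoustically weighted filters, so this only shows the principle):

```python
import numpy as np

def requantise_noise_shaped(x, lsb):
    """First-order error-feedback requantiser: each sample's quantisation
    error is subtracted from the next input sample, which tilts the noise
    spectrum upwards in frequency, away from where hearing is most
    sensitive."""
    y = np.empty_like(x)
    e = 0.0
    for i, s in enumerate(x):
        v = s - e                                  # feed back previous error
        tpdf = (np.random.rand() - np.random.rand()) * lsb
        y[i] = np.round((v + tpdf) / lsb) * lsb    # dithered requantisation
        e = y[i] - v                               # error for the next sample
    return y

# e.g. a higher-resolution mix requantised to 16 bits:
x = 0.1 * np.sin(2 * np.pi * 1000 * np.arange(48000) / 48000)
y16 = requantise_noise_shaped(x, lsb=2.0 / 2 ** 16)
```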

BTW, all the above is not exactly correct or incorrect either! It's just another way of looking at the issue, a way which avoids some incorrect conclusions/assumptions.

[1] Hence better perceptual sound quality = perceptual coding of a kind, even if totally different from the perceptual coding methods developed for music/high-quality audio.
[2] You're splitting hairs here.

1. Throughout your argument with pinnahertz you seem to have missed the fact that "perceptual coding" has a specific and well defined meaning. You are incorrectly equating better perceptual sound quality with perceptual coding. "Perceptual coding" at least partially relies on "auditory masking" and reducing the amount of data by removing masked frequencies. On the other hand, "Better perceptual sound quality" can be achieved in numerous ways which do not rely on or even directly involve "auditory masking". A simple EQ or filter, noise reduction, compression, expansion or other processes can all produce better perceptual sound quality but are NOT "perceptual coding".

2. No, he wasn't. It's an important distinction with wide-ranging ramifications. Plus, if we're going to accept your definition of "perceptual coding", what new/different term are we going to use for actual perceptual coding? It seems to be a bit of a trend: someone makes an incorrect statement of fact and, when called out on it, responds with "you're splitting hairs", "just because I can't find the right words doesn't mean I'm wrong" or "you just like to argue for argument's sake". I'm not sure if this type of response is simply an attempt to deflect or minimise the fact they've been caught making up (or recounting) incorrect facts, or whether it's because they really don't understand "science", or enough about "sound", to appreciate why made-up, incorrect statements of fact matter so much.

G
 
Last edited:
Jul 9, 2018 at 6:59 AM Post #4,895 of 7,175
If you were asking how we get perfect input without the noise (maybe you did?), then that is not possible, as far as I know.

Well, to have "perfect" input would mean the ADC would have to have an infinite bandwidth with a virtual ground that is finite, so that the signal-to-noise ratio would be infinite and all harmonics are captured. That is your "perfect" signal recording.

To have your perfect DAC would require an infinite slew rate, thus reproducing the signal and its harmonics as recorded, and the final line stage should sustain this high slew rate while being able to drive the transmission line without voltage reduction (due to current fold-over, common in semiconductors).
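
For scale, a quick worked number (a minimal sketch; the 2 Vrms full-scale line level is my assumption, not from the post): the steepest slope a band-limited audio signal can contain comes from its highest-frequency sinusoidal component, whose maximum slew rate is 2*pi*f*A.

```python
import math

f = 20_000                        # highest audible frequency, Hz
a_peak = 2.0 * math.sqrt(2)       # assumed 2 Vrms full-scale sine, volts peak
slew = 2 * math.pi * f * a_peak   # max dV/dt of a sine = 2*pi*f*A
print(f"{slew / 1e6:.2f} V/us")   # ~0.36 V/us - modest for modern op-amps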
 
Jul 9, 2018 at 8:09 AM Post #4,896 of 7,175
Well, to have "perfect" input would mean the ADC would have to have an infinite bandwidth ...

I presume you have some proof that the Nyquist/Shannon Theorem is incorrect? Or, are you saying that musical instruments and sounds produce harmonics of an infinite bandwidth and that microphones and mic pre-amps have an infinite bandwidth? If so, again, some proof or reliable evidence please. This is the sound science forum and if you are going to make claims which contradict the known science, then you MUST provide reliable evidence! Same for your claims of slew rate.

G
 
Jul 9, 2018 at 8:30 AM Post #4,897 of 7,175
1. Throughout your argument with pinnahertz you seem to have missed the fact that "perceptual coding" has a specific and well defined meaning. You are incorrectly equating better perceptual sound quality with perceptual coding. "Perceptual coding" at least partially relies on "auditory masking" and reducing the amount of data by removing masked frequencies. On the other hand, "Better perceptual sound quality" can be achieved in numerous ways which do not rely on or even directly involve "auditory masking". A simple EQ or filter, noise reduction, compression, expansion or other processes can all produce better perceptual sound quality but are NOT "perceptual coding".

2. No, he wasn't. It's an important distinction with wide-ranging ramifications. Plus, if we're going to accept your definition of "perceptual coding", what new/different term are we going to use for actual perceptual coding? It seems to be a bit of a trend: someone makes an incorrect statement of fact and, when called out on it, responds with "you're splitting hairs", "just because I can't find the right words doesn't mean I'm wrong" or "you just like to argue for argument's sake". I'm not sure if this type of response is simply an attempt to deflect or minimise the fact they've been caught making up (or recounting) incorrect facts, or whether it's because they really don't understand "science", or enough about "sound", to appreciate why made-up, incorrect statements of fact matter so much.

G

µ-law/A-law allows coding a larger dynamic range in just 8 bits (data reduction) using auditory masking (loud sounds mask quieter sounds). Wikipedia calls it an early perceptual coding method, so take your anger to them.
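
For reference, the µ-law curve itself is just a memoryless input/output law applied sample by sample. A minimal sketch of the continuous form (the G.711 standard actually uses a piecewise-linear approximation of this curve):

```python
import numpy as np

MU = 255.0  # mu-law constant used in North American/Japanese telephony

def mu_compress(x):
    """Continuous mu-law compressor for x in [-1, 1]."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mu_expand(y):
    """The matching expander (inverse of mu_compress)."""
    return np.sign(y) * ((1.0 + MU) ** np.abs(y) - 1.0) / MU

x = np.linspace(-1.0, 1.0, 11)
coded = np.round(mu_compress(x) * 127) / 127  # quantise to ~8 bits
decoded = mu_expand(coded)                    # small signals keep fine steps
```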
 
Jul 9, 2018 at 9:21 AM Post #4,898 of 7,175
µ-law/A-law allows coding a larger dynamic range in just 8 bits (data reduction) using auditory masking (loud sounds mask quieter sounds).

No, it does not use "auditory masking" (either frequency masking or temporal masking), it just uses compression/expansion. It's nowhere near as sophisticated as perceptual coding, which analyses about 1000 samples at a time, divides the frequency spectrum into 500 or so frequency bands, compares the results with a psycho-acoustic model and discards those bands which fall under the masking threshold. This is all completely different to just simple audio compression, there is no audio compression in perceptual coding!
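
To make the distinction concrete, here is a deliberately crude toy of that analyse-and-discard step (a real codec uses a proper psychoacoustic model with spreading functions and tonality estimates; the single global 40 dB threshold below is purely illustrative):

```python
import numpy as np

def toy_perceptual_prune(block, keep_db=40.0):
    """Cartoon of perceptual coding's analysis step: transform a block to
    the frequency domain, then zero any bin more than keep_db below the
    strongest bin. A stand-in for a real psychoacoustic model, which
    computes per-band masking thresholds rather than one global cutoff."""
    spec = np.fft.rfft(block * np.hanning(len(block)))
    mag_db = 20 * np.log10(np.abs(spec) + 1e-12)
    keep = mag_db > (mag_db.max() - keep_db)  # crude "masking threshold"
    return spec * keep                        # discarded bins cost no bits

block = np.random.randn(1152)  # roughly the ~1000-sample block size above
pruned = toy_perceptual_prune(block)
```

Note there is no dynamic compression anywhere in that process - bins are either kept intact or discarded.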

G
 
Jul 9, 2018 at 9:38 AM Post #4,899 of 7,175
No, it does not use "auditory masking" (either frequency masking or temporal masking), it just uses compression/expansion. It's nowhere near as sophisticated as perceptual coding, which analyses about 1000 samples at a time, divides the frequency spectrum into 500 or so frequency bands, compares the results with a psycho-acoustic model and discards those bands which fall under the masking threshold. This is all completely different to just simple audio compression, there is no audio compression in perceptual coding!

G

Gregorio wrote: "there is no audio compression in perceptual coding!"

Finally, something we BOTH have been trying to hammer through thick skulls: Dynamic and lossy data compression are two different frickn' things! lol!

I personally use the term 'data reduction' when referring to lossy/lossless codecs. MP3, for example = lossy data reduction, NOT 'lossy compression'. It prevents loads of confusion, of which there is plenty in this thing we love called digital audio!
 
Last edited:
Jul 9, 2018 at 11:37 AM Post #4,900 of 7,175
I presume you have some proof that the Nyquist/Shannon Theorem is incorrect? Or, are you saying that musical instruments and sounds produce harmonics of an infinite bandwidth and that microphones and mic pre-amps have an infinite bandwidth? If so, again, some proof or reliable evidence please. This is the sound science forum and if you are going to make claims which contradict the known science, then you MUST provide reliable evidence! Same for your claims of slew rate.

G
First off, Nyquist has nothing to do with the actual physical workings of a musical instrument, because I can change the strings on my guitar from D'Addario Silk & Steel to nylon or Martin steel strings and get a different set of harmonics from the instrument. Sampling rates for digital audio never coincided with any real analogue standard but their own, and Nyquist never included all of the harmonics either.

ADCs don't have linear signal-to-noise ratios either, and bandwidth differs depending on how the ADC IC is assembled in the circuit. It's a shame the IC manufacturers don't really publish a chart of Vref impedance against signal-to-noise ratio and bandwidth, otherwise I would post one. But all of what I said is common engineering knowledge about what happens there and which variables affect the track-and-hold circuit and its settling time. About slew rate with DACs: this is why high-speed op-amps are integrated into the DAC IC, to overcome the old design issues of the I-to-V RC network (which generates a loss before amplification).

I will, just for you, look at Google and see what pops up:

Look at page 18: http://www.delftek.com/wp-content/uploads/2012/04/National_ABCs_of_ADCs.pdf
Here is a glossary of terms that apply to ADCs and DACs: https://www.maximintegrated.com/en/app-notes/index.mvp/id/641
 
Jul 9, 2018 at 12:20 PM Post #4,901 of 7,175
Well, to have "perfect" input would mean the ADC would have to have an infinite bandwidth with a virtual ground that is finite, so that the signal-to-noise ratio would be infinite and all harmonics are captured. That is your "perfect" signal recording.

Perfect means everything humans can hear, perfectly reproduced. Bats and dogs might need a different kind of DAC than we do.

Inaudible harmonics are inaudible.
 
Last edited:
Jul 9, 2018 at 1:13 PM Post #4,902 of 7,175
[1] First off, Nyquist has nothing to do with the actual physical workings of a musical instrument, because I can change the strings on my guitar from D'Addario Silk & Steel to nylon or Martin steel strings and get a different set of harmonics from the instrument. Sampling rates for digital audio never coincided with any real analogue standard but their own, and Nyquist never included all of the harmonics either.
[2] ADCs don't have linear signal-to-noise ratios either ... [2a] and bandwidth differs depending on how the ADC IC is assembled in the circuit.

1. Correct, the Nyquist/Shannon Theorem has nothing to do with physical musical instruments or any particular sounds/harmonics; it covers ALL instruments/sounds/harmonics. So, are you disputing the Nyquist/Shannon Theorem or not? If not, then you must be saying that musical instruments produce an infinite number of harmonics; please provide some supporting evidence for that claim.

2. What has that got to do with anything?
2a. Not to any significant degree; it depends on the characteristics of the anti-alias filters and, of course, the Nyquist point of a particular sample rate. But again, what has this got to do with your assertion that ADCs have to have infinite bandwidth?

G
 
Jul 9, 2018 at 2:07 PM Post #4,903 of 7,175
Harmonic energy within the Nyquist limit should be recorded and reproduced accurately, and harmonic energy above the limit (assuming the typical 20kHz after filtering) would be inaudible, so it wouldn't be relevant. Or, as bigshot so succinctly put it, 'inaudible harmonics are inaudible', so why would it even be desirable to have infinite bandwidth?
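
And there's a second reason the above-Nyquist energy has to be filtered out before sampling, not kept: it aliases. A quick sketch with illustrative numbers:

```python
import numpy as np

fs = 44100
n = np.arange(1024)
# A 25 kHz "harmonic" sampled at 44.1 kHz doesn't vanish - it folds back
# to fs - 25000 = 19.1 kHz, right inside the audible band. That aliasing,
# not any lost "detail", is why energy above Nyquist must be removed
# before sampling.
x = np.sin(2 * np.pi * 25000 * n / fs)
spec = np.abs(np.fft.rfft(x))
alias_hz = spec.argmax() * fs / len(n)  # ~19100 Hz, not 25000 Hz
```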
 
Jul 9, 2018 at 5:02 PM Post #4,904 of 7,175
No, it does not use "auditory masking" (either frequency masking or temporal masking), it just uses compression/expansion. It's nowhere near as sophisticated as perceptual coding, which analyses about 1000 samples at a time, divides the frequency spectrum into 500 or so frequency bands, compares the results with a psycho-acoustic model and discards those bands which fall under the masking threshold. This is all completely different to just simple audio compression, there is no audio compression in perceptual coding!

G
Quantization noise fluctuates with signal level. Signal masks noise.
 
