RRod
Thanks for the elaboration; it basically confirms that I've given hi-res a fair chance in my own personal testing.
Where things get a little tricky is that not all information can be expressed as a frequency. For example, a square wave is a collection of several harmonics, in particular proportions, and in a particular phase relationship to each other. If the frequency response of a device or recording isn't correct, then square waves played through it will both look unusual and sound wrong. However, if the frequency response is correct but the phase response is wrong, you can end up with a signal that contains all the right amounts of energy at each frequency, but has a waveform that looks nothing like the original.

Since our hearing seems to work mostly by analyzing energy, like a spectrum analyzer, this difference is mostly inaudible. However, some research suggests that we are in fact somewhat sensitive to some aspects of the shape of the waveform itself, like the precise arrival time of its leading edge.

One theory is that, even if we include all the audible frequencies up to 20 kHz so that we don't hear anything missing, the missing harmonics above 20 kHz still contribute to the overall shape of the wave, so some other aspect of our hearing (perhaps the mechanism that figures out spatial location from phase relationships) may detect that the wave shapes are now incorrect, which may result in a perceived shift in the location of that instrument in the sound stage.

In other words, the basic claim is that, even though we don't "hear" sound above 20 kHz, some of that information does in fact contribute to other things we perceive about the sound, like its location, or even some other as-yet-undefined detail, and we somehow sense when that information is altered or discarded. There have been some tests that at least suggest this may happen, but they are far from conclusive. There is also plenty of anecdotal evidence from people who claim to hear a subtle difference.
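The harmonics-versus-shape point is easy to see numerically. Here's a minimal numpy sketch (the 1 kHz fundamental and 20 kHz cutoff are illustrative choices, not anyone's actual test setup): both signals carry identical energy below 20 kHz, yet their waveforms differ.

```python
import numpy as np

fs = 96_000          # sample rate high enough to carry harmonics past 20 kHz
f0 = 1_000           # square wave fundamental
t = np.arange(fs) / fs   # one second

def square_from_harmonics(f_max):
    """Sum odd harmonics of f0 up to f_max (Fourier series of a square wave)."""
    x = np.zeros_like(t)
    k = 1
    while k * f0 <= f_max:
        x += np.sin(2 * np.pi * k * f0 * t) / k
        k += 2
    return 4 / np.pi * x

full = square_from_harmonics(fs / 2)   # all harmonics the sample rate allows
band = square_from_harmonics(20_000)   # only the audibly-ranged harmonics

# Same energy at every frequency below 20 kHz, different waveform shape:
print("peak sample difference:", np.max(np.abs(full - band)))
```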
A 1 kHz square wave is worthless as a test when talking about complex music.
So what would it take to convince someone on such a point? My feeling is: I've taken an ideal 1 kHz square wave at a silly-high rate (256*48000), decimated it down to a 96 ksps file A, sinc-filtered everything above 20 kHz to make file B, ABXed A against B, and haven't heard a whit of difference, and that's getting pretty much as many leading edges into my ears as I can. Any phase errors introduced by either the resampling or my DAC+amp would be in there too.
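In Python terms, the procedure amounts to something like the sketch below (scipy's polyphase resampler and a 2047-tap FIR stand in for whatever decimator and sinc filter were actually used; the file names are illustrative):

```python
import numpy as np
from scipy import signal
from scipy.io import wavfile

fs_hi = 256 * 48_000        # "silly high" generation rate (12.288 MHz)
fs_out = 96_000             # target rate for both test files
t = np.arange(fs_hi) / fs_hi            # one second
square = signal.square(2 * np.pi * 1_000 * t)

# File A: decimate by 128 down to 96 ksps; resample_poly's own
# anti-alias filter band-limits the result at 48 kHz.
a = signal.resample_poly(square, up=1, down=128)

# File B: file A with everything above 20 kHz sinc-filtered away
# (linear-phase FIR; mode="same" keeps A and B time-aligned).
lp = signal.firwin(2047, 20_000, fs=fs_out)
b = np.convolve(a, lp, mode="same")

# Write both at a common level for ABX in your comparator of choice.
wavfile.write("a_96k.wav", fs_out, (0.5 * a).astype(np.float32))
wavfile.write("b_96k.wav", fs_out, (0.5 * b).astype(np.float32))
```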
The frustrating thing is that such a test is considered, on most of this site at least, nothing next to some guy "sensing" a difference in his transients in a sighted evaluation of two different masters of a recording made on tape.
The more bits there are, the more accurate the sampling. If you convert an analogue signal to digital and back again you introduce quantisation errors in both processes, leading to distortion. With 24 bits the errors are about 1/256th the size of those occurring when 16 bits are used (each extra bit halves the step size, and there are eight extra bits). This is what is relevant, not the dynamic range (which, as mentioned, is adequately covered by 16 bits).
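That ratio is easy to check numerically; a small sketch, using plain rounding as a stand-in for a real converter:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 1_000_000)    # stand-in for an analogue signal

def quantize(x, bits):
    """Round to the nearest step of an n-bit uniform quantizer."""
    steps = 2.0 ** (bits - 1)
    return np.round(x * steps) / steps

rms16 = np.sqrt(np.mean((quantize(x, 16) - x) ** 2))
rms24 = np.sqrt(np.mean((quantize(x, 24) - x) ** 2))
print(rms16 / rms24)    # ~256, i.e. 2**(24 - 16)
```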
You have assumed that, because the quantization errors are smaller, they are less audible, which is of course the exact thing that you need to prove.
Actually he is right: as quantization errors get smaller, they become less audible. Higher bit depths do result in less quantization noise, and a higher bit depth does mean more dynamic range, i.e. more room between the loudest and the softest parts. A 16-bit recording has a dynamic range of about 96 dB (closer to 120 dB with noise-shaped dither); a 24-bit recording has a maximum dynamic range of 144 dB, while the human ear spans roughly 140 dB. So 16 bits is more than enough to keep the quantization noise buried: to make that noise audible, you would have to turn the playback up so far that the music's peaks sat a full 96 dB above it, which is absurd in practice. So 16 bit is more than enough, and 24 bit is kinda unnecessary. Though the human mind works in strange ways, and belief is the greatest religion in the universe.
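Those dB figures follow from the usual rule of thumb of roughly 6 dB of dynamic range per bit (20·log10(2) ≈ 6.02 dB); quick to verify:

```python
import numpy as np

for bits in (16, 24):
    dr = 20 * np.log10(2.0 ** bits)     # ~6.02 dB per bit
    print(f"{bits} bits -> {dr:.1f} dB")
# 16 bits -> 96.3 dB
# 24 bits -> 144.5 dB
```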
If the quantization noise at 16 bits is already inaudible under normal listening conditions, then it isn't any less audible at 24 bits. So we're in agreement.
Great thread, folks. I didn't read every post, but definitely enough to convince me that the limitations of my system have nothing to do with bit depth. CD-quality recordings, properly mastered, are good enough for my 50-year-old ears, and if I want better sound I should spend more on my headphones.