24bit vs 16bit, the myth exploded!
Dec 11, 2016 at 5:13 PM Post #3,496 of 7,175
 
It's reasonable to demand more bits if you plan to engineer, mix, apply DSP, edit or otherwise work with the tracks. If you only want to listen to them, then those demanding more bits are just placebophiles.

These forums are about listening. But that won't stop an audiophile from seeking more than necessary.
 
Dec 11, 2016 at 8:14 PM Post #3,499 of 7,175
 
If you up-scale the numbers before using a 16-bit R2R DAC, you still have to scale them back down to 16 bits. Once attenuated, the entire dynamic range is spread across a smaller set of numbers that are still quantized. So if the attenuation is such that the max value is 1024, then there are only 1024 steps of quantized resolution. Bring it down to 256 and there are even fewer steps of resolution. The question becomes: what is the audible effect? This is not the case in the analog world, where, depending on the design, one might be battling the noise floor rather than quantization.

 
You first have to properly gain-stage your system. If you have to turn down the digital attenuation so you're only using 8 to 10 bits (256 to 1024 levels) at your usual listening level, your system is set up wrong. "Full digital volume" (no attenuation), on the music sources you usually listen to, should be enough to just drive your amplifier into clipping or at least be louder than you would want to listen to. If it's far too loud, you will need analogue attenuation between the DAC and amp. If it's not loud enough, you will need a preamp with gain. Once you have that established, you can use digital attenuation without problems. By the time you attenuate down to the levels you're talking about, it'll be so quiet you won't hear any audible problems.
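To put rough numbers on that, here is a minimal Python sketch. The 105 dB SPL at full digital volume is just an assumed example figure and the DAC is idealised as exactly 16 bit; the point is only that digital attenuation moves the music down towards a noise floor that was already far below audibility in a sensibly gain-staged system.

```python
# Illustrative sketch only: assumes a hypothetical system where 0 dBFS plays
# back at 105 dB SPL and the DAC behaves like an ideal 16-bit converter.
# Digital attenuation lowers the music but not the converter's noise floor,
# so the question is simply whether that fixed floor ever becomes audible.

def playback_levels(spl_at_full_scale_db, dac_bits, digital_atten_db):
    noise_floor_spl = spl_at_full_scale_db - (6.02 * dac_bits + 1.76)  # ideal quantisation noise floor
    peak_spl = spl_at_full_scale_db - digital_atten_db                 # where the music now peaks
    return peak_spl, noise_floor_spl

for atten in (0, 12, 30, 48):
    peak, floor = playback_levels(105, 16, atten)
    print(f"-{atten:2d} dB digital volume: music peaks ~{peak:5.1f} dB SPL, "
          f"DAC noise floor ~{floor:4.1f} dB SPL")
```

Even at -48 dB of digital volume, the music in this example still sits roughly 50 dB above the converter's noise floor, which is the point about sensible gain staging.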
 
Dec 11, 2016 at 9:05 PM Post #3,500 of 7,175
  So lock the thread with a final post saying this?
 
"Conclusion: 24bit audio, all else being equal, is of no audible benefit for consumer replay."

But then you know it's exactly that kind of post that will alienate everybody who has had some anecdotal experience of something that looked, at least to them, like a counter-example.
It's the "all DACs sound the same" and "cables don't change the sound" kind of stuff. We're reaching out to people who can't tell the difference between an anecdote and globally conclusive evidence, or between a lemon and a standard unit, so we can't give a conditional truth without its conditions. When you do, some people are almost sure to misinterpret what you're saying:
-"He says it can't happen, but it happened to me that one time." Conclusion: the guy is full of crap and objectivists are idiots. (Too much? ^_^)
That is what many people will think in the end.
 
I agree with all those general statements myself, as what should be expected, rather than as claims about how things will always be. They should be taught to all audiophiles, the audiophile starter pack of knowledge and expectations, so that when something is different, the audiophile will think that something is wrong instead of thinking he has discovered iteration 284 of the one and only real sound (where is the facepalm emoji when we need one!).
 
When I say that stuff, it's like when I say USB is 5 V, or that a short headphone cable will always be less than 1 ohm. None of my USB devices are exactly 5 V, and I actually have a few short cables that are 1 ohm or a little more. So the statements are accepted standards, expected to be true (within some manufacturing margin), but they're not claims about all USB power sources and all cables under all conditions. We have to make the distinction clear enough for people of all levels of knowledge and thinking: what is true under nominal conditions vs what is true always.
 
So right now, if you explain nothing else but
"Conclusion: 24bit audio, all else being equal, is of no audible benefit for consumer replay."

In my head I go: "burden of proof lalalah".

 
Dec 12, 2016 at 12:42 AM Post #3,501 of 7,175
   
You first have to properly gain-stage your system. If you have to turn down the digital attenuation so you're only using 8 to 10 bits (256 to 1024 levels) at your usual listening level, your system is set up wrong. "Full digital volume" (no attenuation), on the music sources you usually listen to, should be enough to just drive your amplifier into clipping or at least be louder than you would want to listen to. If it's far too loud, you will need analogue attenuation between the DAC and amp. If it's not loud enough, you will need a preamp with gain. Once you have that established, you can use digital attenuation without problems. By the time you attenuate down to the levels you're talking about, it'll be so quiet you won't hear any audible problems.


Phones and many devices change gain by software or digital calculation as a means of volume control. Is one's phone or laptop properly gain-staged for normal listening at 100% volume? I don't think so. Who listens at 100% volume? Now for the fun part: where is the proof that one cannot hear any audible problems when turning down the digital volume?
 
Dec 12, 2016 at 4:46 AM Post #3,502 of 7,175
 
Phones and many devices change gain by software or digital calculation as a means of volume control. Is one's phone or laptop properly gain-staged for normal listening at 100% volume? I don't think so. Who listens at 100% volume? Now for the fun part: where is the proof that one cannot hear any audible problems when turning down the digital volume?


You have it backwards. Where is the proof that you can hear audible problems? You can't prove a negative.
 
As DH says, properly gain-staged digital volume is no problem. 24-bit on the playback end may give some leeway on that.
 
One can gain stage poorly and have issues with analog volume too. At one time quite a few tube preamps were rather high gain and could put out high voltage levels, like 20 or even 40 volts, because it reduced distortion. Used with tube power amps that would be driven to near clipping by only 0.775 volts, you had a bad combination: your preamp had to be turned well down.
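For a sense of scale, here is a quick back-of-envelope calculation of that mismatch using the figures quoted above (a 20 to 40 volt preamp output against a 0.775 volt amplifier input sensitivity):

```python
# How far down a 20-40 V preamp has to be turned when the power amp
# reaches near clipping from only 0.775 V at its input.
import math

amp_sensitivity_v = 0.775
for preamp_max_v in (20, 40):
    excess_db = 20 * math.log10(preamp_max_v / amp_sensitivity_v)
    print(f"{preamp_max_v} V preamp into a {amp_sensitivity_v} V amp: "
          f"~{excess_db:.0f} dB of excess gain to throw away")
```

That is roughly 28 to 34 dB that has to be thrown away at the preamp's volume control before the power amp is even usable.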
 
Dec 12, 2016 at 6:06 AM Post #3,503 of 7,175
 
You have it backwards. Where is the proof that you can hear audible problems? You can't prove a negative.
 
As DH says, properly gain-staged digital volume is no problem. 24-bit on the playback end may give some leeway on that.
 
One can gain stage poorly and have issues with analog volume too. At one time quite a few tube preamps were rather high gain and could put out high voltage levels, like 20 or even 40 volts, because it reduced distortion. Used with tube power amps that would be driven to near clipping by only 0.775 volts, you had a bad combination: your preamp had to be turned well down.


Turning down the volume on an analog device is not the same as scaling a discontinuous stream of numbers. So who plugs IEMs into their laptop and turns up the volume all the way? Someone with ringing in their ears?
 
Dec 12, 2016 at 7:53 AM Post #3,504 of 7,175
  It's reasonable to demand more bits if you plan to engineer, mix, apply DSP, edit or otherwise work with the tracks. If you only want to listen to them, then those demanding more bits are just placebophiles.

 
In most of those cases a higher bit depth audio file format makes no difference. In the case of editing, bit depth makes no difference. In the case of mixing/DSP it doesn't matter either, because DAWs create a virtual mix environment (commonly 64bit float) and whether you load 16bit audio files or 24bit audio files into that mix environment makes no practical difference. Where a 16bit or 24bit audio file depth does make a difference is in recording, where the increased dynamic range of 24bit is effectively employed as increased headroom, which is very useful because we can't predict before we start recording what the peak level/s are going to be. It's for this reason that pretty much all pro recording is done at 24bit.
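As a small illustration of the "virtual mix environment" point, here is a numpy sketch; it's just a toy stand-in for a DAW's 64-bit float mix bus, not any particular DAW's engine. 16-bit samples land exactly on 64-bit float values with enormous headroom to spare, so gain moves inside the bus don't care whether the source file was 16 or 24 bit:

```python
# Toy model of a 64-bit float mix bus: 16-bit PCM converts to float64 exactly,
# survives gain changes inside the bus, and converts back bit-exact.
import numpy as np

rng = np.random.default_rng(0)
pcm16 = rng.integers(-32768, 32768, size=100_000, dtype=np.int16)  # pretend 16-bit audio

bus = pcm16.astype(np.float64) / 32768.0     # into the float mix bus
bus = (bus * 0.25) * 4.0                     # a gain move and its inverse (powers of two, exact in float)
back = np.round(bus * 32768.0).astype(np.int16)

print("round trip bit-exact:", np.array_equal(pcm16, back))
```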
 

Quote:
  Phones and many devices change gain by software or digital calculation as a means of volume control. Is one's phone or laptop properly gain-staged for normal listening at 100% volume? I don't think so. Who listens at 100% volume? Now for the fun part: where is the proof that one cannot hear any audible problems when turning down the digital volume?

 
If we take your example of reducing the digital volume by 6bits (to 10bits), what we're left with is effectively 60dB of dynamic range. As hardly any commercial recordings exceed 60dB dynamic range, the 6bits you're losing are nothing more than 6bits of noise floor or digital silence. In this example, the noise floor of a recording with 60dB dynamic range would be equal to the digital noise floor. In theory, the result of this would be an increase in the total noise floor of 3dB (summing together two equal level white noise sources). This of course should not be audible because you've reduced the volume of everything to start with by 36dB. However, this depends on your gain staging, as Don Hills tried to explain. Let's take a more extreme example for the sake of demonstration: let's say you reduce your digital volume by 56dB (to roughly 7bits) and then increase the amplification by 56dB. What we now have is a digital noise floor which is only 40dB below peak level, which in many cases would be easily audible. BTW, when I say "increase the amplification" I don't necessarily just mean whacking up the dial on your amp, very sensitive/easily driven cans or IEMs might effectively achieve a similar result. However, both of these cases (high amp or cans sensitivity) are examples of very poor, incorrect gain staging and are IMHO a common reason why audiophiles can sometimes apparently easily hear that which should be inaudible.
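The arithmetic above as a quick sketch, treating each bit as roughly 6 dB (the approximation used in the example):

```python
# Rough sketch of the example above: a 16-bit file with the digital volume
# pulled down by 6 bits (~36 dB), and two equal-level noise floors summing.
import math

def noise_sum_db(*levels_db):
    """Power-sum of noise floors given in dB relative to the same reference."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

bits, atten_bits = 16, 6
print("range left after -36 dB of digital volume:", (bits - atten_bits) * 6, "dB")

# a recording noise floor at -60 dB meeting a quantisation floor at -60 dB:
print("combined floor:", round(noise_sum_db(-60.0, -60.0), 1), "dB")  # ~3 dB higher than either alone
```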
 
In the case of a phone, where you can only reduce digital volume rather than the amount of subsequent amplification, we can't prove it's not audible because we still have the variable of the headphone/IEM sensitivity. All we can say is that a phone with a competently designed output stage should allow for a fairly wide range of models of cans/IEMs without causing any audible problems. If you're significantly reducing your digital level to achieve a very quiet listening level then there absolutely should not be any audible issues but if you're reducing it by say 30dB or more in order to achieve a normal listening level, then you've got a gain staging problem and are entering the realm of audible issues.
 
G
 
Dec 12, 2016 at 3:49 PM Post #3,505 of 7,175
 
Turning down the volume on an analog device is not the same as scaling a discontinuous stream of numbers. So who plugs IEMs into their laptop and turns up the volume all the way? Someone with ringing in their ears?


Never said they were the same thing. I said poor gain staging can cause problems on the analog side as well. I said reasonable gain staging will mean digital volume control is a non-issue. These ideas are true.
 
Since I know of no laptops with analog volume control, what do most people interested in quality sound do with super-sensitive IEMs? They don't plug them directly into the laptop, which means whatever they use instead can be chosen correctly for the purpose. And as most modern laptops have a 24-bit capable sound card, it won't be an issue even with laptops. Laptops do usually have a higher noise floor on their outputs, but that isn't a digital volume control issue, simply a noise issue.
 
Dec 13, 2016 at 7:00 AM Post #3,506 of 7,175
@gregorio, I want to ask you something. Thanks for the enormous amount of information you provide as a real person in the industry. My curiosity is: wouldn't there always be errors when capturing audio, because there are no numbers in real life for a sound's exact starting and ending time, its exact frequency and its exact decibel level?
 
My question is: what is the smallest frequency increment in a lossless 44.1kHz file? Does a 44.1 kHz file capture something like 400.000038 Hz, or does it only capture 400, 401, 402 Hz and so on, in 1 Hz increments? And do higher sampling rates like 96 kHz increase the accuracy, or do they just increase the range with the same accuracy? And if they don't increase it, how can we increase it, or can't it be increased with this sampling method?
 
And the same thing about the decibel level at that frequency: what is the smallest step we can make, something like 1dB, 2dB or 1.5432dB? I don't think this can be infinitely small in a finite file. And does 24bit audio increase just the range of what we can capture, or does it also shrink the minimal increment and so increase the accuracy?
 
 
THANKS!
 
Dec 13, 2016 at 8:22 AM Post #3,507 of 7,175
Not Greg, but when asking a question you have to consider your starting reference and a working model. If we're talking about anything in the world, then we don't know of any limit to accuracy aside from the limits imposed by our tools. Quantum mechanics could even suggest that nothing is perfectly analog and that instead everything has some preferred quantized values.
 But obviously we really couldn't care less about that for music, when people think that the real analog sound is a nail rubbing on a turning piece of wobbly vinyl.
 
So instead of asking about perfection, I suggest you start looking at the resolution of recorded music: the conditions, the tools, the post-processing done on most albums when they're mastered.
Let's say you start with 24bit. The room has noise; do you need to record the noises in the room to perfection? Of course not. Then the microphones have noise and generate some amount of distortion; after all, a mechanical device will not transmit 100% of the energy without any loss, and some distortion from momentum is likely to occur. Do you know what SNR the most commonly used microphones have, for example? Do you think we need a format that goes a lot beyond that, other than for practical post-processing purposes?
Then, to record a singer, you can't possibly have the recording set at full scale, because what if the singer goes a little louder on that one take? So if you record at 24bit, you'll already start a few dB below full scale. Well, that's not really a problem, as no ADC or DAC can do 24bit anyway.
Those are the conditions under which real-life albums are made.
 
Then we move on to playback: the DAC and the amp all have some noise and distortion. The headphones/speakers have a lot of distortion; it's really not rare to have some at 60dB below the signal, and way higher than that in the low frequencies (and I'm being conservative here). So at the end of the playback chain, what resolution do you really need? Does it matter if all the noise and distortion down at -100dB isn't perfectly reproduced?
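Purely to illustrate how those contributions stack up, here is a toy noise budget in Python. Every figure in it is a made-up ballpark assumption rather than a measurement of any real gear; the point is only that the loudest contributor (usually the transducer) dominates the sum:

```python
# Toy noise/distortion budget: all figures are hypothetical placeholders.
import math

def power_sum_db(levels_db):
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

chain = {
    "room / recording noise": -70,   # assumption
    "microphone self-noise":  -75,   # assumption
    "DAC + amp noise":        -100,  # assumption
    "headphone distortion":   -60,   # assumption (often worse in the bass)
}

floor = power_sum_db(list(chain.values()))
print(f"combined floor: ~{floor:.1f} dB below the signal")
print("dithered 16-bit quantisation floor: ~-96 dB; 24-bit: ~-144 dB")
```

With numbers anything like these, the last few bits of a 24-bit file sit far below everything else in the chain.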
 
Last but certainly not least, the human ear. For blind tests we count a 0.1dB level variation as inaudible, which implies that an error of 0.1dB or less at full volume will elude us. Dunno about you, but that sure made me think the first time I read about it.
 
When you start to look at real-world examples, 24bit playback resolution is ironically excessive for a human being without an integrated µSD slot. Everything is relative, and even if you really wish to increase the resolution of your music, the file format is not the place where you should be looking.
 
Dec 13, 2016 at 9:57 AM Post #3,508 of 7,175
Thanks for the info, and I know mechanical devices have severe limitations. But as physics does, I asked my question assuming every other element of the chain is perfect; otherwise I should start elsewhere, as you said. I just asked for the numbers. If the accuracy does not increase with 24bit audio, only the range, then there is no need for 24bit for playback, as Greg says at the beginning of the thread. I just want to know the limitations of today's digital format, even if they matter less than the temperature and moisture of the air during recording.
 
Dec 13, 2016 at 10:19 AM Post #3,509 of 7,175
  My curiosity is: wouldn't there always be errors when capturing audio, because there are no numbers in real life for a sound's exact starting and ending time, its exact frequency and its exact decibel level?

 
Yes, there would be errors. The important question is, where do those errors occur? They mostly occur when we convert from one form of energy to another, for example: when converting sound pressure waves travelling through the air into electrical energy (by a microphone), when converting electrical energy back into sound waves (headphones/speakers) and lastly, when converting sound waves back into electrical energy again (the human ear). Digital audio doesn't directly capture audio (sound waves); what digital audio does is measure the voltage of that electrical signal over time and store it as digital data, which can then be used to reproduce that electrical signal. Digital audio is many times more accurate than any of the transducers (those devices just mentioned which convert one form of energy into another) in the recording/reproduction chain, and that includes the human ear! This is essentially the same as what castleofargh has said, just phrased differently.
 
Originally Posted by HAWX
 
And do higher sampling rates like 96 kHz increase the accuracy, or do they just increase the range with the same accuracy? And if they don't increase it, how can we increase it, or can't it be increased with this sampling method?

 
In effect it just increases the range with the same accuracy. Increasing the accuracy is rather pointless: the accuracy is already significantly greater than the ear can discern, and don't forget that you are effectively asking how to increase the accuracy of measuring and reproducing an electrical signal which is itself relatively very inaccurate to start with.
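As a concrete illustration of why there is no "1 Hz grid" in the file, here is a tiny numpy sketch using the 400.000038 Hz figure from the question. Sampling fixes the points in time, not the frequencies you can describe between them (up to the Nyquist limit):

```python
# Two tones 0.000038 Hz apart, sampled at 44.1 kHz: they are stored as
# genuinely different sample sequences, not rounded to the same 400 Hz tone.
import numpy as np

fs = 44100
t = np.arange(fs * 60) / fs                  # one minute of samples
a = np.sin(2 * np.pi * 400.0 * t)
b = np.sin(2 * np.pi * 400.000038 * t)

print("largest sample difference over 60 s:", np.max(np.abs(a - b)))  # small but clearly non-zero
```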
 
And does 24bit audio increase just the range of what we can capture, or does it also shrink the minimal increment and so increase the accuracy?

 
Again, it effectively increases the range, not the accuracy. The accuracy is already (with 16bit) down to tiny fractions of a decibel, far finer than the human ear can resolve. The original post of this thread explains all this.
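To put a number on "tiny fractions of a decibel", here is a quick calculation of the level difference between adjacent 16-bit sample values near full scale (ignoring dither, which in practice makes the effective level resolution finer still):

```python
# Step size in dB between neighbouring 16-bit codes, near full scale and
# around -60 dBFS for comparison.
import math

def step_db(code):
    return 20 * math.log10((code + 1) / code)

print(f"near full scale (32766 -> 32767): {step_db(32766):.5f} dB")
print(f"around -60 dBFS (32 -> 33):       {step_db(32):.3f} dB")
```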
 
If you are asking how we can improve the accuracy of capturing and reproducing audio, the answer lies in finding improvements in the transducers (mics, headphones and speakers primarily). However, transducers are very well developed, as they've been around for a century or so, and there haven't been any really significant improvements for many years, just incremental ones. And, until we find a way to improve (or maybe bypass?) the final transducer in the chain (the human ear), there is nothing to be gained by any increase in digital resolution.
 
G
 
Dec 13, 2016 at 10:58 AM Post #3,510 of 7,175
Thanks for the answer: they just increase the range! OK, I understand, but it really went sideways. I'm not asking how we can improve the overall quality; I guess I'm just not being specific enough.
 
I downloaded tones from a tone generator at 35.7 Hz and 35.1 Hz, and I can hear the difference, and I guess most people will. But is that really what is happening, or is my equipment rounding them off to 35 and 36 Hz by the time they reach the speaker? I know even the speaker will not play exactly 35 Hz, but it's the audible difference I'm questioning.
 
Do you have the numbers for the smallest possible increments of frequency and decibel level in a lossless file? Again, I'm not arguing that increasing them would give an audible advantage, I'm just asking.
 
Is this correct in terms of the minimal dB (or level) increment for 16 bit?
96 dB/65,000= 0.00147692307692.. dB (or level)?
 
