SharpEars - Head-Fier - Joined Feb 14, 2014 - Posts: 62 - Likes: 17
The assertion:
Once the audio drops below -54 dBFS it is largely irrelevant and virtually inaudible, whether the material is classical music, high-dynamic-range music, or anything else. To hear it at that level (let alone hear it loudly enough to appreciate it), you would have to raise the volume to such a ridiculous degree, wearing such isolating headphones in such a quiet room, that as soon as a loud passage started you would be (temporarily) deafened, reducing your hearing's effective dynamic range to well below 54 dB.
-54 dBFS requires only 9 bits (you can add an extra bit for good measure, more room for dither, yada yada), so a 10-bit ADC / 10-bit DAC chain is transparent for music at all reasonable listening levels that do not cause permanent hearing damage. And if you listen at levels that do cause permanent hearing damage, then you will need fewer than 10 bits relatively soon, since you will not be able to hear at low volumes at all without a hearing aid.
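As a sanity check on the arithmetic: the theoretical dynamic range of an undithered n-bit quantizer is 20·log10(2^n), roughly 6.02 dB per bit, so 9 bits already covers about 54 dB. A quick sketch (dither and noise shaping ignored):

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an n-bit quantizer: 20*log10(2**bits),
    i.e. about 6.02 dB per bit (dither and noise shaping ignored)."""
    return 20 * math.log10(2 ** bits)

for bits in (9, 10, 16, 24):
    print(f"{bits:2d} bits -> {dynamic_range_db(bits):6.1f} dB")
# 9 bits -> 54.2 dB, 10 bits -> 60.2 dB, 16 bits -> 96.3 dB, 24 bits -> 144.5 dB
```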
Virtually all recordings have less than 60 dB of dynamic range. It's easily measured in any good DAW on a track-by-track basis. Try it with the music you believe has a higher dynamic range - you will be surprised! I am including audiophile 192/24 recordings in the mix, by the way, so I do mean virtually all. Also, when measuring dynamic range, any silent leader or trailer on the track should be excluded, since it can (unfairly) skew the results.
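For readers without a DAW handy, a crude track-level estimate can be done in a few lines of Python. This is a sketch, not a DAW-grade meter: it assumes mono 16-bit PCM, the `track_dynamic_range` helper is hypothetical, and it defines dynamic range as the peak sample level over the RMS of the quietest non-silent window, which also takes care of the silent-leader/trailer caveat above:

```python
import wave, struct, math

def track_dynamic_range(path: str, win: int = 4096, floor: float = 1e-6) -> float:
    """Rough dynamic range estimate for a mono 16-bit PCM .wav file:
    peak sample level over the RMS of the quietest non-silent window."""
    with wave.open(path, "rb") as w:
        assert w.getsampwidth() == 2, "16-bit PCM only in this sketch"
        raw = w.readframes(w.getnframes())
    samples = [s / 32768.0 for s in struct.unpack(f"<{len(raw)//2}h", raw)]
    peak = max(abs(s) for s in samples) or floor
    quietest = None
    for i in range(0, len(samples) - win, win):
        rms = math.sqrt(sum(s * s for s in samples[i:i + win]) / win)
        if rms > floor:  # skip silent leader/trailer windows
            quietest = rms if quietest is None else min(quietest, rms)
    return 20 * math.log10(peak / (quietest or floor))
```

Running it over a loud passage followed by a quiet one roughly 40 dB down reports a ratio in that ballpark, which is exactly the kind of number the claim above predicts for real recordings.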
The proof based on the technicals combined with your own ears:
Anyone who thinks that more than a 10-bit sample depth matters needs to watch the following video, starting at 45:48: https://www.youtube.com/watch?v=BYTlN6wjcvQ. Let your own ears, on your own (high-end) equipment, be the judge.
Actually, if you want to do this test with complete accuracy, you can download the original .wav file used for this test at: http://ethanwiner.com/aes/bit_reduction.wav
I start hearing noise at around 18 seconds into the .wav file, which equates to a 7-bit depth, when listening on Sennheiser HD650 headphones connected via a balanced cable to an OPPO HA-1 fed via asynchronous USB (i.e., I don't think anyone can call my system "low-res"), with the volume set quite loud in a very quiet room. Let me repeat: 7 bits is enough to transparently encode this song when it is listened to on the equipment I just mentioned.
Now, if you want to do the test yourself, get the .wav file at the link I just posted and note at what second you start hearing noise or any objectionable artifacts. Then play the YouTube video (also linked above) starting at 46:18 for the same number of seconds you played the .wav file. From the video you can determine at what bit depth you heard the "bad audio" or noise. That, my friends, is the easiest way to convince yourself that 10 bits is plenty.
The proof based on your own tests:
If you really want to go all the way, you can download the actual VST plug-in, called +decimate, that was used in the instructional YouTube video and try it with your own DAW and your own music. I would love to hear the results. In fact, I've done all of the research for you.
Here is a link to the latest version of the VST collection that includes +decimate: http://www.soundhack.com/freeware/
You want to download the Delay Trio / Freesound Bundle from the top-left column on that page. The plug-in you're looking for from the set is +decimate; it can be found under VST/Effect/Sound Hack/+decimate in your DAW after it is correctly located and installed. On Windows it installs itself into c:\program files\common files\VST2, so I just added that directory to my DAW and refreshed the VST list to make it available.
Note: Some DAWs have their own mechanism for reducing bit depth. If you use such a mechanism at very low bit depths, try it without dither, since at very low bit depths the dither itself will be clearly audible, and the whole point here is transparency.
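If your DAW lacks such a control, the dither-free requantization itself is trivial. A minimal sketch (the `quantize` helper is hypothetical; it rounds to the nearest step with no dither, which is exactly the setting the note above recommends for very low depths):

```python
def quantize(samples, bits):
    """Requantize float samples in [-1.0, 1.0] to the given bit depth,
    rounding to the nearest step, with no dither."""
    levels = 2 ** (bits - 1)  # e.g. 9 bits -> 256 steps per polarity
    return [round(s * levels) / levels for s in samples]

# The error per sample is bounded by half a step, i.e. 1 / 2**bits,
# which is where the ~6 dB-per-bit noise floor comes from.
```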
My personal (anecdotal) experience:
I have some music high in transients that I was sure would need serious (i.e., 24-bit) depth, and it turned out that 5 bits was enough! I am both flabbergasted and speechless at this point. How can anyone even consider high-bit-depth audio again after performing this test?
Happy listening, and point all of your audiophile friends to this thread to permanently cure them of their (high-end) 192/24 purchasing/listening habits.