24bit vs 16bit, the myth exploded!

Discussion in 'Sound Science' started by gregorio, Mar 19, 2009.
  1. RRod
    I think the argument is that you can't guarantee that the engineers will use shaping in the end, though I would assume that these days all of this is pretty automatic. I'll have to search for some non-shaped material with a low RMS.
  2. reginalb
    You're correct, of course - I should probably use less sarcasm. But I would say that engineers that fail to use noise shaping are probably likely to come up short in more ways than just that, resulting in a master that isn't going to be great on 24-bit either, unless they're intentionally making the 16-bit worse to serve a narrative.
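For anyone curious what dither actually buys you at 16 bit, here is a minimal NumPy sketch (illustrative only, not any engineer's actual workflow): a test tone below one 16-bit LSB vanishes completely under plain rounding, but survives when TPDF dither is added before quantization.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
n = fs                               # one second of audio
t = np.arange(n) / fs
lsb = 1 / 2**15                      # one 16-bit LSB with full scale = 1.0
x = 0.4 * lsb * np.sin(2 * np.pi * 1000 * t)   # a 1 kHz tone below one LSB

# Rounding to 16 bit with no dither: the signal vanishes completely
undithered = np.round(x / lsb) * lsb
print(np.max(np.abs(undithered)))    # 0.0 -- nothing survives

# TPDF dither (sum of two uniform ±0.5 LSB sources) added before rounding
dither = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)
dithered = np.round(x / lsb + dither) * lsb

# The tone is still there, linearised into the dither noise
spectrum = np.abs(np.fft.rfft(dithered)) / n * 2   # amplitude per 1 Hz bin
print(spectrum[1000] / lsb)          # close to 0.4: the sub-LSB sine survives
```

Noise shaping is a further refinement on top of this: it redistributes that dither noise away from the ear's most sensitive band.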
  3. Darren G
    What amp or DAC do you have at home, or carry with you, that exceeds 20 bits of dynamic range; or call it 21 if you really want to push it before you hit the noise floor of your gear? Even really good gear can only questionably claim this level of performance.

    4-5 bits is potentially significant, but who listens at 100+ dB? If you listen at a reasonable level, then those 4 bits get pushed down into the noise, and at that point we are talking about a 1/1,000, 1/10,000, or let's say 1/100,000 level of noise. The sound of a gnat farting in the wind. Who is really hearing this?
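Those ratios can be sanity-checked with the standard amplitude-to-decibel conversion, dB = 20·log10(ratio); a quick illustrative sketch:

```python
import math

# Converting the amplitude ratios above to decibels
for denom in (1_000, 10_000, 100_000):
    print(f"1/{denom:,} of full scale = {20 * math.log10(1 / denom):.0f} dB")

# And the 4-5 "extra" bits of 24 bit over a ~20-bit playback chain:
print(f"4 bits = {20 * math.log10(2**4):.1f} dB further down")
```

So a 1/100,000 noise level sits at -100 dB relative to full scale, and each extra bit buys about 6 dB.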
  4. castleofargh Contributor
    Amirm will reject that idea with his argument that we can still get audible cues below the noise under the right conditions. we can't really make a point so long as he can argue from extreme circumstances. it's like a guy digging a bomb shelter in his garden: we all know it's too much money and effort for a "just in case" that may or may not prolong his life a little if the day ever comes. but at the same time we can't claim that a bomb will never fall on our heads.
    he's not making stuff up, he's only picking all the worst possible situations, many of which will never happen on an album or with our listening habits. but they could!

    I believe we can still try to be realistic and practical. if the argument is that 16bit forces the master to cut or compress realistic dynamic ranges within an album, how many 24bit albums show more than 90dB of dynamic within the signal? if any exists, and someone happens not to find them horrible and annoying, let him get those in 24bit for when he wishes to listen very loud and have a slightly more hifi experience.
    and for those who want to listen to music as if it was a live show all year long, congrats, you're likely to achieve the same hearing damage as the real professionals on tour. if that's not the "you're there with the artist" experience, I don't know what is. you guys go ahead and get everything in 24bit to enjoy all the greatness of the silent passages (if they exist on the album). and I hope it's worth it.

    and of course those who just want 24bit albums because it's more, go get them and have fun. no matter if it doesn't sound different or if you only listen at reasonable levels. not everybody has to prove that he needs something in order to have it if he wants it. at some point it was supposed to be a fun hobby. ^_^
    ZenErik, NorCal, HAWX and 3 others like this.
  5. bigshot
    The problem with some audiophiles is that they focus so much on the bleeding edge of our ability to hear, well into the range of the inaudible, that they end up soft-pedaling the sound we clearly can hear. If they spent as much time refining and perfecting the core as they do fussing with the extremes, they would have perfect sound and wouldn't need to fret so much any more. But I guess sound fidelity really isn't their focus. They're more interested in the theories of the thing than the actual sound of it.
  6. RRod
    For some measured thoughts on the subject, here's what a former AES president and fellow, Grammy and Technical Grammy winner, and decades-long consultant with JBL thought of the state of digital audio back in… 1992:
    HAWX, reginalb and castleofargh like this.
  7. gregorio
    I believe we can make such an argument/point. We can easily make this point with reference to real life, practical application but the difficulty is making the point with science alone, not because science disproves the point but because there isn't, as far as I'm aware, published science which covers all of the required factors.

    To hear beyond the limits of 16bit requires ALL the following to occur simultaneously:
    1. A recording with a noise floor lower than -120dBFS. AND, 2. A DAC, an amp, headphones/speakers and a listening environment, the noise of which COMBINED does not exceed 0dBSPL and which are ALL also capable of >120dBSPL peaks. AND finally, 3. The desire to listen to your music at >120dBSPL peaks AND the ability to actually hear a 0dBSPL noise floor after such peak levels.

    1. In theory it wouldn't be impossible to achieve this with a combination of a low noise floor recording environment, an ADC with greater than 120dB dynamic range, mics with a very large dynamic range and very close mic'ing techniques. In practice this does NOT occur. We require headroom when recording because we do not know in advance what the peaks are going to be; typically this would be anywhere from 6dB to about 18dB. When we normalise the recording, bringing the peaks close to 0dBFS, the noise floor of the recording obviously rises by the same 6-18dB, so even with the very quietest ADCs available, this still brings our peak to noise floor ratio to within, or well within, 120dB. Additionally, if we do use mics with a very high dynamic range and close mic'ing, then we also always apply significant compression.
    In real life, as @amirm's data does not disprove, the highest peak levels occur where we also have the highest noise levels, rock/pop or club gigs for example, resulting in a peak-to-noise ratio of probably no more than 60dB or so. Where we find the highest peak levels vs noise floor is almost certainly at symphony concerts, where the audience is actively trying to be as quiet as possible, but we do not use close mic'ing techniques when recording an orchestra, as amirm's quoted photo also demonstrates. We choose mics based on artistic requirements, NOT on dynamic range performance, and even that quiet audience and/or quiet musicians are not achieving a 0dBSPL noise floor.
    We cannot, however, prove any of this with published science because, AFAIK, science has not published anything in this regard; how do you measure the noise floor of a concert venue with musicians and an audience WHILE the musicians are playing? The mistake being made is to argue purely from what science has published, completely ignore the real life factors science hasn't studied/published, and then call the resultant conclusions "real life".
    It's akin to arguing that the published data on car fuel economy, before the time of more appropriate methods of measuring fuel usage, represented accurate "real life" figures and that those arguing against them were ignoring the science. Today we have other published "scientific" fuel economy figures which are more appropriate because they account for real life factors the previous tests did not: real roads, air resistance and real driving conditions, for example.
    The only potential to encounter such a dynamic range on a recording is after the musicians have stopped playing, for example, during a fade out applied by the engineers at the end of some pieces.
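The headroom arithmetic in point 1 can be sketched with round numbers (the 122 dB ADC figure and 12 dB headroom are illustrative assumptions, the headroom sitting inside the 6-18 dB range stated above):

```python
# A toy dynamic-range budget: why a real recording's peak-to-noise ratio
# falls short of 120 dB even with an excellent converter.
adc_dynamic_range = 122.0   # dB: an illustrative very good pro ADC
headroom = 12.0             # dB left under 0 dBFS while tracking (6-18 typical)

# Normalising the peaks up to 0 dBFS lifts the noise floor by the same amount
effective_range = adc_dynamic_range - headroom
print(effective_range)      # 110.0 dB, already within reach of 16 bit + noise shaping
```

And that is before adding mic, preamp and room noise, which push the real figure much lower still.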

    2. Very rare but possible.

    3. Exceedingly rare to have both and even if there are such people, it's certainly inadvisable as the published science demonstrates that MUSIC peaks above 120dBSPL are potentially dangerous/damaging and that's with a significantly higher than 0dBSPL noise floor!

    That argument holds no water; dynamic ranges are routinely compressed, but due to other limiting factors, not the dynamic range limitations of 16bit. The argument has been made that on those almost non-existent occasions where close mic'ing has been employed with very wide dynamic range mics, we should not normalise or apply compression. The problem with this argument is that it effectively means not actually mixing or mastering the originally recorded channels, or specifically making a recording with huge dynamic range rather than artistry as the goal, and doing so for a very tiny number of potential consumers who fulfil all the above requirements and are willing to potentially damage their hearing. There is no precedent for such commercial content, certainly no precedent for such content making any money, and if anyone did make such content, they could potentially open themselves up to lawsuits!

    It is the recommended practice but it is not always applied. The reason why it's not always applied has been deliberately avoided by amirm, despite him being asked a number of times. I know some mastering engineers who do not apply any sort of dither, on the basis that on a particular track/group of tracks they cannot hear any truncation error, and that if they can't hear any in their mastering studio with their experienced hearing, then it's not going to affect consumers with their lesser environments and hearing. It is expected that on those occasions when the recording is faded out, it fades out into the consumer's noise floor, not into the dither, noise-shaped dither or truncation noise of the recording. Obviously, we are not accounting for the likes of amirm: people with years of specific training to detect errors/noise, specific equipment to isolate environmental noise, and the use of that equipment in a non-recommended (and potentially damaging) way to make the noise on recordings far more noticeable. It's typically not possible or desirable to account for such an extremity; economics dictate a limited amount of time to create a recording, and accounting for such an extremity would grind each of the recording, mixing and mastering phases virtually to a standstill! If we were to account for such people, we could still do so with 16bit if we chose, which (with aggressive noise-shaped dither) allows us a dynamic range of up to 150dB in the critical band. The fact that we rarely choose such a potential dynamic range is a choice, not a limitation of 16bit.
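As a rough illustration of how noise shaping trades total noise for in-band noise, here is a first-order error-feedback sketch in NumPy. This is only the principle: commercial mastering dithers use much higher-order, psychoacoustically weighted filters to reach the kind of in-band figures mentioned above.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 44100
n = fs
lsb = 1 / 2**15
x = np.zeros(n)                      # quantize one second of digital silence
tpdf = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)

# Flat TPDF dither: the quantization noise is white
flat = np.round(x / lsb + tpdf) * lsb

# First-order error feedback: each sample's quantization error is
# subtracted from the next sample, pushing the noise up in frequency,
# away from the ear's most sensitive region.
shaped = np.empty(n)
e = 0.0
for i in range(n):
    w = x[i] / lsb - e
    y = np.round(w + tpdf[i])
    e = y - w                        # error fed forward to the next sample
    shaped[i] = y * lsb

# Compare noise power below 3 kHz (1 Hz per FFT bin at this length)
def band_power(sig, lo, hi):
    s = np.abs(np.fft.rfft(sig)) ** 2
    return s[lo:hi].sum()

ratio_db = 10 * np.log10(band_power(shaped, 1, 3000) /
                         band_power(flat, 1, 3000))
print(round(ratio_db, 1))            # roughly -12 dB less noise in-band
```

Even this crude first-order shaper buys roughly 12 dB below 3 kHz; higher-order shapers buy far more where hearing is most sensitive, at the cost of extra noise up near 20 kHz.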

    Amirm has chosen to interpret my OP as a rant against higher than 16bit. That is an INCORRECT interpretation! I've been professionally recording EXCLUSIVELY in greater than 16bit since 1992. My OP was not designed to argue against 24bit; it was designed to expose the myth that 24bit as a consumer distribution format provides higher resolution than 16bit. It doesn't. It provides exactly the same resolution but with a theoretically lower noise floor, which in practice (real life) is either not employable, or discernible ONLY if one were to play back recordings significantly differently from how the artists intended AND/or at potentially dangerous or very dangerous levels!!

  8. vatch
    Awesome stuff Gregorio. Easy to understand and exceedingly well done.
  9. 1800yolk
    my main question from the OP: what does all that extra data end up being then? This is the mind boggling part, everything else makes sense to me.
  10. bigshot
    Recordings of sound you can't hear with human ears. Also sound that doesn't even exist in music.
    NorCal likes this.
  11. 1800yolk
    Thanks! This whole topic got me thinking, though... Say you're using a tube amp. Any and all audio that passes through it will generate "even-order harmonics" (I just learned about this stuff lol), which become more prominent the more you crank your amp towards its limits. Those inaudible sounds will still be inaudible, but when all those sounds hit the air, they're going to interact with each other and will very, very subtly change the harmonics of the audible range, arguably for the better. On the other hand, the amp is going to be doing that with 16/44 stuff too, just not quite as much. Obviously factors such as mastering and mixing matter way more, but I think it isn't completely unreasonable to acquire high fidelity music for this reason. I'm curious about your thoughts though!
  12. 71 dB
    Odd harmonics are generated when you have symmetric non-linearities, in other words positive and negative sides of the signal are treated similarly.
    Even harmonics are generated when you have asymmetric non-linearities, in other words positive and negative sides of the signal are NOT treated similarly.

    Most non-linear devices are a combination of these two types of distortion. Depending on how asymmetric they are they generate more or less odd/even harmonics.

    Tube amps also generate odd harmonics, but even harmonics are dominant. In order for inaudible sounds to interact with each other (generate difference frequencies) in the audible frequency range, you need non-linearities in the air. Well, acoustic waves are VERY linear at reasonable listening levels, and non-linearities only start to occur at levels around 160 dB!
    Distortion is distortion whether you like it or not. Hi-fi is about avoiding distortion. Put the pleasing distortions in the source material! There are tons of tube amp simulator plugins available. Use them if it makes the sound better! Sell us the final product, not something we need to finalize ourselves with historical amp technology.
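The symmetric/asymmetric point is easy to verify numerically; a minimal NumPy sketch (the polynomial transfer curves are purely illustrative, not a model of any real amp):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)   # 1 kHz test tone

symmetric = x - 0.3 * x**3      # odd-symmetric transfer curve (clips both halves alike)
asymmetric = x + 0.3 * x**2     # asymmetric transfer curve (treats the halves differently)

def harmonic_level(sig, k):
    """Magnitude of the k-th harmonic of 1 kHz (1 Hz per FFT bin here)."""
    return np.abs(np.fft.rfft(sig))[1000 * k]

# The symmetric curve produces a 3rd harmonic and essentially no 2nd;
# the asymmetric curve produces a 2nd harmonic and essentially no 3rd.
print(harmonic_level(symmetric, 2), harmonic_level(symmetric, 3))
print(harmonic_level(asymmetric, 2), harmonic_level(asymmetric, 3))
```

Swapping in other curves (e.g. a hard clip vs a one-sided clip) shows the same pattern: symmetry of the non-linearity decides odd vs even harmonics.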
  13. gregorio

    If we record at 24bit using a high-end pro ADC (Analogue to Digital Converter), at least the last 3 or 4 bits are just going to be thermal noise generated by the components (resistors, etc.) in the ADC itself. At absolute best (the best ADC with nothing plugged into it), we can't have more than about 20-21 bits of material. Of course, having nothing plugged into our ADC and therefore recording nothing other than thermal noise in bits 20-24 isn't going to make a great music product! We're going to have to plug something into our ADC, mics for example. Now of course we've got the thermal noise of our ADC + the thermal noise of our mic pre-amps + the thermal noise of our mics + the noise floor of the recording studio, and these last two items are being amplified several (or many) times by our mic pre-amps! Our 20-21 bit starting point is now considerably lower, typically somewhere around 13bits or so; the rest of the bits just contain noise from all the various sources mentioned (plus usually some other sources as well, such as guitar amps/cabs for example).
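The bit counts above follow from the ideal-quantizer rule of thumb SNR ≈ 6.02·N + 1.76 dB, run in reverse; a tiny sketch (the 122 dB and 80 dB figures are illustrative assumptions, not any specific converter's spec):

```python
def effective_bits(snr_db: float) -> float:
    """Invert the ideal-quantizer rule of thumb: SNR = 6.02*N + 1.76 dB."""
    return (snr_db - 1.76) / 6.02

print(round(effective_bits(122.0), 1))   # ~20.0 bits: the bare ADC at best
print(round(effective_bits(80.0), 1))    # ~13.0 bits once real-world noise is in
```

In other words, a 24-bit file out of a real recording chain carries roughly 13 bits of signal above the noise, with the remaining bits describing that noise.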

    I'm not sure I understand, why would the amp not be doing that quite as much with 16/44 stuff? 0dBFS is the loudest a digital file can be and 0dBFS is exactly the same with 16bit as it is with 24 or any other bit depth. What changes with increased bit depth is the quietest signals we can theoretically record.

  14. SilverEars
    Does anybody have an explanation for this?

    My DAP (portable digital audio player) is capable of 32bit playback via the XMOS chip inside.

    I tried the XMOS 32bit mode and automatic mode (which I believe chooses the recording's original bit depth and sampling rate), and automatic just sounds more detailed, while the XMOS 32bit mode sounds smoothed out in its details. Why would that be? Is this a result of over-sampling?

    I notice this with Tidal as well. When I set the streaming mode to what I believe is WASAPI, the DAC chooses the original bit depth and sampling rate, and it sounds better and more detailed.

    Is this the difference between oversampling vs the original sampling rate? Does over-sampling reduce perception of resolution? Does this have to do with interpolation?
    Last edited: Apr 5, 2018
  15. bigshot
    I'd suspect that something more than just upsampling is involved there.