24bit vs 16bit, the myth exploded!
Aug 19, 2009 at 4:19 PM Post #406 of 7,175
Quote:

Originally Posted by leeperry
oh ok..so it's not 16bit either, as HDCD uses the 2 bottom bits of the CDDA data to encode HDCD 20bit dithered data AFAIK...and also the audio is +6dB louder than when it's properly HDCD decoded.

so this indeed seems to be far from a genuine 16/24bit comparison?! it'd appear to actually be a 14/24-bit comparison w/ different masterings....so please keep us posted whenever you're able to provide us w/ genuine 16/24bit files dithered from the same exact 32float source w/ the same exact mastering/dithering algorithms.

that's what annoys the OP of this thread: companies push 24bit....but the mastering is very different, and in this specific case it's even worse, because your 16bit file has been HDCD encoded, so it's not even true 16bit



From a technical perspective, I agree this is not accurate. But from a practical perspective, this is really the difference between a 16-bit and a 24-bit version of a recording in the real world. Things are recorded and mixed at 24-bit. But when a 16-bit version is released, compression or limiting is used, which effectively removes the top 8 bits. When a 24-bit release is done, the top 8 bits are left intact, so what you get is a recording with more dynamic range.

What you are asking for is either a 16-bit recording mastered without compression, or a 24-bit recording that is compressed to 16-bit standards. Neither of these happens in the real world. For a technical comparison between the two formats, I understand why you're asking. But if you want a comparison between what is released in 16-bit and what is released in 24-bit, that's exactly what you're being given.
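For what it's worth, here is a minimal sketch (my own illustration in Python/NumPy, not anything a label actually ships) of what that hypothetical "straight conversion" would look like: TPDF dither, then drop the bottom 8 bits, leaving the top 8 bits and the full dynamic range intact:

```python
import numpy as np

def straight_16bit(x24, rng=None):
    """Hypothetical straight 24-bit -> 16-bit conversion: dither, then
    drop the bottom 8 bits. No compression or limiting, so the top 8
    bits (the dynamic range) survive; only the noise floor rises."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = x24.astype(np.int64)
    # TPDF dither: two uniform randoms summed, spanning +/-1 LSB at
    # 16-bit (1 LSB of 16-bit == 256 in 24-bit units).
    tpdf = rng.integers(-128, 129, x.shape) + rng.integers(-128, 129, x.shape)
    return np.clip((x + tpdf) >> 8, -32768, 32767).astype(np.int16)
```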
 
Aug 19, 2009 at 4:27 PM Post #407 of 7,175
...so we'll never be able to compare 16/24bit properly.

I've got some mind-blowing 24/96 5.1 lossless recordings, but god knows whether they sound "better" than CDDA due to higher mastering standards or increased resolution?

so it's like the cable debates, there will never be a final answer...so let ppl believe what they like, what matters in the end is the end user's enjoyment after all


at some point I had some 32float masters that I got from Cubase VST, and 16bit sounded much less refined than 24bit...but then the OP gave lengthy explanations to prove to us that 24bit is moot, so I blamed it on the lousy SRC/dithering algorithms (UV22HR). ah well


I tend to believe that a properly dithered CDDA can sound almost as good as 24/96..I've got an MFSL remastered CD from Jean-Michel Jarre that just sounds too good to be true, you wouldn't believe it's silly 16bit


the same way properly dithered RGB24 can look almost as good as RGB36
 
Aug 19, 2009 at 4:39 PM Post #408 of 7,175
The whole idea of the OP, if I understand correctly from reading this entire thread, is that the original source cannot record anything higher than 20kHz. There is no mic that can capture that, and our hearing capability is not beyond 60dB in most cases. The dynamic range that is offered by 24bit is really not an important factor, since 16bit has enough room to destroy our ears.

24bit/96 or even 192kHz is good for the mixing and engineering process, for the extra headroom. It's similar to why we use the raw format in digital photos; but since the extra information cannot be reproduced at the output, there really is no point in using 24/96+ in the actual listening format.

For those who listen very loud, to the point where it will destroy your ears in a short period of time, maybe having that extra range of 24bit might help a bit. But those like me who listen at a comfortable volume level will find that 16/44.1 performs exactly the same as 24/96+.

Isn't this what the OP is saying?

p.s.: is f2k with dithering the best way to downsample 24/96+ material to 16/44.1?
 
Aug 19, 2009 at 4:49 PM Post #409 of 7,175
Quote:

Originally Posted by tosehee
24bit/96 or even 192kHz is good for the mixing and engineering process, for the extra headroom. It's similar to why we use the raw format in digital photos; but since the extra information cannot be reproduced at the output, there really is no point in using 24/96+ in the actual listening format.

<snip>

Isn't this what the OP is saying?



Yes, I think that is what he is trying to say. But from a practical perspective (rather than a purely technical one), 24-bit releases have 8 more bits of dynamic range. And those 8 bits are on the top, not the bottom. What he is saying is "if 16-bit releases were the same as 24-bit releases, they would sound the same." But the fact is, they aren't the same. If a studio released a 16-bit CD with 8 bits of extra dynamic range, it would be 48 dB or so quieter than all of your other CDs. People would ask "why is this broken?" (And it would be, because only the bottom 8 bits, or 6 bits after dithering [bits 3-8], would contain any music other than transients.) Therefore, the only releases with 24 bits of dynamic range are 24-bit releases.
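(The 48 dB figure is just the ~6.02 dB-per-bit rule; a quick sanity check, my own arithmetic:)

```python
import math

def dynamic_range_db(bits):
    # Each extra bit doubles the number of quantization levels, and
    # doubling amplitude is 20*log10(2) ~= 6.02 dB.
    return 20 * math.log10(2 ** bits)

print(dynamic_range_db(16))                         # ~96.3 dB
print(dynamic_range_db(24))                         # ~144.5 dB
print(dynamic_range_db(24) - dynamic_range_db(16))  # ~48.2 dB
```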
 
Aug 19, 2009 at 5:13 PM Post #410 of 7,175
Quote:

Originally Posted by barleyguy
Yes, I think that is what he is trying to say. But from a practical perspective (rather than a purely technical one), 24-bit releases have 8 more bits of dynamic range. And those 8 bits are on the top, not the bottom. What he is saying is "if 16-bit releases were the same as 24-bit releases, they would sound the same." But the fact is, they aren't the same. If a studio released a 16-bit CD with 8 bits of extra dynamic range, it would be 48 dB or so quieter than all of your other CDs. People would ask "why is this broken?" (And it would be, because only the bottom 8 bits, or 6 bits after dithering [bits 3-8], would contain any music other than transients.) Therefore, the only releases with 24 bits of dynamic range are 24-bit releases.


Yes. But those 8 extra bits of dynamic range are not within the domain of our hearing capability. Isn't that what the OP is saying? What am I missing here?
 
Aug 19, 2009 at 5:16 PM Post #411 of 7,175
Quote:

Originally Posted by tosehee
Yes. But those 8 extra bits of dynamic range are not within the domain of our hearing capability. Isn't that what the OP is saying? What am I missing here?


If those 8 bits were on the bottom (least significant), they would be out of our hearing capability. But they're not. They're on the top, and they get removed with compression or limiting. (Side note: Limiting==fast, compression==slow.) So it does in fact sound different, because of the compression or limiting.
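To make the fast/slow distinction concrete, here's a toy sketch (mine, heavily simplified; real mastering tools add attack/release smoothing, lookahead, and make-up gain) for float samples in [-1.0, 1.0]:

```python
import numpy as np

def limit(x, threshold=0.5):
    # Limiting: peaks over the threshold are clamped instantly.
    return np.clip(x, -threshold, threshold)

def compress(x, threshold=0.5, ratio=4.0):
    # Compression: above the threshold, the excess is scaled down by
    # the ratio rather than clamped. (A real compressor derives its
    # gain from a smoothed level envelope, with attack and release
    # times, which is what makes it "slow"; this static per-sample
    # curve is a simplification.)
    mag = np.abs(x)
    y = x.copy()
    over = mag > threshold
    y[over] = np.sign(x[over]) * (threshold + (mag[over] - threshold) / ratio)
    return y
```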
 
Aug 19, 2009 at 5:30 PM Post #412 of 7,175
Quote:

Originally Posted by barleyguy
If those 8 bits were on the bottom (least significant), they would be out of our hearing capability. But they're not. They're on the top, and they get removed with compression or limiting. (Side note: Limiting==fast, compression==slow.) So it does in fact sound different, because of the compression or limiting.


I read it as: the upper limit is also not audible above 60dB, unless you wanna destroy your ears.
 
Aug 19, 2009 at 5:37 PM Post #413 of 7,175
So either way, would a vinyl rip to FLAC downconverted to 16bit sound identical on a higher-end DAC?
 
Aug 19, 2009 at 5:38 PM Post #414 of 7,175
Quote:

Originally Posted by tosehee
I read it as: the upper limit is also not audible above 60dB, unless you wanna destroy your ears.


I don't think you're understanding what I'm saying....

When 16-bit audio is produced, it is generally recorded and mixed at 24-bit. When the conversion to 16-bit is done, it's not "let's do a straight conversion with dither, and remove the bottom 8 bits." It's "let's compress this until it sounds louder on a 16-bit medium, and remove the top 8 bits." So actual transients are lost, air and soundstage are lost, and there are possibly artifacts from the compression.

What is being asked as far as I can tell is, "if we did a straight conversion from 24-bit to 16-bit (removing the bottom 8 bits), would it sound the same?" The fact is, no commercial release of music is ever done that way. The standards and expectations for the two formats are different. So they sound different.

EDIT: Another way to explain this is: You can't just throw away the top bits. If you do, you'll get a nasty clipping sound. You have to do something with them, and no matter what approach is taken, something is lost.
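A toy demonstration of that EDIT (my own, in Python/NumPy): keep only the low 16 bits of a 24-bit signal and anything over 16 bits wraps around into garbage, which is why the top bits have to be dealt with by compression, limiting, or gain reduction instead:

```python
import numpy as np

# A 440 Hz sine at roughly -6 dBFS in 24-bit integer samples.
n = np.arange(48000)
x24 = (0.5 * (2**23 - 1) * np.sin(2 * np.pi * 440 * n / 48000)).astype(np.int32)

# "Throwing away the top 8 bits": the waveform wraps around (severe
# distortion), because the signal exceeds what 16 bits can hold.
broken = (x24 & 0xFFFF).astype(np.uint16).view(np.int16)

# Compare with straight_16bit() in the earlier sketch, which instead
# dithers and drops the BOTTOM 8 bits.
```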

Quote:

Originally Posted by DoYouRight
So either way, would a vinyl rip to FLAC downconverted to 16bit sound identical on a higher-end DAC?


It depends on how you do the downconversion. That's my whole point. If you record it into a 24-bit ADC, you can choose (during the conversion) to leave the dynamics intact and have it be really quiet compared to other 16-bit recordings, or you can do some sort of dynamic range compression (or simply normalization). The second choice may even be the best, because the original vinyl may not have enough dynamic range to need 24 bits.
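(For the "simply normalization" option, a minimal sketch, mine, for float samples in [-1.0, 1.0]; note that plain peak normalization is just a constant gain, so the relative dynamics survive, unlike compression:)

```python
import numpy as np

def peak_normalize(x, target_peak=0.98):
    # Constant gain so the loudest sample lands at the target peak;
    # the dynamics (ratios between loud and quiet) are unchanged.
    peak = np.max(np.abs(x))
    return x if peak == 0 else x * (target_peak / peak)
```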
 
Aug 19, 2009 at 6:05 PM Post #415 of 7,175
I think vinyl is a different story, because the more you quantize the crackles, the worse they will sound...but again it's source dependent, a very high quality vinyl rip will hardly have any audible crackles. but top notch vinyl equipment costs an arm and a leg, plus you need a mint unplayed copy...and you also need to clean it thoroughly if it's been stored for ages.
 
Aug 19, 2009 at 6:16 PM Post #416 of 7,175
Quote:

Originally Posted by barleyguy
I don't think you're understanding what I'm saying....

When 16-bit audio is produced, it is generally recorded and mixed at 24-bit. When the conversion to 16-bit is done, it's not "let's do a straight conversion with dither, and remove the bottom 8 bits." It's "let's compress this until it sounds louder on a 16-bit medium, and remove the top 8 bits." So actual transients are lost, air and soundstage are lost, and there are possibly artifacts from the compression.

What is being asked as far as I can tell is, "if we did a straight conversion from 24-bit to 16-bit (removing the bottom 8 bits), would it sound the same?" The fact is, no commercial release of music is ever done that way. The standards and expectations for the two formats are different. So they sound different.

EDIT: Another way to explain this is: You can't just throw away the top bits. If you do, you'll get a nasty clipping sound. You have to do something with them, and no matter what approach is taken, something is lost.



I don't know if you understand what's going on.

A single sample is usually represented by a floating-point number in the range [0; 1].
(0.0 is silence, 1.0 is clipping)
Floats normally have 32 bits, so they allow a pretty fine resolution.

Reducing the number of bits to represent these sample values just reduces the resolution (in other words: the distance between two consecutive numbers increases).

As the OP stated, this doesn't make the sound worse, since the original waveform can be restored perfectly using dithering.

You also say "let's compress this until it sounds louder on a 16-bit medium, and remove the top 8 bits." Isn't that sheer nonsense? Why should there be a change in volume? (remember: the dynamic range changes, and dynamic range != music volume)
Why on earth would somebody make the samples much louder and cut off upper bits? That would result in one big clipping mess.
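(For reference, a toy quantizer showing what "resolution" means here; my own sketch, and note that most float audio pipelines actually use the range [-1.0, 1.0]:)

```python
import numpy as np

def quantize(x, bits):
    # Round each sample in [-1.0, 1.0] to the nearest of 2**bits levels.
    steps = 2 ** (bits - 1)
    return np.round(x * steps) / steps

x = np.sin(np.linspace(0.0, 2.0 * np.pi, 1000))
for bits in (16, 24):
    err = np.max(np.abs(quantize(x, bits) - x))
    print(f"{bits}-bit: step {1 / 2**(bits - 1):.1e}, worst error {err:.1e}")
```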
 
Aug 19, 2009 at 7:27 PM Post #417 of 7,175
Quote:

Originally Posted by xnor
I don't know if you understand what's going on.

A single sample is usually represented by a floating-point number in the range [0; 1].
(0.0 is silence, 1.0 is clipping)
Floats normally have 32 bits, so they allow a pretty fine resolution.

Reducing the number of bits to represent these sample values just reduces the resolution (in other words: the distance between two consecutive numbers increases).

As the OP stated, this doesn't make the sound worse, since the original waveform can be restored perfectly using dithering.

You also say "let's compress this until it sounds louder on a 16-bit medium, and remove the top 8 bits." Isn't that sheer nonsense? Why should there be a change in volume? (remember: the dynamic range changes, and dynamic range != music volume)
Why on earth would somebody make the samples much louder and cut off upper bits? That would result in one big clipping mess.



I didn't say "cut off"; I said "remove", and also "compress and limit", which really is what is done in the conversion.

This all has to do with the history of the 16-bit medium, and the recent history of the 24-bit medium. When CDs were released, they were mostly conversions from analog recordings. They had a relative volume and dynamic range compression similar to that analog medium. Then, as CDs became mainstream, there was a "loudness war", where producers and record companies compressed the crap out of everything to get the "loudest" recordings. (Of course the actual volume is controlled by the volume knob, but still, there was a loudness war. As in, make a particular CD sound louder at the same knob setting as another CD.) Thus, the standards and expectations for 16-bit recordings entail a certain level, which is only about 10 dB RMS below full scale.

24-bit recordings are typically audiophile-targeted, and so they have that headroom and dynamic range restored, closer to the original studio recording. A recording with more headroom has more room for transients and soundstage, so it should inherently sound better.

I understand perfectly what you're saying. What I'm saying is that you won't see any music that is released like that, because the two formats have different expectations.
 
Aug 19, 2009 at 7:48 PM Post #418 of 7,175
@barleyguy: Ok, I misunderstood you there.
But still, what the OP said is true. You cannot hear the difference between a 24bit and a 16bit file, unless something has been "done" to it.

So these 24-bit recordings are just another way to make more money..
Guess they'll be selling expensive 32-bit recordings in a couple of years hehehe
 
Aug 19, 2009 at 7:50 PM Post #419 of 7,175
Quote:

Originally Posted by DoYouRight
So either way, would a vinyl rip to FLAC downconverted to 16bit sound identical on a higher-end DAC?


Basically, yes.


Quote:

Originally Posted by barleyguy
It depends on how you do the downconversion. That's my whole point. If you record it into a 24-bit ADC, you can choose (during the conversion) to leave the dynamics intact and have it be really quiet compared to other 16-bit recordings, or you can do some sort of dynamic range compression (or simply normalization). The second choice may even be the best, because the original vinyl may not have enough dynamic range to need 24 bits.


No, it isn't going to be really quiet?!
The second choice is evil, and it's what record companies are doing (which is very sad).
 
Aug 19, 2009 at 7:57 PM Post #420 of 7,175
Quote:

Originally Posted by xnor
@barleyguy: Ok, I misunderstood you there.
But still, what the OP said is true. You cannot hear the difference between a 24bit and a 16bit file, unless something has been "done" to it.

So these 24-bit recordings are just another way to make more money..



Yes, that's true. But it's not some big conspiracy; it's about giving people what they expect. If you released a 16-bit recording with the proper amount of headroom, it would be quieter at the same volume setting than everything else in someone's collection. With 24-bit, the standard has been to give that headroom back, resulting in a better-sounding recording.

Often the market has more to do with giving people what they expect than giving them what's best. People in general don't understand things well enough to know what's best.

The big question behind this topic seems to be "Do 24-bit recordings sound better than 16-bit recordings?"

Yes, they do. But not for the reason you would expect. They sound better because they are mastered differently.
 
