Head-Fi.org › Forums › Equipment Forums › Sound Science › 24bit vs 16bit, the myth exploded!

24bit vs 16bit, the myth exploded! - Page 62

post #916 of 1510

Soooo, if, as others have mentioned in different threads, they cannot hear a difference between 320 kbps MP3 and .wav, and people in this thread are saying there is no discernible difference between 16 and 24 bit, then it stands to reason there is no difference in sound quality between 320 kbps MP3 and 24-bit studio quality?

post #917 of 1510
Quote:
Originally Posted by dbbloke View Post

Instruments all have a unique sound (Stradivarius etc.) and as such, if you record one at 16 bit and another at 24 bit you will notice the difference in the detail and the fingerprint of the instrument. So you don't need two identical recordings at all.

The question is whether 24 bit is audibly different from 16 bit. In order to answer that question, one needs to isolate that difference from all others, otherwise there would be no way to tell which difference was really detected. The only way to do that is, first, to take one 24 bit recording and convert it to 16 bit, and second, to remove all sighted bias (what's the point of testing the strict audibility of something if other senses can instantly reveal the difference?), by way of an ABX (i.e. double blind) test. Only when those conditions are met and all other differences are eliminated, will you be able to tell with certainty whether 24 bit makes an audible difference or not. And that's not so much science as it is logic and common sense.
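The downconversion step described above can be sketched in a few lines of NumPy (an illustrative sketch, not a mastering tool; the function name and dither scale are my own choices): requantize a 24-bit signal to 16 bits with TPDF dither, so that bit depth is the only difference between the two ABX stimuli.

```python
import numpy as np

def to_16bit(samples_24bit: np.ndarray) -> np.ndarray:
    """Requantize 24-bit integer PCM samples to 16-bit with TPDF dither.

    samples_24bit: int32 array holding 24-bit values (-2**23 .. 2**23 - 1).
    """
    rng = np.random.default_rng(0)
    # One 16-bit LSB spans 2**8 = 256 steps at 24-bit scale.
    lsb = 256
    # TPDF dither: sum of two uniforms, +/-1 LSB peak at 16-bit scale.
    dither = rng.uniform(-lsb / 2, lsb / 2, samples_24bit.shape) \
           + rng.uniform(-lsb / 2, lsb / 2, samples_24bit.shape)
    dithered = samples_24bit.astype(np.float64) + dither
    # Round to the nearest 16-bit step, i.e. drop the low 8 bits.
    quantized = np.round(dithered / lsb).astype(np.int64)
    return np.clip(quantized, -2**15, 2**15 - 1).astype(np.int16)

# Example: a quiet 1 kHz tone rendered at 24-bit scale, then converted.
t = np.arange(48_000) / 48_000.0
tone24 = np.round(0.1 * (2**23 - 1) * np.sin(2 * np.pi * 1000 * t)).astype(np.int32)
tone16 = to_16bit(tone24)
```

Feeding the two versions into an ABX tool (e.g. foobar2000's ABX comparator) then isolates bit depth as the only variable under test.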
post #918 of 1510
Quote:
Originally Posted by mark_h View Post

Soooo, if, as others have mentioned in different threads, they cannot hear a difference between 320 kbps MP3 and .wav, and people in this thread are saying there is no discernible difference between 16 and 24 bit, then it stands to reason there is no difference in sound quality between 320 kbps MP3 and 24-bit studio quality?

Not at normal listening volumes. High bit depths are important for mixing, when you might need to boost the volume a great deal to bring up a detail in the mix. With 24 bit it comes up clean, but boosting an MP3 that far brings up noise that would otherwise stay inaudible at normal listening levels.
post #919 of 1510

In response to the endless debate about 192 kHz sampling: is it better?

 

I suppose that it is pointless to reply to a thread that is this old.  But someone on the internet is wrong!!  http://xkcd.com/386/   So I just had to say something.  :-)

 

People have persistently misunderstood the significance of the Nyquist frequency.  He never said that a 2x sample rate gets you a good quality reproduction of the signal...  he said it was the *absolute minimum* needed to be able to capture the signal at all.

 

But at a 2x sample rate what you are doing is turning a sine wave into a square wave.  There is a huge qualitative difference between them.  If you haven't already had that experience then you should get yourself a tone generator, set it to sine and then to square waves at the same frequency, and listen to the difference between them; the square waves sound like krap.

 

It is like the difference between DPI (dots per inch) and pixels.  If you want to print a high quality representation of a photo then you want a very high dots-per-inch printer, say 1200, even if the photo itself is only 200 pixels per inch.  That is because it takes lots of dots to do a good job of smoothly representing the pixels.

 

Audio is the same way...  you can argue all you want about the limits of human perception, which does vary greatly from person to person and is further subject to various biases.  But when it comes to actual wave-form capture, more samples is better.  You will get less distortion in the portrayal of that wave-form.  The only reason they can get away with a ~40k sample rate is they put in a low-pass filter that rounds the corners of the square waves (removes higher frequency harmonics).  This is all very fine and well, but it can lead to an overall reduction in the high end response; it is hard to make a filter that is not also rounding off the frequencies that you do want.

 

Bottom line: from a purely technical standpoint, a 192k sample rate gives you a better wave form, no ifs, ands, or buts about it.  You don't have to take my word for it, just borrow an o'scope and look for yourself.

 

But what you do with that wave form (the rest of the components in the audio system) is a totally separate matter, as is the question of whether your ears are actually sensitive enough to perceive the difference.

 

Nobody says you have to record higher frequencies with a high sample rate; that is just nonsense, a total strawman.  You record the same frequencies <20k but at a higher resolution; that's how you get a smoother wave-form with less distortion.

 

I'm really surprised that so many "experts" have got this one wrong.

post #920 of 1510
Quote:
Originally Posted by compsalot View Post

People have persistently misunderstood the significance of the Nyquist frequency.  He never said that a 2x sample rate gets you a good quality reproduction of the signal...  he said it was the *absolute minimum* needed to be able to capture the signal at all.

No, you need a sampling frequency greater than two times the maximum frequency to be able to reconstruct the original signal. The reconstruction is perfect in theory. It seems you don't understand the theorem.

 

 

Quote:
But at a 2x sample rate what you are doing is turning a sine wave into a square wave.

No, that's wrong. See reconstruction.
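What reconstruction means can be shown with a few lines of NumPy (an illustrative sketch, not how a real DAC is built): sample a 15 kHz sine at 48 kHz, which leaves barely more than three samples per cycle, then rebuild the waveform between the samples with Whittaker-Shannon (sinc) interpolation.

```python
import numpy as np

fs = 48_000.0   # sample rate (Hz)
f = 15_000.0    # tone frequency: below Nyquist (24 kHz), ~3.2 samples/cycle

# Samples over a window wide enough that truncation effects are negligible.
n = np.arange(-2048, 2048)
samples = np.sin(2 * np.pi * f * n / fs)

# Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc(fs*t - n),
# evaluated on a 16x denser time grid near the middle of the window.
t = np.arange(64) / (16 * fs)
recon = np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])

ideal = np.sin(2 * np.pi * f * t)
max_err = np.max(np.abs(recon - ideal))
# max_err is tiny: the reconstruction is the original sine, not a square wave
```

The reconstructed curve matches the original sine to within a tiny truncation error; nothing square-shaped survives, because a signal band-limited below Nyquist carries no higher harmonics for the samples to encode.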

 

 

Quote:

It is like the difference between DPI (dots per inch) and pixels.  If you want to print a high quality representation of a photo then you want a very high dots-per-inch printer, say 1200, even if the photo itself is only 200 pixels per inch.  That is because it takes lots of dots to do a good job of smoothly representing the pixels.

 

Audio is the same way... 

No, it's not. You don't understand the sampling theorem.

 

 

Quote:
you can argue all you want about the limits of human perception, which does vary greatly from person to person and is further subject to various biases.  But when it comes to actual wave-form capture, more samples is better.  You will get less distortion in the portrayal of that wave-form.  The only reason they can get away with a ~40k sample rate is they put in a low-pass filter that rounds the corners of the square waves (removes higher frequency harmonics).  This is all very fine and well, but it can lead to an overall reduction in the high end response; it is hard to make a filter that is not also rounding off the frequencies that you do want.

It seems you're talking about ancient non-oversampling DACs.

 

 

Quote:
Bottom line: from a purely technical standpoint, a 192k sample rate gives you a better wave form, no ifs, ands, or buts about it.  You don't have to take my word for it, just borrow an o'scope and look for yourself.

All DACs (ignoring the ancient NOS crap) I know of perform worse at 176.4 or 192 kHz sampling rates.

 

 

Quote:
Nobody says you have to record higher frequencies with a high sample rate; that is just nonsense, a total strawman.  You record the same frequencies <20k but at a higher resolution; that's how you get a smoother wave-form with less distortion.

 

I'm really surprised that so many "experts" have got this one wrong.

LOL, the irony! You don't get increased resolution. Please read up on the sampling theorem and oversampling DACs. Also see #846.


Edited by xnor - 12/7/12 at 3:11pm
post #921 of 1510

Xnor is right. A simple spin through the relevant Wikipedia pages will explain.

post #922 of 1510


 


 

There is and continues to be a lot of confusion in this thread about certain concepts, but I also appreciate that there are some people here with a depth of knowledge far in excess of my own.


 

Let me translate the above conversation...  see if this makes more sense to you...


 

I said: from a *purely technical standpoint*, a car that goes 200 mph is faster than a car that can only go 100 mph, and this is a fact which is beyond dispute.


 

I also said that just because a car can go 200 mph does not mean that you would or should drive it at that speed.  People seem to be confused by the idea that if a car can go 200 mph they are obligated to drive it at 200 mph; that is a major misunderstanding, and is basically what prompted me to attempt to add some clarity to this thread. (I did not realize my reply would go at the very end of the conversation instead of following the post to which I was specifically replying; why does this forum have a reply button on each message if the reply does not get associated with that message?)


 

I attempted to point out that some people buy cars that go 200 mph because they find that the car is more responsive and peppy, with smoother acceleration, even when operating at much lower speeds.


 

----------


 

Xnor responded to my comments by saying that his car performs very well when going 100 mph, but if he tries to go 200 mph his tires are not up to the task and he ends up in a ditch.  He then concluded that nobody should drive at 200 mph because it will actually be slower than driving at 100 mph.  He then tried to extend this reasoning to all cars.  He further argued that since you are not allowed to go that fast, it is pointless to use a car that can.


 

I concede this point to xnor: if you end up in a ditch and have to wait for the tow truck, then certainly you will not see a net gain in performance by driving at 200 mph, because waiting for the tow truck to come will take a lot more time than if you had proceeded at 100 mph.  E.g. the distortion will kill you.


 

Further, I think he makes a very good point that a lot of the low quality results people complain about are the result of equipment that does not live up to its specs and does not actually deliver quality performance at 200 mph.  But I also contend that some cars are designed better, and those cars do deliver quality at 200 mph.


 


 

However, we all agree that the actual legal speed limit is 60 mph, so nobody is actually going to be driving faster than 60 mph, although there is a lot of debate about the fact that in some places, such as the Nevada freeway, it is legal to drive at 80 mph (or so I've heard).


 

But this does not change the fact that a car which has more capacity is going to be more responsive even when driven at a slower speed.  And this might just be enough of a difference that some people will want to pay for that extra measure of quality, even though it does not change the fact that you are still only going to get there at 60 mph; the ride just feels a little bit smoother... and some people appreciate this difference.


 

I would even agree that the law of diminishing returns is such that the additional performance difference which results from the extra capacity of a 200 mph car versus that of a 100 mph car is likely to be quite small when both are being driven at 60 mph; but I do not accept that the difference is zero, which is what many people contend.


 

However, I concede that by the time you downsample to 44.1 kHz CD, it's probable that the *effective* difference is zero.  On the other hand, if your target is Blu-ray you do not have to degrade your final mix.  Whether or not someone's stereo+ears can actually discern the difference is indeterminate.  But one should not be too quick to dismiss the psychological satisfaction that people get from having the *best* quality specs, because that satisfaction can also be a part of the total experience; otherwise why would people put so much effort into case design, which has no impact on the sound at all, but does contribute greatly to the total experience?


 

A similar technical argument applies to 24 bit versus 16 bit.  At 24 bits your step size is smaller and your transitions are smoother.  Is this a difference that anyone can actually hear?  That remains an open question, but one thing is certain: at 16 bits it takes effort to avoid clipping, but at 24 bits it is pretty much a no-brainer that you can both avoid clipping and retain smooth gradations.


 

For an experiment, find an example photo to view, set your color mode to 16 bit, then set your color mode to 24 bit.  Can you tell the difference?  Answer: only if you have a high quality monitor and good visual acuity and the photo itself encompasses a wide gamut.  Many monitors are not of high enough quality to display the difference.  Many people's eyes are not sufficiently skilled (yes, color perception can be learned) to discern the difference (which is why monitor manufacturers get away with lower quality displays while claiming to be 24 bit).


 

One final thought: I have also seen a lot of comments expressing concern about the amount of disk space required... once again I think there is some confusion about this issue. We now live in an era of one-terabyte disk drives which can be purchased for less than $100.  This means that a gigabyte now costs ten cents.  One gigabyte will hold about ten hours of 24-bit stereo at 192k samples per second of uncompressed audio (this figure is conservative, it will actually hold more). Or in other words, the recording will cost you about one penny per hour for storage (five cents if you keep proper backups). Anybody who feels that this is too expensive... er, well, I rest my case.

 

(Update: I was half-asleep when I wrote this and got the calculation wrong. The correct amount for 192k is 60 cents per hour if you save it to DVD with 2 backups; see the correction in the message below.  At 96k your cost saving would be 50%, or 30 cents per hour.  If either of those amounts is a worry to you then this cost is the least of your concerns.)


-------------


 

I probably never should have waded into this conversation; I see that it has already gone on for years and nearly 1000 comments (I've read about half), with nothing much appearing to be resolved. It was just that I saw some major confusion happening in a lot of the comments and mistakenly thought that I might be able to add something of value here.  But I see that my notion was foolish, and I am now going to bow out as gracefully as I can manage.


 

For optimal productivity, I highly recommend reading http://xkcd.com; it makes more sense than further discourse on this thread.


Edited by compsalot - 12/8/12 at 10:02am
post #923 of 1510
Quote:
Originally Posted by compsalot View Post

One gigabyte will hold about ten hours of 24-bit stereo at 192k samples per second of uncompressed audio (this figure is conservative, it will actually hold more)

You can't even count.

24 bits * 192,000 samples per second * 2 channels = 9,216,000 bits per second = 1,152,000 bytes per second

1 gigabyte (GB) = 10^9 = 1,000,000,000 bytes
1 gibibyte (GiB) = 2^30 = 1,073,741,824 bytes

1 GB will hold: (1,000,000,000 / 1,152,000) = 868 seconds = 14 min 28 sec
1 GiB will hold: (1,073,741,824 / 1,152,000) = 932 seconds = 15 min 32 sec

1 terabyte (TB) = 10^12 = 1,000,000,000,000 bytes
1 tebibyte (TiB) = 2^40 = 1,099,511,627,776 bytes

1 TB will hold: (1,000,000,000,000 / 1,152,000) = 868,055 seconds = 241 h 7 min 35 sec
1 TiB will hold: (1,099,511,627,776 / 1,152,000) = 954,437 seconds = 265 h 7 min 17 sec
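The arithmetic above checks out; here is the same calculation as a short sketch, for anyone who wants to rerun it:

```python
# Uncompressed 24-bit / 192 kHz stereo PCM storage, as in the post above.
bits_per_second = 24 * 192_000 * 2           # 9,216,000 bits/s
bytes_per_second = bits_per_second // 8      # 1,152,000 bytes/s

GB, TB = 10**9, 10**12
seconds_per_GB = GB / bytes_per_second       # ~868 s, about 14.5 minutes
hours_per_TB = TB / bytes_per_second / 3600  # ~241 hours
```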

BTW, you totally lost me with the car analogy.

Also, this debate is kinda pointless, because ultimately, proof for audibility has to come from ABX tests (what else is there?), which audiophiles generally don't go through, and when they do and fail miserably, they just resort to denying the validity of such tests. They haven't come up with anything better, either. So this can go round and round until everyone's too exhausted to continue.
post #924 of 1510

 

Oops, I was peacefully soaking in the tub and realized I had miscalculated the storage cost... that's what I get for doing it on the fly.

So here is how it is done, for anyone who is interested and hasn't already figured it out.

 

1 terabyte = 1000 gigabytes for a cost of $100. So $100/1000 gives us ten cents per gigabyte for our storage cost.

 

Rounding up to account for overhead, we get 200k samples per second * 3 bytes per sample (24 bits) = 0.6 megabytes per second.

Since we are assuming stereo, we get 2 * 0.6 = 1.2 megabytes per second.

 

1.2 megabytes per second * 60 seconds per minute * 60 minutes per hour = 4.3 gigabytes per hour

4.3 gigabytes * 10 cents per gigabyte = 43 cents per hour

 

And 43 cents per hour is dirt cheap by any measure. Also, if you save it to DVD, they cost about 20 cents these days for about 4.5 gigs of storage.

 

These are ballpark numbers which are good enough for budgetary purposes; specific values depend on the file system used to format the disk drive, etc.

 

 

P.S.  I won't be making any further responses to this thread; as skamp says, it is pointless to continue to discuss this.


Edited by compsalot - 12/8/12 at 3:26am
post #925 of 1510
Quote:

I probably never should have waded into this conversation; I see that it has already gone on for years and nearly 1000 comments (I've read about half), with nothing much appearing to be resolved. It was just that I saw some major confusion happening in a lot of the comments and mistakenly thought that I might be able to add something of value here.  But I see that my notion was foolish, and I am now going to bow out as gracefully as I can manage.

I understand you're trying to help, but tbh you just add to the confusion.

 

Quote:
Originally Posted by compsalot View Post

I said: from a *purely technical standpoint*, a car that goes 200 mph is faster than a car that can only go 100 mph, and this is a fact which is beyond dispute.


 

I also said that just because a car can go 200 mph does not mean that you would or should drive it at that speed.  People seem to be confused by the idea that if a car can go 200 mph they are obligated to drive it at 200 mph; that is a major misunderstanding, and is basically what prompted me to attempt to add some clarity to this thread. (I did not realize my reply would go at the very end of the conversation instead of following the post to which I was specifically replying; why does this forum have a reply button on each message if the reply does not get associated with that message?)

I don't think that people are confused about that. If you want you can store a signal that is limited to fmax = 10 Hz using a 192 kHz sampling rate. It's an utter waste of space, but of course it is possible.

 

 

Quote:

I attempted to point out that some people buy cars that go 200 mph because they find that the car is more responsive and peppy, with smoother acceleration, even when operating at much lower speeds.

The analogy doesn't work. To make proper analogies you first have to understand the theorem.

Think of two cars that are identical, but one is limited to 200 mph and the other one to 100 mph. If you drive at 100 mph your ride will be smoother and quieter. The analogy is still flawed though.

 

I have no idea what you're trying to say with the further points like the ditch and tow truck.

 

Quote:
A similar technical argument applies to 24 bit versus 16 bit.  At 24 bits your step size is smaller and your transitions are smoother.  Is this a difference that anyone can actually hear?  That remains an open question, but one thing is certain: at 16 bits it takes effort to avoid clipping, but at 24 bits it is pretty much a no-brainer that you can both avoid clipping and retain smooth gradations.

Using terms like "transitions" and "gradations", have you even read the first post?

 

 

Quote:

For an experiment, find an example photo to view, set your color mode to 16 bit, then set your color mode to 24 bit.  Can you tell the difference?  Answer: only if you have a high quality monitor and good visual acuity and the photo itself encompasses a wide gamut.  Many monitors are not of high enough quality to display the difference.  Many people's eyes are not sufficiently skilled (yes, color perception can be learned) to discern the difference (which is why monitor manufacturers get away with lower quality displays while claiming to be 24 bit).

Another flawed analogy. 16 vs. 24 bit colors show a clearly visible difference even on an average monitor. After all, the human eye can discern about 10,000,000 colors. Audio is different though.

There's no simple comparison of digital images and audio. You can see the color of a single pixel but a single sample always just sounds like a click. In order to hear a tone you need several evenly-spaced samples, which is also why small quantization errors do not matter...


Edited by xnor - 12/8/12 at 10:23am
post #926 of 1510
I'm sorry compsalot, but you are flat out wrong. I suggested it before, and I'll suggest it again: reading a bit on the relevant Wikipedia pages will clear up a lot of your misconceptions.

The difference between 16 bit and 24 bit is resolution at extremely low volume levels. The dynamic range improvement extends downward. At normal listening levels, 16 and 24 bit sound identical.
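The standard figure for that dynamic range is about 6.02 dB per bit plus 1.76 dB (ideal quantizer, full-scale sine); a quick sketch:

```python
def dynamic_range_db(bits: int) -> float:
    """Theoretical SNR of an ideal quantizer driven by a full-scale sine."""
    return 6.02 * bits + 1.76

# 16 bit: ~98 dB; 24 bit: ~146 dB.  The extra ~48 dB sits entirely below
# the 16-bit noise floor, i.e. at extremely low levels.
dr16 = dynamic_range_db(16)
dr24 = dynamic_range_db(24)
```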
Edited by bigshot - 12/8/12 at 9:59am
post #927 of 1510

Well, unless this isn't PCM. In A-law, u-law or similar companding formats, this would increase precision, but nobody who cares about sound quality really uses those.

Heck, 16-bit A-law would probably be better than 16-bit PCM, bit for bit, especially in the current loudness war campaigns.

 

That said, properly implemented 16-bit PCM is more than enough; the quantization error is quiet enough too.
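As a rough check of that claim, here is an illustrative NumPy sketch (the tone level and sample rate are arbitrary choices of mine) that measures the total error of 16-bit quantization with TPDF dither:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 96_000
x = 0.5 * np.sin(2 * np.pi * np.arange(fs) * 997 / fs)  # a -6 dBFS test tone

q = 2.0 ** -15                        # one 16-bit LSB (full scale = +/-1.0)
# TPDF dither (+/-1 LSB peak), then round to the 16-bit grid.
dither = rng.uniform(-q / 2, q / 2, x.shape) + rng.uniform(-q / 2, q / 2, x.shape)
x16 = np.round((x + dither) / q) * q

noise = x16 - x
noise_dbfs = 20 * np.log10(np.sqrt(np.mean(noise ** 2)))
# noise_dbfs lands around -96 dBFS, far below audibility at sane playback levels
```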


Edited by AstralStorm - 12/9/12 at 3:27am
post #928 of 1510

There are 3 DIFFERENT things...

 

1) What is the SPEC?

 

2) What is the performance of a SPECIFIC DEVICE that ATTEMPTS to implement that spec?

 

3) What is the human ear capable of perceiving?


 

 

What I observed throughout this entire 900+ post thread is that people keep mixing those 3 things up as if they were interchangeable...  they aren't!  And that is the confused muddle that I was trying to address.

 

To claim that 192k sounds worse than 96k is ludicrous as long as you are talking about the SPEC.  It's the same as saying that a car which goes 200 mph is SLOWER than a car that goes 100 mph.  It's a totally absurd claim.  To insist that you must record ultrasonics simply because you can record ultrasonics is equally silly.

 

Now on the other hand, if you want to say that specific device XYZ sounds like krap when you try to use it at 192k but sounds good at 96k, you will get no disagreement from me.  But to debate that the SPEC for 192k delivers worse sound than the SPEC for 96k, this is nonsense.

 

I'm willing to accept your claim about the ear being unable to tell the difference. I know nothing about you, but shall presume that your experience in this field grants you the ability to make that claim. 

 

But I am also inclined to defer to the wisdom of the engineers who did extensive research when they designed the Blu-ray spec. They seemed to think it can make a difference otherwise they would not have bothered to include support for 192k in their spec.

 

-------

As far as 24 bits vs 16 bits goes, it is pointless, nay impossible, to discuss this as long as people insist that more bits can ONLY mean LOUDER bits, rather than being set to produce an equal volume range at a finer gradation, or some combination of both louder and finer (say 4 bits for each); which is a design decision of a specific implementation of the output stage.

 

----------

As far as color perception goes, perhaps I can offer you a couple of data points... because, yes, color perception is a reasonable parallel to audio perception.

 

My current laptop has a krap monitor...  it's terribly disappointing considering that the company that makes it made its reputation on the quality of the monitors on its top-of-the-line laptops.  In all fairness though, I bought the ~budget~ model (which is still twice as expensive as other brands with similar specs), in the expectation that some of that color monitor goodness of the top end would percolate down to the lower end; it did not...

 

If you take the time to do the research you will find that most laptop screens do in fact cheat...  they are physically incapable of delivering 24-bit color; 16 to 18 bits is probably a more realistic physical maximum.  (Desktop monitors are generally better than laptops, but go find yourself an old CRT and see how many millions of colors it cannot deliver.  While you are at it, don't forget to consider the quality of the video card itself; we are dealing with systems consisting of multiple components, of which the human is only one, and the quality/ability of each component in that system will affect the outcome.)
 

------------------

Next data point:

 

It's all very fine and well to assume that ~everybody~ can easily perceive millions of colors, but my experience would strongly indicate otherwise.

 

Back in the era before digital, I used to be a semi-pro photographer.  I got fed up with the commercial labs' inability to deliver the results I wanted, so I built my own color darkroom and taught myself how to use it.

 

Color darkroom exposure uses a measurement called a CC, a Color Correction unit.  When I started, I could barely tell the difference between 30 CCs of exposure.  I had read that people who were really good at it could tell the difference between 10 CCs of exposure.  After many hundreds of hours spent in the darkroom making prints, I eventually found that I could tell the difference between 5 CCs of exposure.  So, in my experience, perception can be a learned skill; I'm sure this applies to audio as well... also wine tasting, etc.

 

Now here is the catch...  I would spend hours in the darkroom agonizing over the smallest changes.  But when I exhibited my work, I found that the average person could not even perceive the subtle differences that I had felt to be so important and spent so much time perfecting.

 

So, if people want to argue that most ears can't tell the difference in the sound, then fine, knock yourself out; you are probably right.  If people want to claim that device XYZ does a lousy job of recording at 192k, I won't disagree with that either.  But if people want to argue that the SPEC for 192k delivers WORSE sound than the SPEC for 96k... well, that is where I was foolish enough to stick my hand into this chainsaw of a forum and attempt, apparently poorly, to convey something different.


 

I now choose to withdraw my hand from this churning tar-pit of iniquity ;-)


Edited by compsalot - 12/10/12 at 6:00pm
post #929 of 1510
Quote:
Originally Posted by compsalot View Post

To claim that 192k sounds worse than 96k is ludicrous as long as you are talking about the SPEC.

That's why I was specifically talking about DAC chips.

 

Quote:
As far as 24 bits vs 16 bits goes, it is pointless, nay impossible, to discuss this as long as people insist that more bits can ONLY mean LOUDER bits, rather than being set to produce an equal volume range at a finer gradation, or some combination of both louder and finer (say 4 bits for each); which is a design decision of a specific implementation of the output stage.

It is impossible to discuss this with people that do not understand the sampling theorem, quantization and dithering. Please read the first post.

 

You're talking about "equal volume range". What's that range? What's the math behind the number you come up with?

 

Hint: more bits = higher dynamic range

 

Quote:
My current laptop has a krap monitor... 

Set a wallpaper with smooth gradients on your desktop and switch from 32 bit to 16 bit colors. Do you really not see the difference?

 

Quote:
But if people want to argue that the SPEC for 192k delivers WORSE sound than the SPEC for 96k... well, that is where I was foolish enough to stick my hand into this chainsaw of a forum and attempt, apparently poorly, to convey something different.

Afaik nobody argued that.


Edited by xnor - 12/10/12 at 6:24pm
post #930 of 1510

You can assume that whenever I use the phrase "sounds like", I am referring to human hearing with ears. They're the only things I have to hear sounds with.

 

When I talk about high bit depth audio, I am talking about a recording format with a higher dynamic range. Resolution at normal listening volumes is identical, both to human ears and to spec. All of the added resolution is down below the range your ears can hear. Handy for mixes when you need to bring up something very quiet, but for listening, it's as useless as teats on a bull hog.
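That mixing headroom can be illustrated numerically (a hypothetical NumPy sketch, not anyone's actual workflow): quantize a very quiet tone at 16 and at 24 bits, apply a +60 dB gain as if pulling up a buried detail in a mix, and compare the boosted quantization error.

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
quiet = 1e-4 * np.sin(2 * np.pi * 440 * t)    # a detail sitting around -80 dBFS

def quantize(x, bits):
    """Ideal midtread quantizer to the given bit depth (no dither)."""
    scale = 2.0 ** (bits - 1)
    return np.round(x * scale) / scale

gain = 10 ** (60 / 20)                         # +60 dB boost in the mix
err16 = gain * (quantize(quiet, 16) - quiet)   # boosted 16-bit quantization error
err24 = gain * (quantize(quiet, 24) - quiet)   # boosted 24-bit quantization error

rms = lambda x: float(np.sqrt(np.mean(x ** 2)))
# rms(err16) is a couple of hundred times rms(err24): the 16-bit noise floor
# comes up with the gain, while the 24-bit floor stays far below it.
```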
