Head-Fi.org › Forums › Equipment Forums › Sound Science › 24bit vs 16bit, the myth exploded!

24bit vs 16bit, the myth exploded! - Page 122

post #1816 of 1923
Quote:
Originally Posted by cregster View Post
 

Read your 16 vs 24 article.  I think you misunderstand sound and "real life."

The whole 16 vs 24 debate is similar to "is 24 bit color better than 16 bit?"  in photos.  It is fairly obvious that 24 bit is better.  It is not as easy to manipulate someone into believing they can't see something as it is to convince them they can't hear something.

 

Here is real life.  Take two colors of blue, very similar in shade, but not the same.  There are an INFINITE number of gradations in the transition of color from the first blue to the second.  And every one of those transition colors exists.  

 

It is the same in sound. 24 bit is better, not because it can get something at the two ends of the spectrum, but because it can better get what is in between.  24 bit can capture far more of those real life gradations of the many tones that make up even one instrument sound.  Therefore, 24 bit is more true to actual life in the same way a 24 bit color photo is much more true to life than a 16 bit photo, even though you can see the image very well with 16 bit.  

 

There are also many emotional nuances put into the music by the players. These can also be more fully described because there are also an infinite number of gradations in pressure, pluck, etc. from one to another.

 

So, measuring the frequencies and the dynamic range etc.  completely misses the point.

 

On the practical side, I have both the 16 bit remastered Beatles CDs and the 24 bit "Apple" USB version. When I play tunes randomly, while working, etc., and it is a mix of Beatles stuff and other artists, I can always tell when a 24 bit Beatles tune comes up. When I occasionally check it on the device, sure enough, it is the 24 bit version. Every time.

 

And, even without comparison, I know that listening to a 24 bit 192k orchestral recording is not even close to the same recording at 16 bit 44k. It is obvious. Not subtle.

 

Just this very odd thing that gets pushed in sound--24 bit is better than 16 bit in every use of bits (machine running, CPU, photography, robotics, cars sensing the road and on and on)  except just this ONE area,  sound. Very strange.

The ears work through a different mechanism than the eyes. The way the ears handle information is vastly different, so it is not as simple as increasing resolution: it only helps up to a point, and beyond that it brings no benefit and may even produce more distortion. The real question here is whether you can perceive a difference in audio quality. The ears can only ever hear a small range of frequencies, and that range diminishes with age.

To make this simple, take a square. You only ever need to know the four points that make up its corners. To draw a square, first draw the four corner points, then draw a straight line from point to point. Increasing the number of points is useless: you will still end up with a perfect square even if you allocate a trillion points between the corners. Unless, of course, you are talking about, say, a circle. But in this case the square can be fully represented by those four points, and more points are unnecessary. The same can be said for audio: you only ever need so much to represent the sound, and more than that gains you nothing.
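The square analogy can be sketched numerically: given only the four corners, every in-between edge point is recoverable by interpolation, so storing extra points adds no information. A toy Python illustration (all names here are invented for the example):

```python
# A square is fully described by its four corners: any point on an
# edge can be regenerated by linear interpolation between two corners.
corners = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

def edge_point(a, b, t):
    """Reconstruct the point a fraction t of the way from corner a to corner b."""
    return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

# "Store" a trillion extra points? No need: recreate any of them on demand.
dense_edge = [edge_point(corners[0], corners[1], i / 10) for i in range(11)]
print(dense_edge[5])  # (0.5, 0.0) -- the midpoint, recovered from the corners alone
```

The extra points carry zero new information, which is the poster's claim about oversampling an already-sufficient representation.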

Also, mind you, we are humans by nature; we're not robots with objectivity in mind. Beyond our sensory organs, we have a "DSP chip", the brain, that processes all the information we perceive. The "algorithms" our "DSP chip" runs are gained through our individual experiences, so how one person hears differs substantially from person to person. In this case, an increase in a number (bit depth and sample rate) may prime our "DSP chip" to process the sound heard as "better" than one with a lower number. This is nothing more than psychoacoustics at work.

Again, we are emotional, psychological beings who work with all of our senses active at any given time, and all these senses feed information to the brain. So sound itself will not be objectively scrutinized by our judgement; everything else, like the genre of the song, the location where you are listening, and even your mood at the time, may generate differences that do not originate from the source itself.

Just my two cents.

post #1817 of 1923
Quote:
Originally Posted by Lespectraal View Post
 

The ears work through a different mechanism than the eyes. The way the ears handle information is vastly different, so it is not as simple as increasing resolution.

 

Actually, vision also has a finite resolution (which is why microscopes are useful, for example), and the ability to perceive different shades of colors is also limited. Commonly used monitor resolutions like 1920x1080 are just not enough to reach the limit yet at a typical viewing distance/FOV, but 8 bits per channel is about right when no further processing is needed.

 

The discrete "steps" of 16-bit quantized audio can be turned into uncorrelated noise (hiss) with dithering. As long as this noise is not audible, the limited "resolution" of the samples is not a problem.
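The dithering point can be shown numerically. A minimal Python sketch (signal level and step size chosen arbitrarily for the demo): a low-level sine quantized without dither produces an error that tracks the signal (correlated distortion), while adding TPDF dither before rounding turns that error into signal-independent noise, i.e. plain hiss.

```python
import math
import random

random.seed(0)
N = 20000
q = 1.0  # quantization step size
# A sine whose amplitude is below half a step: worst case for plain quantization.
signal = [0.4 * q * math.sin(2 * math.pi * 440 * n / 44100) for n in range(N)]

def quantize(x, dither=False):
    # TPDF dither: sum of two uniform random values, spanning +/- one step.
    d = (random.random() + random.random() - 1.0) * q if dither else 0.0
    return round((x + d) / q) * q

def corr(xs, ys):
    """Pearson correlation, computed by hand to stay stdlib-only."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

err_plain = [quantize(x) - x for x in signal]
err_dith = [quantize(x, dither=True) - x for x in signal]

print(corr(signal, err_plain))  # ~ -1.0: error is pure correlated distortion
print(corr(signal, err_dith))   # ~ 0.0: error is uncorrelated noise
```

The dithered error is louder on average but statistically unrelated to the music, which is exactly the trade the post describes: inaudible hiss instead of audible "steps".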

post #1818 of 1923

There are thresholds for everything... there is a frame rate for films that exceeds the "flicker threshold". There's resolution for video that exceeds the ability to see from a normal viewing distance. There's a resolution threshold for images that we can't see beyond without using magnifying glasses. And there is a threshold for recorded music. Redbook exceeds it by a little bit. Everything beyond that is overkill.

post #1819 of 1923
Quote:
Originally Posted by cregster View Post
 

Read your 16 vs 24 article.  I think you misunderstand sound and "real life."

The whole 16 vs 24 debate is similar to "is 24 bit color better than 16 bit?"  in photos.  It is fairly obvious that 24 bit is better.  It is not as easy to manipulate someone into believing they can't see something as it is to convince them they can't hear something.

 

Here is real life.  Take two colors of blue, very similar in shade, but not the same.  There are an INFINITE number of gradations in the transition of color from the first blue to the second.  And every one of those transition colors exists.  

 

It is the same in sound. 24 bit is better, not because it can get something at the two ends of the spectrum, but because it can better get what is in between.  24 bit can capture far more of those real life gradations of the many tones that make up even one instrument sound.  Therefore, 24 bit is more true to actual life in the same way a 24 bit color photo is much more true to life than a 16 bit photo, even though you can see the image very well with 16 bit.  

 

There are also many emotional nuances put into the music by the players. These can also be more fully described because there are also an infinite number of gradations in pressure, pluck, etc. from one to another.

 

So, measuring the frequencies and the dynamic range etc.  completely misses the point.

 

On the practical side, I have both the 16 bit remastered Beatles CDs and the 24 bit "Apple" USB version. When I play tunes randomly, while working, etc., and it is a mix of Beatles stuff and other artists, I can always tell when a 24 bit Beatles tune comes up. When I occasionally check it on the device, sure enough, it is the 24 bit version. Every time.

 

And, even without comparison, I know that listening to a 24 bit 192k orchestral recording is not even close to the same recording at 16 bit 44k. It is obvious. Not subtle.

 

Just this very odd thing that gets pushed in sound--24 bit is better than 16 bit in every use of bits (machine running, CPU, photography, robotics, cars sensing the road and on and on)  except just this ONE area,  sound. Very strange.


except that your example is wrong. both happening to use bits only tells us that both are digital; what the bits are used for is completely different. increased bit depth in a photo brings more colors, while increased bit depth in sound brings sounds below the already super-low -96 dB. the simplest color profile is RGB, where each channel is registered separately: 24-bit color is actually 8 bits per channel. so with sound having 16 bits, doesn't that counter your statement? we already have more steps than colors.
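The arithmetic behind that point, spelled out as a quick sanity check (just the numbers from the paragraph, nothing more):

```python
# 24-bit color is three channels of 8 bits each; 16-bit audio is 16 bits per sample.
color_steps = 2 ** 8   # 256 shades per R, G or B channel
audio_steps = 2 ** 16  # 65536 amplitude steps per sample

print(color_steps)                 # 256
print(audio_steps)                 # 65536
print(audio_steps // color_steps)  # 256: audio already has 256x the steps of a color channel
```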

 

if you had to make a parallel between photo and sound, it would obviously be brightness, not colors. and only the contrast ratio bears any resemblance to dynamic range. so your point is just plain wrong/false/irrelevant (pick the one you like). apples and oranges.

if you need to know, TV specs for brightness are crap compared to what audio systems spec for sound; the contrast ratios are crap (and the specs are usually lies, so it's even worse). printer contrast can be OK, but it's mostly crap too.

you talk about photo, but the parallel to photo would be the studio recording of pictures, not the guy looking at the result at home. a photographer can have pro needs, and 24 bits are cool for post-processing, just like they are in audio studios. but we don't output our pics in 24 bits for the public, same as music: we give a nice 16 bits, and most of the time, depending on the use, a light lossy format. exactly the same as music.

that's the problem with phony arguments: they can usually be turned around against you.

 

a variation in bit depth only describes a change of voltage or air pressure, depending on where you look. all the sound of an album can fit on one axis (and it does, for one ear); we can use a second axis for time, but there are ways around that, and that's more about sample rates than bits, so let's ignore it for a while ^_^.

 

recorded sound comes down to how many discrete values we need on that one axis to express all the sound we can discern. so let's see what that really is, and not just according to wishful people:

-people don't seem to be able to notice level variations smaller than 0.1 dB, and they also don't seem to hear anything output below about 1 dB. (so what would be the point of having 200,000 values between 0.02 dB and 0.03 dB if we'd hear the same thing with all 200,000? we're humans; our own specs aren't that great.)

-110 dB being harmful to us after only a few minutes, this is obviously the maximum we should ever use.

-a calm ambient room has at least 20 dB of noise that our brain discards voluntarily, as it discards so much information all the time to help us behave better than mad people on too much cocaine. so we seem to have a use for 110-20=90 dB in the best possible situation (listening at 85 dB instead of 110 would already reduce our needs and usable dynamic range). but let's say we need 90 dB of dynamic range with at least 0.1 dB increments to listen to music.

I'm being generous here because, as S.E mentioned somewhere, 60 dB seems to be the most dynamic range we can actually identify when listening to music. still, let's go for 90 dB of needed dynamic range.

so having 90/0.1=900 discrete values could seemingly cover our daily needs for sound, and 10 bits could cover that with 1024 values (cassette tapes did just that, or even less, in case you're thinking I'm making stuff up and my numbers are too low to be true).

the problem with 10 bits is noise: we would have quantization noise around 10*6=60 dB below full scale, something that would be very audible in calm passages.

 

so with 16 bits we push that noise down to -96 dB, and in the process we gain 2^16=65536 completely unique values, more than 50 times what we would actually seem to need. and you're here stating, as if it were obvious, that we should go for more.
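The back-of-the-envelope math above can be checked in a few lines (using the common ~6 dB-per-bit rule of thumb the post uses; the exact figure is 6.02 dB per bit):

```python
import math

needed_db = 90    # usable dynamic range assumed above
step_db = 0.1     # smallest level difference assumed audible
needed_vals = needed_db / step_db                 # 900 discrete values
bits_needed = math.ceil(math.log2(needed_vals))   # 10 bits cover it (1024 values)

print(needed_vals)       # 900.0
print(bits_needed)       # 10
print(10 * 6)            # 60: a 10-bit noise floor at -60 dB, audible on calm passages
print(16 * 6)            # 96: the 16-bit noise floor at -96 dB
print(2 ** 16 / 900)     # ~72.8: 16 bits give "more than 50 times" the values needed
```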

you can always go for "more is better" for no actual reason, just like people can buy a 10-watt amp to power IEMs. but at some point you need to stop. what is a good number? 100 times what we need? 1000 times? usually I'm OK with twice as much, or 3 for the price of 2, but you might be onto something here: "buy 1, get 16000 free!" sure would sell just fine.

sorry for being so short-sighted and settling for the cheap 50-times-more-than-needed redbook and its ugly inaudible noise.

 

 

anyway, back to the actually-working analogy between brightness and sound; they have a lot in common:

we can hear a super quiet sound if there is no other sound at the same time or just before, and we can also hear an explosion at 120 dB. so 24 bits, with 144 dB of dynamic range, really do cover our minimum and maximum possible hearing capabilities. and that's why people like you argue in favor of 24 bits.

just like our eyes can see a super faint light from a star at night and the next day see the sun. great dynamic range here too; the effective range is usually accepted as 24 f-stops (for the unacquainted, just imagine 1 f-stop as 1 bit; that works pretty well for digital media).

so both senses have a range clearly superior to what our gear offers, and in fact a pretty similar dynamic range. so we can complain like kids who heard only half of the story, or try to understand why our reasoning is false, and why the people who created the stuff we use, and knew a lot more than us about it, decided that it was enough.

 

when the sun is high in the sky, you don't see the stars, just like when you're hearing a sound at 90 dB you're not hearing the sounds at 5 or 10 dB. the stars don't go away at dawn, and taking a picture at lunch that doesn't show them won't compel you to cry out loud about all the image you're missing and how we should get better cameras.

so why are you guys doing just that with 24-bit audio?

you do not see the stars in daylight, just like you do not hear the quietest decays of an instrument when another is playing loud at the same time. you can pretend you do, but you do not. in both situations they are present, but the limit is human, not material.

just like the sun in your face will prevent you from clearly seeing the people under the trees, the loud parts of music prevent you from hearing the quietest parts of a track. asking for what you could never hear even if the band were in front of you does seem like the strangest obsession. and it would be laughed at in all areas like

Quote:
(machine running, CPU, photography, robotics, cars sensing the road and on and on)  except just this ONE area,  sound. Very strange.

very strange indeed. only in that one area will people come to believe that they understand a technology and try to convince us, when really they just know how to press play.

 

you say you can recognize the 24-bit tracks of the Beatles; it may well be. it could also be luck. maybe your player doesn't deal with 16 and 24 bits the same way and that leads to a different sound (some IMD because of the higher sample rate?), or the masters are different (did you check for differences in Audacity?), or maybe the 24-bit file has a little more delay before starting on your DAP because it takes 0.08 s more to buffer, and your clever brain associated that delay with high-quality sound... it could be a lot of things, but it is not because there is more sound or more "precision" on the 24-bit track. and that's a statement, not an opinion.
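The "check for differences" suggestion amounts to a null test: invert one version and sum it with the other; whatever survives is the actual difference between them. A self-contained Python sketch of the idea using synthetic data (a real check would decode the two tracks, sample-align them, and diff those; the two "tracks" here are stand-ins):

```python
import math

def rms_db(samples):
    """RMS level in dB relative to full scale (1.0)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# Stand-ins for two decoded, time-aligned versions of the same track.
track_a = [0.5 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
track_b = [s * 1.001 for s in track_a]  # identical audio, ~0.009 dB louder

# Null test: subtract sample by sample; the residual is the real difference.
residual = [a - b for a, b in zip(track_a, track_b)]
print(round(rms_db(residual), 1))  # ~ -69.0 dB: a tiny level mismatch, nothing more
```

If the masters genuinely differ, the residual is large and structured; if only the bit depth differs, it sits down near the 16-bit noise floor, far below anything audible during playback.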

and having experienced something on one system isn't enough to say it's the same for all audio systems. in fact, it's not even enough to say it's about 16 vs 24 bits; it could be the sample rate, or the system itself being picky about something. you're just turning the assumption that what you hear is the 8 extra bits into a false conclusion.

 

and I'm writing all this useless stuff nobody will read when I'm not even against 24-bit music (I'm talking bit depth, not sample rate!!!!!!). that's how much I dislike it when people use false argumentation to mislead others.

16 bits are not enough because sound can't get any better; they're enough because they're already more than what us puny humans can hear. just like we don't go around asking for pictures at more than 300 dpi, because 300 dpi for something held at reading distance is already more than the resolving power of our eyes. more would bring nothing to us.

more than the best we can hear will still just amount to the best we can hear. so no! more is not better for us here; it just makes bigger files.

post #1820 of 1923
[popcorn.gif]
post #1821 of 1923

He won't be coming back.

post #1822 of 1923

I love this thread.

post #1823 of 1923

We are all hear, because we are not all their.

post #1824 of 1923

I hope he comes back. lol

post #1825 of 1923

After reading some other forum where a similar discussion was going on,
with some aggressiveness on both sides, I've decided to make up a story...
Most of this stuff has been talked about many times before, but whatever...

Little story of KrzysiekK

KrzysiekK was an extreme audiophile, spending lots of money on speakers and cables, always checking whether something could be improved in his audio chain.
At some point he found out about the new 24-bit formats and thought to give them a try.
But his more technically advanced friends told him that this is a waste of money and space, giving him some technical reasons.
The first thing KrzysiekK encountered was an 8-bit vs 16-bit test of a song by his favorite artist Psy, 'Gangnam Style';
he obviously heard that 8 bits are no good for capturing the complexities of Psy's voice (and scored 10/10 in the guessing game!).
He looked at this recording and found it to be very compressed stuff, with all the sounds pushed near the maximum output value.
So he concluded (with the ease of guessing) that he might need something like 40-48 dB of dynamic range from the
average sound level (and in this case the average roughly equates to the maximum and minimum values, due to compression).
He gave the matter some thought and gathered, remembering some scientific article, that if THD is audible at 1%, and they claim that 0.3% can also be audible,
then it would be better to have 40-50 dB of dynamic range below the lowest sound (at least below the lowest sound present continuously for some part of the piece, like the brushing of hi-hats).
He also remembered some article which claimed that even changes of magnitude -50 dB from the main sine can be distinguishable in the first harmonic.

But all that was OK in the Psy song; most of the noises were -6 dB from the top, so he still had 90 dB of dynamic range for the harmonics and such.
So now he went to his favorite music (not counting Psy, of course), that being classical music.
(And he did the check without knowing what dither is.)

He went to the opera, sat in the first row, and asked the orchestra to play as loud as they could.
He measured (as he now uses the scientific method, like his friends) 120 dB; he also asked that they make some quiet sound
(someone brushed the hi-hats), which resulted in 40 dB on the SPL meter.

He then went home and started doing the math:
120 dB at maximum means that the minimum sound will be 24 dB, and the next one will be 30 dB (a 6 dB jump!).
But the quiet sound was 40 dB, and he needs something like 40-48 dB of dynamic range to grasp the harmonics (at a few kHz, human sensitivity goes even to minus a few dB),
and then the first harmonic could only take values of 0, 24 or 30 dB, which does not seem right
(he would be able to change the volume of the main sine from 40 to 41 dB, but not of the first harmonic).

So he went to his friends with his concerns, and they told him this is no problem, because of dithering.
So he asked his fellow musicians to use that magical (statistical) trick and bring him back the recordings.

He sat and put on his recording, and while the sound of the brushing of hi-hats was not 100% perfect on the previous recording,
he now got annoying 24 dB noise coming out of his speakers (and he has a very quiet room, and very isolating headphones, as is a must for an audio purist).

So he was disenchanted with the whole scientific stuff, decided that 16 bits are good for pop music (where they compress the dynamic range anyway),
and started buying DSD records.
(BTW, when is Psy going to release his all-time hit on DSD, or at least in 96 kHz/24-bit like Lady Gaga?)

post #1826 of 1923

I'm sure you understand what you're trying to say. That makes one of us.

post #1827 of 1923
Quote:
Originally Posted by xdog View Post
 

After reading some other forum where a similar discussion was going on,
with some aggressiveness on both sides, I've decided to make up a story...
Most of this stuff has been talked about many times before, but whatever...

Little story of KrzysiekK

KrzysiekK was an extreme audiophile, spending lots of money on speakers and cables, always checking whether something could be improved in his audio chain.
At some point he found out about the new 24-bit formats and thought to give them a try.
But his more technically advanced friends told him that this is a waste of money and space, giving him some technical reasons.
The first thing KrzysiekK encountered was an 8-bit vs 16-bit test of a song by his favorite artist Psy, 'Gangnam Style';
he obviously heard that 8 bits are no good for capturing the complexities of Psy's voice (and scored 10/10 in the guessing game!).
He looked at this recording and found it to be very compressed stuff, with all the sounds pushed near the maximum output value.
So he concluded (with the ease of guessing) that he might need something like 40-48 dB of dynamic range from the
average sound level (and in this case the average roughly equates to the maximum and minimum values, due to compression).
He gave the matter some thought and gathered, remembering some scientific article, that if THD is audible at 1%, and they claim that 0.3% can also be audible,
then it would be better to have 40-50 dB of dynamic range below the lowest sound (at least below the lowest sound present continuously for some part of the piece, like the brushing of hi-hats).
He also remembered some article which claimed that even changes of magnitude -50 dB from the main sine can be distinguishable in the first harmonic.

But all that was OK in the Psy song; most of the noises were -6 dB from the top, so he still had 90 dB of dynamic range for the harmonics and such.
So now he went to his favorite music (not counting Psy, of course), that being classical music.
(And he did the check without knowing what dither is.)

He went to the opera, sat in the first row, and asked the orchestra to play as loud as they could.
He measured (as he now uses the scientific method, like his friends) 120 dB; he also asked that they make some quiet sound
(someone brushed the hi-hats), which resulted in 40 dB on the SPL meter.

He then went home and started doing the math:
120 dB at maximum means that the minimum sound will be 24 dB, and the next one will be 30 dB (a 6 dB jump!).
But the quiet sound was 40 dB, and he needs something like 40-48 dB of dynamic range to grasp the harmonics (at a few kHz, human sensitivity goes even to minus a few dB),
and then the first harmonic could only take values of 0, 24 or 30 dB, which does not seem right
(he would be able to change the volume of the main sine from 40 to 41 dB, but not of the first harmonic).

So he went to his friends with his concerns, and they told him this is no problem, because of dithering.
So he asked his fellow musicians to use that magical (statistical) trick and bring him back the recordings.

He sat and put on his recording, and while the sound of the brushing of hi-hats was not 100% perfect on the previous recording,
he now got annoying 24 dB noise coming out of his speakers (and he has a very quiet room, and very isolating headphones, as is a must for an audio purist).

So he was disenchanted with the whole scientific stuff, decided that 16 bits are good for pop music (where they compress the dynamic range anyway),
and started buying DSD records.
(BTW, when is Psy going to release his all-time hit on DSD, or at least in 96 kHz/24-bit like Lady Gaga?)

 

you should say at the beginning that it isn't a fiction story, but a science-fiction story (after all, it's fitting for Sound Science ^_^).

I mean, crazyzic (I believe that is the correct spelling on Earth, but is it crazysick instead?) ends up listening to a 16-bit record with all 96 dB of dynamic range used by the music. where does that happen? I own hundreds of albums and couldn't find anything above maybe 75 dB of used dynamic range (usually some room noise recorded on the album). so I'm guessing... parallel universe?

he's complaining about 24 dB of randomized noise while the loudest parts of the music reach 120 dB. that brings a new question: what's the name of crazyZ's species? because obviously it's not human.

he has a listening room with lower noise than your average recording studio (25/30 dB), so I'm guessing he's listening in a very quiet spaceship in orbit around some planet.

the 24 dB of quantization noise being what annoys him brings up the question of the speakers. what unknown technology does he use? because even with the very best loudspeakers at 120 dB, the level of distortion would be so high that 24 dB of randomized noise would be the last of his problems. let's dream and pretend that his speakers have 0.1% distortion at 120 dB (lol, this is definitely an optimistic SF number). then the resulting harmonics and whatnot would reach up to 60 dB ^_^. so obviously those guys in your story invented a new technology with speakers that have 0.00001% distortion, and that's why the quantization noise is so annoying when listening on the spaceship.
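The distortion arithmetic in that paragraph, spelled out (the 120 dB peak and the 0.1% figure are the post's own illustrative numbers):

```python
import math

peak_spl = 120  # loudest passage in the story, dB SPL
thd = 0.001     # 0.1% distortion, the "optimistic" speaker figure

# 0.1% amplitude relative to the fundamental is -60 dB:
distortion_rel_db = 20 * math.log10(thd)
print(distortion_rel_db)             # -60.0
print(peak_spl + distortion_rel_db)  # 60.0 dB SPL of distortion products
# ...versus the story's 24 dB quantization noise: the speaker's own
# distortion sits ~36 dB above the noise that supposedly annoys him.
print(peak_spl + distortion_rel_db - 24)  # 36.0
```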

post #1828 of 1923

Actually, I think the noise he is hearing is tinnitus (ringing of the ears) after listening at 120 dB. Most likely your extreme audiophile friend is deaf by now.

Even 85 dB can damage your hearing if you are exposed to it for a long time, and every +3 dB from there halves the time you can safely be exposed.

 

Nice science fiction story indeed.

post #1829 of 1923
Quote:
Originally Posted by bigshot View Post
 

I'm sure you understand what you're trying to say. That makes one of us.

I'm going to steal that line one day and I'm not even going to credit you. :D

post #1830 of 1923

Actually, I think that is an Oliver Hardy line and I didn't credit him either!
