TEST INVITE: 24-bit vs. 16-bit Listening Test Part Deux... Daft Punk Edition
Mar 21, 2023 at 1:20 AM Thread Starter Post #1 of 67

Archimago

Hey guys and gals, unfortunately I don't get much time to visit many forums, but I wanted to send out an invite here on Head-Fi. I have a 16-bit vs. 24-bit listening test up if you have not seen it yet. This time it uses a Daft Punk track instead of just classical as I did back in 2014. It should not be hard to try out, and the results can be entered right on the blog page.

Can you hear the difference between 24 bits and 16 bits? Especially if you've never tried a test like this, download the samples and let me know via the survey response... and have your impressions counted as part of the final results.

Test closes May 1st - plenty of time to try out. Thanks in advance for all submissions!
 
Mar 21, 2023 at 2:43 AM Post #2 of 67
Already performed and submitted :wink:. As I strongly suspected, I could discern absolutely no difference between 16-bits and 24-bits. I must be deaf as a doornail :stuck_out_tongue_winking_eye:.
 
Mar 21, 2023 at 8:28 AM Post #4 of 67
@Archimago's blog is one of my favourites. I’ll signal-boost it!
 
Mar 21, 2023 at 8:32 AM Post #5 of 67
Already performed and submitted :wink:. As I strongly suspected, I could discern absolutely no difference between 16-bits and 24-bits. I must be deaf as a doornail :stuck_out_tongue_winking_eye:.
What was your chain for the test?

I'm also skeptical that the difference between 16 and 24 bits at 44.1 kHz would be particularly audible. However, SACDs sound better to me than CDs, so I think the increased resolution helps at some point. It isn't clear whether bit depth or sample rate has more influence, but I suspect sample rate matters more. That said, it is often difficult to compare the same masters, though I think that isn't a problem with dual-layer SACDs.

I can hear artifacts in 128 kbps MP3, but significantly less so at 192 kbps.

The other thing is that sometimes something worse on paper sounds better (i.e., more musical). For example, Sony did a good job iterating their ATRAC codec for MiniDisc; recordings sound much better than the data rate would suggest.
 
Mar 21, 2023 at 2:43 PM Post #6 of 67
SACDs sound better to me than CDs, so I think the increased resolution helps at some point. It isn't clear whether bit depth or sample rate has more influence, but I suspect sample rate matters more.

SACDs often have different mastering than CDs, so it isn't unexpected that they might sound different. The way to test whether the increased data rate is responsible for the difference is to take a 24/96 recording, bounce it down to 16/44.1, and compare the two. That way, you're comparing the data rates, not the mastering.

As for MiniDisc, better fidelity is better fidelity. If you are going to apply some sort of signal processing to "sweeten" the sound, it's better to do that using a DSP where you can adjust the degree and kind of processing precisely. Running sound through a process that degrades it without being able to control it isn't the best way to do that.
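
For anyone who wants to try that kind of bounce-down at home, here is a minimal sketch, assuming Python with the soundfile and scipy packages installed; the input filename is just a placeholder, and this is not necessarily how the blog's test files were prepared.

```python
# Hypothetical example: bounce a 24/96 master down to dithered 16/44.1 for comparison.
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

audio, rate = sf.read("master_2496.flac")            # placeholder filename; float data
assert rate == 96_000
audio_441 = resample_poly(audio, 147, 320, axis=0)   # 96000 * 147/320 = 44100, anti-aliased

# TPDF dither of +/-1 LSB (16-bit) added before the word length is reduced
lsb = 1.0 / 32768
dither = (np.random.uniform(-0.5, 0.5, audio_441.shape) +
          np.random.uniform(-0.5, 0.5, audio_441.shape)) * lsb
sf.write("bounce_1644.flac", audio_441 + dither, 44_100, subtype="PCM_16")
```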
 
Mar 21, 2023 at 2:44 PM Post #7 of 67
Already performed and submitted :wink:. As I strongly suspected, I could discern absolutely no difference between 16-bits and 24-bits. I must be deaf as a doornail :stuck_out_tongue_winking_eye:.
Re: deaf as a doornail. Um. Dude. I clicked on your signature’s GEAR hyperlink and saw your rig. You are using some humdinger, kick-anus, quality kit. And those HD6xx headphones are fantastic. I suspect your gear is giving you fantastic sound. :ksc75smile:

When I started this ridiculous hobby, I tried to hear the difference between 16- and 24-bit. I just couldn't pick it up. I suspect excellent recording engineers can make files sound superb at whatever bit depth. Example: the 1957 West Side Story (hell, I wasn't even born yet [next-level recording, apparently]).

I’m a gear-head and love shiny things. 16-bit. 24-bit. 36-bit? I'll try the test too and see if I can hear the difference. It's fun! I love screwing around with my perception of sound.
 
Mar 22, 2023 at 3:10 AM Post #9 of 67
You would go deaf trying to hear the difference between 16 and 24 bit audio. In order to hear the difference in noise floors, you would have to turn the volume up past the threshold of pain.
 
Mar 22, 2023 at 3:40 AM Post #10 of 67
Sennheiser HD600 via a custom Type 45 SET tube head amp (more info via my GEAR link in my signature).
That sounds like a very enjoyable setup but not particularly resolving.

I'm away from my main setups for a while now and cannot test but I would try with my HD 800 and LCD-3F off Audio-gd gear.

Edited: I should have explained why. For example, with the matrix adjustments (crossfeed, angle, etc.) on my Phonitor I hear basically no difference with most of my headphones, but with the HD 800 I can hear a clear difference when making adjustments. So it is more likely your gear than your ears.

Also, it helps to know what to listen for. Piano and cello are a very good test; they often sit in the so-called presence range, with frequencies similar to the human voice, and the ears are sensitive to this region. If something is wrong, you will notice it quickly. The other thing is transients like cymbals: on compressed recordings you don't hear the shimmer, and the attack and decay sound unnatural; on higher-resolution recordings you start to hear them naturally.
 
Mar 22, 2023 at 5:58 AM Post #11 of 67
There is nothing to correct in your post, bigshot, but I'll elaborate further:

You would go deaf trying to hear the difference between 16 and 24 bit audio.
Not if you only listen to the last few seconds of a track at an insane level when the last notes are decaying into the noise floor, but who listens to music like that? Only those who WANT to hear the difference no matter what, but it makes zero sense. It is like comparing TV sets by looking at the pixels with a magnifying glass. That's not how we watch TV, so those differences do not matter at all. Only the differences seen on the couch 10 feet from the TV screen matter!

In order to hear the difference in noise floors, you would have to turn the volume up past the threshold of pain.
Yep, but as soon as you do that, the insane sound pressure level of the loud parts of the music raises your threshold of hearing temporarily and even permanently! That's why consumer audio formats do not need more than about 13 bits of dynamic range; even 16 bit is overkill, let alone 24 bit! The 11 bits of overkill in 24 bit digital audio (beyond the ~13 bits actually needed) could store the whole dynamic range of vinyl on their own!
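
For the arithmetic behind those bit counts: the usual rule of thumb for an N-bit PCM channel (full-scale sine versus quantization noise) is roughly 6.02 × N + 1.76 dB. A quick sketch in Python:

```python
# Rule-of-thumb dynamic range of an N-bit PCM channel: ~6.02*N + 1.76 dB
for bits in (13, 16, 24):
    print(f"{bits}-bit: ~{6.02 * bits + 1.76:.0f} dB")
# -> 13-bit: ~80 dB, 16-bit: ~98 dB, 24-bit: ~146 dB
# Vinyl manages very roughly 60-70 dB, i.e. on the order of 10-11 bits' worth.
```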

-----

I believe it is very common among hi-res music listeners to think that 24 bit gives the music more resolution than 16 bit at all levels, so that the difference is audible, but this is not the case. For a 16 bit vs. 24 bit comparison to make sense, the 16 bit version must be a truncated version of the 24 bit one, and the truncation must use proper dithering. When it does, the dither totally prevents truncation distortion: the quantization error is traded for slightly (a couple of dB) higher dither noise, which sounds more pleasing (if one were to listen to the quiet parts at an insane level) and lets sounds decay "into it" distortion-free, just as in analog sound.

People should understand that as long as the dither is quiet enough (inaudible) at practical listening levels, and in 16 bit digital audio it is with a good margin, there is zero audible resolution difference between 16 bit and 24 bit. Even properly dithered 8 bit digital audio has exactly the same resolution and fidelity as the 24 bit version it was truncated from; at 8 bits the dynamic range simply becomes too small and the dither noise too loud (audible). This may sound unintuitive to many, but digital audio is often unintuitive. That, and the fact that people often assume they understand digital audio better than they actually do, is frequently exploited in the (hi-res) audio market.
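
To make that concrete, here is a small, deliberately exaggerated sketch of my own (8 bit quantization in plain numpy so the effect is obvious, not anything from Archimago's test files): quantizing a quiet sine without dither produces harmonic distortion spikes, while TPDF dither replaces them with a benign noise floor.

```python
# Illustrative only: quantize a quiet 1 kHz sine to 8 bits with and without TPDF dither.
import numpy as np

fs = 44_100
t = np.arange(fs) / fs
x = 10 ** (-30 / 20) * np.sin(2 * np.pi * 1000 * t)   # sine ~30 dB below full scale
lsb = 2.0 / 2 ** 8                                     # 8-bit step on a [-1, 1) range

def quantize(sig, dither):
    if dither:  # TPDF dither spanning +/-1 LSB, added before rounding
        sig = sig + (np.random.uniform(-0.5, 0.5, sig.size) +
                     np.random.uniform(-0.5, 0.5, sig.size)) * lsb
    return np.round(sig / lsb) * lsb

for label, y in (("undithered", quantize(x, False)), ("dithered", quantize(x, True))):
    spec = np.abs(np.fft.rfft(y * np.hanning(y.size)))
    spec /= spec[1000]                                 # normalize to the 1 kHz fundamental
    odd = [20 * np.log10(spec[k * 1000] + 1e-12) for k in (3, 5, 7)]
    print(label, [f"{level:.0f} dB" for level in odd])
```

With dither, the 3rd/5th/7th harmonic bins drop into the noise floor, which is the "decaying into the noise, distortion-free" behaviour described above.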
 
Mar 22, 2023 at 10:54 AM Post #12 of 67
That sounds like a very enjoyable setup but not particularly resolving.

I'm away from my main setups for a while now and cannot test but I would try with my HD 800 and LCD-3F off Audio-gd gear.
Actually, my tube amp is far from overly colored, smooth, and/or gooey, and though it leans dark rather than bright, it is in fact quite resolving... especially when driving a pair of HD600 cans.

Just got back from the home of a friend to whom I recently sold my Benchmark DAC3 and longtime favorite Benchmark DAC2 D, and to whom a while back I gifted my circa-2012 Audeze LCD-2 that were going unused and gathering dust. He also has a pair of HiFiMan HE-6 and Susvara at hand and runs a number of head amps, including an old Audio-GD Master 9PN and a Chord Hugo. We individually performed the tests with his headphone setup using these cans and DACs, and had the same results... even though he is considerably younger than me (better hearing), neither one of us heard a difference between the two Daft Punk tracks.
 
Mar 22, 2023 at 1:09 PM Post #13 of 67
Also, it helps to know what to listen for. Piano and cello are a very good test; they often sit in the so-called presence range, with frequencies similar to the human voice, and the ears are sensitive to this region. If something is wrong, you will notice it quickly. The other thing is transients like cymbals: on compressed recordings you don't hear the shimmer, and the attack and decay sound unnatural; on higher-resolution recordings you start to hear them naturally.

Bit depth has nothing to do with frequency response. It governs the noise floor.

Sampling rate governs the frequency range. We aren't talking about that in this test.

The frequencies of the human voice aren't affected by higher sampling rates. The human voice at a sampling rate of 44.1 kHz is exactly the same as at 96 kHz, and 16/44.1 covers the range of audible frequencies exactly the same as higher sampling rates do.

44.1 kHz is more than enough to capture any musical transient. In fact, sampling at that rate is at least an order of magnitude faster than any musical instrument's transient.
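
Rough back-of-the-envelope numbers for that, assuming a fast percussive attack on the order of a millisecond (my own assumption, not a measured figure):

```python
# Back-of-the-envelope: how many samples land inside a ~1 ms attack at 44.1 kHz?
sample_period_us = 1e6 / 44_100          # ~22.7 microseconds between samples
attack_ms = 1.0                          # assumed attack time for a fast percussive hit
print(f"sample period: {sample_period_us:.1f} us")
print(f"samples within a {attack_ms:.0f} ms attack: {attack_ms * 1e-3 * 44_100:.0f}")
# -> roughly 44 samples across the attack
```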

16/44.1 PCM isn't compressed. You're talking about something completely different than the subject of this test.

You might want to read 71dB's post just above.
 
Mar 22, 2023 at 1:28 PM Post #14 of 67

How does adding dither compensate for quantization distortion? In easy to understand terms, thank you.
 
Mar 22, 2023 at 2:44 PM Post #15 of 67
How does adding dither compensate for quantization distortion? In easy to understand terms, thank you.
Let's imagine you work part-time in a job that pays you $940 per month, but the employer can only pay you in cash with $100 bills. (The $100-bill restriction is like 16 bit audio; 24 bit audio would be like having all bills and notes available, so you could be paid exactly $940.) Without dither, the $940 would be rounded to the nearest multiple of $100, and that would suck for you because you'd get paid only $900 every month. One solution is that you get $1000 40% of the time and $900 60% of the time: after 5 months you have been paid $900 + $900 + $900 + $1000 + $1000 = $4700 = $940 * 5. All good, but what about raises? At some point you may make $956, for example, and the earlier fixed scheme doesn't work anymore. And what if the number of hours you work changes from month to month? Is there a method that evens out the salary in the long run even when the salary changes? Yes! It is dithering. In this case it would work like this:

Every month your employer uses a random number generator to come up with a random number between -$50 and +$50 and adds it to your salary before rounding to the nearest multiple of $100. On average nothing gets added, because random numbers between -$50 and +$50 average to zero statistically.

The genius of this method is that the salary dictates the probability of the salary + random number getting rounded "up" or "down". If your salary rises a bit, the probability of it getting rounded "up" (say to $1000) increases. (For $940, you get $1000 whenever the random number is at least +$10, which happens 40% of the time, so the long-run average is exactly $940.) There is really no "rounding error" (quantization error/distortion) because it gets evened out statistically. Instead we have the dithering noise (the -$50 to +$50 added randomly). Since we have tens of thousands of samples per second in digital audio, dither evens out rounding errors within every small fraction of a second (audibly there is no rounding error).

Hopefully this was in easy to understand terms.
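
If anyone wants to see the numbers, here is a tiny simulation of that scheme (my own sketch in plain Python, not part of Archimago's test):

```python
# Simulate the salary analogy: $940 owed each month, only $100 bills available.
import random

salary, bill, months = 940, 100, 100_000

plain = round(salary / bill) * bill                      # plain rounding: always $900
dithered_avg = sum(
    round((salary + random.uniform(-bill / 2, bill / 2)) / bill) * bill
    for _ in range(months)
) / months

print(f"plain rounding, every month:  ${plain}")
print(f"dithered rounding, average:   ${dithered_avg:.2f}")   # converges to ~$940
```

Change salary to $956 (or anything else) and the dithered average still lands on the true value, which is the whole point of the analogy.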
 
