24bit vs 16bit, the myth exploded!
Mar 15, 2021 at 11:17 AM Post #6,106 of 7,175
The highest note that has supposedly ever been sung is G10 at 25.1 kHz. Most music won't go quite that high though. :) For comparison, a typical soprano part rarely goes above "soprano C", which is C6 at a mere 1.0 kHz! Falsetto and coloratura singers can go higher than this though.

Most of the info above a certain frequency in the treble range will generally be overtones and timbral info, I believe, as opposed to actual "notes".
You must mean Georgia Brown?



I recorded her high singing and checked it out in Audacity. The fundamental of her singing goes up to about 3.8 kHz, which is impressive (not the kind of "brown note" we usually talk about), but far from the ridiculous G10 claim. Maybe they mean significant harmonic overtones go up to 25.1 kHz? I used a linear frequency scale, because it works nicely here.

Georgia Brown note.png
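For anyone who wants to sanity-check those numbers, here's a minimal Python sketch: the note-to-frequency math is just standard scientific pitch notation (A4 = 440 Hz), and the spectrogram part assumes a hypothetical "clip.wav" export of the recording.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

# Scientific pitch notation: A4 = 440 Hz, so C0 = 440 * 2^(-4.75) ~= 16.35 Hz.
C0 = 440.0 * 2 ** (-4.75)
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_to_freq(name: str, octave: int) -> float:
    """Frequency in Hz of a note in scientific pitch notation."""
    return C0 * 2 ** (octave + NOTES.index(name) / 12)

print(f"C6 (soprano C)  = {note_to_freq('C', 6):8.1f} Hz")   # ~1046.5 Hz
print(f"G10 (the claim) = {note_to_freq('G', 10):8.1f} Hz")  # ~25087.7 Hz

# Spectrogram with a linear frequency axis, as in the screenshot above.
# "clip.wav" is a stand-in name for your own recording of the passage.
rate, data = wavfile.read("clip.wav")
mono = data[:, 0] if data.ndim > 1 else data
freqs, times, power = spectrogram(mono, fs=rate, nperseg=4096)
print(f"Strongest bin at {freqs[np.argmax(power.max(axis=1))]:.0f} Hz")
```

So G10 lands at ~25.1 kHz only in scientific pitch notation, which is presumably where the record claim gets its number.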
 
Mar 15, 2021 at 12:41 PM Post #6,107 of 7,175
You tell me. He was arguing here that, because of his training, he could hear the difference between 16 and 24 bit. He posted foobar2000 results to prove it. I asked him if he had changed the volume levels as he listened, and he talked around my question, so I got more pointed and asked him if he gain rode the fade-outs. He dodged it over and over, then got mad and went back to his own forum, where he could delete pesky questions like mine.

I think he was trying to impress us with his superhuman ears, and when I pointed out that a noise floor below -90 dB isn't below -90 dB any more once you turn the volume up by 50 dB, he didn't want to be challenged on it and just refused to answer. That guy has an agenda.

ADUHF, I said that the only purpose of 24 bit is to provide a deeper noise floor for sound processing in mixing and mastering. When you sit in your living room listening to Beethoven on a commercially recorded CD, there is absolutely no reason for it, and consumer sound processing doesn't get anywhere near making any difference either. 12 bit is sufficient for listening to music in the home; 16 bit is overkill, with plenty of headroom to spare.
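To put rough numbers on both points: the theoretical dynamic range of linear PCM is about 6.02 dB per bit (plus ~1.76 dB for a full-scale sine), and gain riding simply subtracts whatever you boost from the effective noise floor. A minimal sketch, using the -90 dB / +50 dB figures from the exchange above:

```python
# Ideal linear-PCM dynamic range (full-scale sine vs. quantization noise):
# SNR ~= 6.02 * N + 1.76 dB for N bits.
def pcm_dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for bits in (12, 16, 24):
    print(f"{bits:2d}-bit: ~{pcm_dynamic_range_db(bits):.1f} dB")
# 12-bit: ~74.0 dB   16-bit: ~98.1 dB   24-bit: ~146.2 dB

# Gain riding: boost a quiet passage by 50 dB and a floor that sat
# below -90 dBFS is now effectively only -40 dB down -- which is why
# that test says nothing about normal listening levels.
floor_db, boost_db = -90.0, 50.0
print(f"Effective floor after boost: {floor_db + boost_db:.0f} dB")
```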

The only way to understand what is important and what isn't is to take some music tracks, run them through different kinds of degradation, and hear the effect in real-world applications. Ethan Winer does that in the videos in my sig file: he takes a horrible buzzing noise, mixes it into music, and drops it 10 dB at a time. Take a guess at what level you can't hear it any more under the music... I think you will be very surprised. You can download his files and listen to them for yourself.
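If you'd rather rebuild that kind of test than download the files, here's a rough sketch of the idea (not Ethan's actual script; it assumes numpy plus the soundfile package and a hypothetical "music.wav" input):

```python
import numpy as np
import soundfile as sf  # pip install soundfile

music, rate = sf.read("music.wav")  # hypothetical source track, floats in [-1, 1]
t = np.arange(len(music)) / rate
buzz = np.sign(np.sin(2 * np.pi * 120 * t))  # harsh 120 Hz square-wave buzz
if music.ndim > 1:
    buzz = buzz[:, None]                     # broadcast mono buzz to stereo

# Render the buzz under the music at 0, -10, -20, ... -60 dB and listen
# for the level at which it vanishes under the program material.
for level_db in range(0, -70, -10):
    gain = 10 ** (level_db / 20)
    mix = np.clip(music + gain * buzz, -1.0, 1.0)
    sf.write(f"mix_buzz_{-level_db:02d}dB.wav", mix, rate)
```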

Without actually listening, specs are just abstract numbers on a page. Better numbers are better sound, right? Not always. To understand them with perspective, you need to translate those numbers to actual sound in a real world application. Then you know what -1dB sounds like as opposed to -10dB or -100dB. Numbers represent sound, but not always in an intuitive way. More is not always better. There is such a thing as good enough for human ears.

Dynamic expansion doesn't produce artifacts because of bit depth changes. It creates artifacts because there are many ways for the sound engineer who mixed the track to compress music: multiple variables, different compressor designs, and different elements in the mix that can each be compressed individually. Undoing it is like using a key to open a lock. If you don't know the exact kind of compression that was applied, in the exact amount, on the exact track, at the exact point in the timeline, you can never expand it back properly. You can only take a stab at it in one dimension across the whole track, and the more you expand, the more artifacts you are going to get.
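To illustrate that key-and-lock point with a toy example: even with the simplest static compressor, a naive expander only undoes the compression exactly when the guessed threshold and ratio match what was actually used. A sketch (real mixes use time-varying, per-track compression, which is far harder to guess):

```python
import numpy as np

def compress(x, threshold_db, ratio):
    """Static per-sample compressor: levels above the threshold are scaled down."""
    level = 20 * np.log10(np.maximum(np.abs(x), 1e-9))
    over = np.maximum(level - threshold_db, 0.0)
    return x * 10 ** (-over * (1 - 1 / ratio) / 20)

def expand(x, threshold_db, ratio):
    """Exact inverse of compress() -- but only for the right parameters."""
    level = 20 * np.log10(np.maximum(np.abs(x), 1e-9))
    over = np.maximum(level - threshold_db, 0.0)
    return x * 10 ** (over * (ratio - 1) / 20)

t = np.linspace(0, 1, 44100)
x = np.exp(-3 * t) * np.sin(2 * np.pi * 220 * t)   # decaying 220 Hz tone
y = compress(x, threshold_db=-20, ratio=4)

print(np.max(np.abs(expand(y, -20, 4) - x)))  # ~0: right key, clean undo
print(np.max(np.abs(expand(y, -24, 3) - x)))  # large: wrong guess = artifacts
```

With the right parameters the round-trip error is at floating-point level; with a wrong guess the envelope comes back distorted, which is exactly the artifact problem described above.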

I don't think Amir's website was nearly as popular back then as it is today; he was still trying to draw in an audience and memberships. I do recall parts of these discussions here and also over at Hydrogenaudio. The testing method was pathological in that the volume on a quiet part of the music was turned up so high that, had the music been played in its entirety at that setting, it would have caused temporary hearing loss or an audible threshold shift, making it impossible to hear the difference without resting for a while in a quiet environment.

It wasn't only that the volume was increased significantly on a quiet section of the music, but that the test was only being done on a small section of it. His logs showed rapid switching back and forth over a 1-2 second part of the music. Once it was admitted that it would probably not be possible to identify any difference under normal listening conditions, the rest was just the typical back-and-forth exchanges where something was conflated and applied to something unrelated.
 
Mar 15, 2021 at 1:36 PM Post #6,108 of 7,175
The testing method was pathological in that the volume on a quiet part of the music was turned up so high that, had the music been played in its entirety at that setting, it would have caused temporary hearing loss or an audible threshold shift, making it impossible to hear the difference without resting for a while in a quiet environment.

It wasn't only that the volume was increased significantly on a quiet section of the music, but that the test was only being done on a small section of it. His logs showed rapid switching back and forth over a 1-2 second part of the music. Once it was admitted that it would probably not be possible to identify any difference under normal listening conditions, the rest was just the typical back-and-forth exchanges where something was conflated and applied to something unrelated.
If you use a powerful magnifying glass on an 8K video picture, you can clearly see the individual pixels and incorrectly conclude that a 16K video format is needed for normal viewing. The size of the magnifying glass you need tells you how much overkill/safety margin you have. In that sense these extreme tests are insightful, if you can interpret them correctly.

Why can't people just enjoy music instead of coming up with tests that contradict every practical listening scenario to reveal "weaknesses" of 44.1 kHz/16 bit? Well, I know why. Money. Money becomes a monster if we allow it to become one.
 
Mar 16, 2021 at 4:05 PM Post #6,109 of 7,175
Once it was admitted that it would probably not be possible to identify any difference under normal listening conditions, the rest was just the typical back-and-forth exchanges where something was conflated and applied to something unrelated.

He never admitted that to me. I specifically asked him if he was looping quiet sections and gain riding, and he refused to admit it, saying his “training” was responsible for his ability to hear the difference.
 
Mar 17, 2021 at 3:38 AM Post #6,110 of 7,175
If you use a powerful magnifying glass on an 8K video picture, you can clearly see the individual pixels and incorrectly conclude that a 16K video format is needed for normal viewing. The size of the magnifying glass you need tells you how much overkill/safety margin you have. In that sense these extreme tests are insightful, if you can interpret them correctly.

Why can't people just enjoy music instead of coming up with tests that contradict every practical listening scenario to reveal "weaknesses" of 44.1 kHz/16 bit? Well, I know why. Money. Money becomes a monster if we allow it to become one.
Video producer/nerd here... so I feel obligated to pull in the realities of video standards. 4K is going to be the top video source for a while. Almost all movies were shot on 35mm (popular titles have been restored in 4K over the last 15 years), and only a few on 70mm (restored in 8K). The jump to 4K/UHD wasn't just resolution but a better color space and dynamic range, and for me that is the greater leap: Dolby Vision grading is a real step up on a TV that supports it.

There is no consumer 8K video standard (there are displays with 8K outputs for games), and I'm really dubious it will become one for many years yet; it's only in the last couple of years that Hollywood has started producing movies with a 4K digital intermediate.

Also, when it comes to detail on a TV, a lot has to do with the pixels. I have seen comparison videos of 4K OLED vs. 8K QLED, and the OLED still wins for perceived detail because each pixel is more defined. (Also, maybe there's simply no native 8K in a distributable format.)
 
Mar 17, 2021 at 4:47 AM Post #6,111 of 7,175
Photo analogies don’t always relate well to sound. Neither do cars and wine.
 
Mar 17, 2021 at 8:18 AM Post #6,113 of 7,175
Photo analogies don’t always relate well to sound. Neither do cars and wine.
Yeah, they don't, but I used the magnifying glass as a visual analogy for turning up the volume on quiet parts...

--------------------------------------

About 8K video: 2K Blu-rays (when done decently) are good enough for me. I don't have 4K gear. For me the problem is the availability of movies on physical media, not the resolution or color spaces.
 
Mar 18, 2021 at 8:56 PM Post #6,114 of 7,175
Video producer/nerd here... so I feel obligated to pull in the realities of video standards. 4K is going to be the top video source for a while. Almost all movies were shot on 35mm (popular titles have been restored in 4K over the last 15 years), and only a few on 70mm (restored in 8K). The jump to 4K/UHD wasn't just resolution but a better color space and dynamic range, and for me that is the greater leap: Dolby Vision grading is a real step up on a TV that supports it.

There is no consumer 8K video standard (there are displays with 8K outputs for games), and I'm really dubious it will become one for many years yet; it's only in the last couple of years that Hollywood has started producing movies with a 4K digital intermediate.

Also, when it comes to detail on a TV, a lot has to do with the pixels. I have seen comparison videos of 4K OLED vs. 8K QLED, and the OLED still wins for perceived detail because each pixel is more defined. (Also, maybe there's simply no native 8K in a distributable format.)

8K is included in the Rec. 2020 UHD spec. And there are already streaming services which offer some 8K content, including YouTube. Physical media seems a bit unlikely for the near term, though. (If it picks up overseas, who knows.)

4K UHD player sales peaked a few years ago and have been declining since. And better compression would also be needed to squeeze 8K files onto current UHD discs.
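For what it's worth, the arithmetic behind the disc-capacity worry is simple: at the same bit depth, frame rate, and codec efficiency, bitrate scales with pixel count, and 8K has exactly four times the pixels of 4K.

```python
pix_4k = 3840 * 2160    #  8,294,400 pixels per frame
pix_8k = 7680 * 4320    # 33,177,600 pixels per frame
print(pix_8k / pix_4k)  # 4.0 -> ~4x the bitrate for the same quality per pixel
```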

Also, when it comes to detail with a TV, a lot has to do with pixels. I have seen comparison videos of 4K OLED vs 8K QLED, and the OLED still wins for perceived detail because each pixel is more defined.

Folks said the same about plasma vs. LCD. OLED, which is emissive like plasma, has a superior contrast ratio and viewing angle to most LCD techs I've seen, so it would be my choice for those reasons. I believe burn-in and uneven wear are still a little more of an issue with OLED than with LCD, though, so that might be another factor to consider. I'm not sure which has the wider color gamut, QLED or OLED. Contrast is usually king, though, when it comes to these types of video display comparisons.

QLED is a form of LCD, btw, for those who don't know.
 
Mar 18, 2021 at 8:57 PM Post #6,115 of 7,175
You must mean Georgia Brown?



I recorded her high singing and checked it out in Audacity. The fundamental of her singing goes up to about 3.8 kHz, which is impressive (not the kind of "brown note" we usually talk about), but far from the ridiculous G10 claim. Maybe they mean significant harmonic overtones go up to 25.1 kHz? I used a linear frequency scale, because it works nicely here.



^Exactly what I was referring to.

That's an amazing image of the recording, btw!
 
Mar 18, 2021 at 9:09 PM Post #6,116 of 7,175
ADUHF, I said that the only purpose of 24 bit is to provide a deeper noise floor for sound processing in mixing and mastering. When you sit in your living room listening to Beethoven on a commercially recorded CD, there is absolutely no reason for it, and consumer sound processing doesn't get anywhere near making any difference either. 12 bit is sufficient for listening to music in the home; 16 bit is overkill, with plenty of headroom to spare.

The only way to understand what is important and what isn't is to take some music tracks, run them through different kinds of degradation, and hear the effect in real-world applications. Ethan Winer does that in the videos in my sig file: he takes a horrible buzzing noise, mixes it into music, and drops it 10 dB at a time. Take a guess at what level you can't hear it any more under the music... I think you will be very surprised. You can download his files and listen to them for yourself.

Without actually listening, specs are just abstract numbers on a page. Better numbers are better sound, right? Not always. To understand them with perspective, you need to translate those numbers to actual sound in a real world application. Then you know what -1dB sounds like as opposed to -10dB or -100dB. Numbers represent sound, but not always in an intuitive way. More is not always better. There is such a thing as good enough for human ears.

Dynamic expansion doesn't produce artifacts because of bit depth changes. It creates artifacts because there are many ways for the sound engineer who mixed the track to compress music: multiple variables, different compressor designs, and different elements in the mix that can each be compressed individually. Undoing it is like using a key to open a lock. If you don't know the exact kind of compression that was applied, in the exact amount, on the exact track, at the exact point in the timeline, you can never expand it back properly. You can only take a stab at it in one dimension across the whole track, and the more you expand, the more artifacts you are going to get.

Hmm...

I think I agree with the basic thrust of what you're saying here. But I'm not sure it really addresses my question re bit depth, and changes to volume or dynamic range.

I obviously don't have the technical knowledge to debate the question intelligently though. So I suggest we call it a draw. :)

2k is also good enough for me on video, since that's all my little Samsung and BD player currently support.
 
Mar 19, 2021 at 2:22 PM Post #6,118 of 7,175
I think I agree with the basic thrust of what you're saying here. But I'm not sure it really addresses my question re bit depth, and changes to volume or dynamic range. I obviously don't have the technical knowledge to debate the question intelligently though. So I suggest we call it a draw. :)

There is a good article linked in my sig about high data rate audio. It's called CD Sound Is All You Need. It's got a lot of useful info.
 
Mar 19, 2021 at 2:55 PM Post #6,119 of 7,175
8K is included in the Rec. 2020 UHD spec. And there are already streaming services which offer some 8K content, including YouTube. Physical media seems a bit unlikely for the near term, though. (If it picks up overseas, who knows.)

4K UHD player sales peaked a few years ago and have been declining since. And better compression would also be needed to squeeze 8K files onto current UHD discs.



Folks said the same about plasma vs. LCD. OLED, which is emissive like plasma, has a superior contrast ratio and viewing angle to most LCD techs I've seen, so it would be my choice for those reasons. I believe burn-in and uneven wear are still a little more of an issue with OLED than with LCD, though, so that might be another factor to consider. I'm not sure which has the wider color gamut, QLED or OLED. Contrast is usually king, though, when it comes to these types of video display comparisons.

QLED is a form of LCD, btw, for those who don't know.
You're missing my point that the content is not 8K. I know video codecs have provisions for 8K, and there are digital cameras that can record 8K video. But my point was that cinema movies have only recently been edited in 4K (you have to factor in all the computing power needed to render so many layers of video cuts and VFX composites). Gaming seems like the main category that can easily go 8K.

There may be some on YouTube recording in 8K and trying to post it; I have seen other YouTubers try 4K, decide it's not worth the extra effort for their content, and stay at 2K. YouTube and Vimeo are the main services that allow 8K; all the other services with their own content (commercial TV or movies) are 2K and/or 4K for optimal bandwidth.

Cinema content is not tied to physical discs: it starts with the studio's digital intermediate and then goes out to distribution, to cinemas and home media (and with home media, 4K streaming is becoming more popular... HBO is finally getting on board, with some new content in 4K Dolby Vision/Dolby Atmos).

When it comes to the color gamut of a display, one shouldn't just generalize by display type, as it really depends on the quality of the panel and processor. However, Tom's Hardware seems to indicate that Sony's Master Series OLEDs have very wide gamuts. OLED is my main display since I bought it for movies. I've since gotten a PS5, and I don't believe I'll have issues with burn-in, as manufacturers have done enough to prevent it (save if you were watching a news network 24/7 with the same graphics on screen); there's pixel shift to handle a HUD staying on screen. RTINGS also has long-term burn-in tests with OLEDs, and it's the TV networks with continual graphics (running non-stop) that start to show significant burn-in.
 
Mar 19, 2021 at 3:24 PM Post #6,120 of 7,175
4K is perfectly capable of producing an image as good as film in theatrical projection. It's primarily a format for theatrical distribution and digitizing and archiving films for preservation purposes. 8K exists, but it is intended for films shot in large formats like 70mm, Cinerama, etc.

For the home, if you sit at the recommended seating distance, there really isn't any need for anything over 1080p or 2K (which are basically the same). The only benefit of 4K in the home is being able to get up and look closely at the screen.
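For anyone curious about the geometry behind that claim, here's a quick sketch. 20/20 acuity resolves roughly one arcminute, i.e. about 60 pixels per degree; the screen width and seating distance below are assumed examples, not a recommendation.

```python
import math

def pixels_per_degree(h_pixels: int, screen_width_m: float, distance_m: float) -> float:
    """Horizontal pixels packed into one degree of visual angle at the viewer."""
    screen_degrees = 2 * math.degrees(math.atan(screen_width_m / (2 * distance_m)))
    return h_pixels / screen_degrees

# Example: a 65" 16:9 screen is ~1.44 m wide; assume ~2.7 m seating distance.
for h_pixels, label in ((1920, "1080p"), (3840, "4K"), (7680, "8K")):
    ppd = pixels_per_degree(h_pixels, 1.44, 2.7)
    print(f"{label:>5}: {ppd:5.1f} px/deg (20/20 acuity ~ 60 px/deg)")
```

At that distance 1080p already lands right around the acuity limit, which is why the extra pixels only show up when you walk up close to the screen.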

And like audio, resolution isn't the real determiner of image quality. I have DVDs that are mastered better than Blu-rays.
 
