24bit vs 16bit, the myth exploded!
Dec 13, 2019 at 8:50 AM Post #5,386 of 7,175
Anyway, the Dell monitors are calibrated at the factory (they even come with a unique printout showing the results), so any additional tweaking would be a marginal improvement at best.

Even so, Dell still does not know what environment these monitors will ultimately be used in. A dark bedroom or a sunny office each has an impact on the appearance of the image.

Brightness and contrast should still be calibrated for local lighting conditions, even if color and grayscale are certified and can usually be left alone.
 
Dec 13, 2019 at 9:38 AM Post #5,388 of 7,175
[1] I wasn't talking about resolution. Dynamic range is a separate subject.
[2] With digital imaging, resolution is the number of pixels or dots in a given area. ... There is no such thing as infinite resolution: if you were able to go up close to a billboard (which can be as little as 20dpi), you would start seeing softness before just seeing individual points.
[3] Also, content creators need a larger dynamic range in their source files to be able to pull up detail in shadows or recover blown highlights (especially when converting HDR images to monitor color spaces). Try post-processing an image that doesn't have enough DR, and you'll either see noise or black splotches in the shadows, or completely white splotches in the highlights.
[4] I realize there is not a 1:1 comparison between digital sound reproduction and image reproduction. However, it is valid to understand the respective technologies and see what analogies there are.

It's possible I have misinterpreted your post and am therefore going to inadvertently misrepresent it. If that's the case, I apologise in advance but it's still worthwhile as it goes to the heart of the OP:

1. In the case of digital audio, resolution and dynamic range are effectively exactly the same thing.

2. This really is a fundamental difference between digital imaging and digital audio! With digital imaging (as I understand it) we have a fixed output: an image is recreated using a fixed number of pixels, which when recreated correspond to (for example) a fixed number of LEDs. The more pixels/LEDs, the higher the resolution, but obviously we cannot have an infinite number of pixels, as that would require an infinite amount of data, and we cannot have an infinite number of LEDs, as that's a physical impossibility. Therefore with digital imaging, as you say, "There's no such thing as infinite resolution"; the only question is how many points (pixels/LEDs) we have and under what conditions that number exceeds the capabilities of the human eye. This is completely different to how digital audio works: the analogue signal reconstructed from digital audio data does not have ANY fixed points, is not reproduced by a finite array of, say, LEDs, and having more fixed data points (the equivalent of pixels) does not have any effect on resolution. Therefore, there IS "such a thing as infinite resolution" in digital audio; in fact, the whole principle of digital audio is based on infinite resolution (Shannon/Nyquist)!

Despite my ignorance of digital imaging, I'll attempt an analogy: let's say we have a perfect circle which we want to capture and reproduce as a display graphic. We can capture/measure various points on the circumference of our circle as pixel data and then output that pixel data to, say, LEDs. The more pixels and corresponding LEDs we have, the more accurate (higher resolution) our reproduced circle will be. There is another way though: we can measure just three points on the circumference of our circle, which we store as data, mathematically define the one perfect circle that passes through those points, and we now have infinite resolution! As I understand it, these two methods describe raster graphics and vector graphics respectively, and digital audio is analogous to vector graphics, not raster graphics. This analogy fails in the final step though, because all displays have a fixed/finite number of LEDs, so our vector graphic has to be rasterised accordingly and we're stuck with the finite resolution defined by the number/density of LEDs. With sound/audio reproduction, however, we do not have LEDs (or any audio equivalent), so we can in effect directly output that vector graphic without rasterisation, thereby maintaining infinite resolution! The limiting factor with sound capture and reproduction is therefore not digital audio but the laws of physics pertaining to the analogue input and output signals (e.g. thermal noise and transducer inefficiency).
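To make that "vector-like" behaviour concrete, here's a minimal Python sketch (my own illustration, with a made-up test tone and function names, not anyone's production code) of Whittaker-Shannon sinc interpolation: from a block of stored samples you can evaluate the reconstructed band-limited waveform at any instant you like, not just at the sample points.

```python
import numpy as np

fs = 48_000        # sample rate (Hz)
f0 = 1_000         # test tone frequency (Hz), well below Nyquist
n = np.arange(64)  # a short block of sample indices
samples = np.sin(2 * np.pi * f0 * n / fs)  # the stored "pixels" of audio

def reconstruct(t, samples, fs):
    """Whittaker-Shannon interpolation: value of the band-limited signal
    at an arbitrary time t (seconds), rebuilt from its samples."""
    idx = np.arange(len(samples))
    return np.sum(samples * np.sinc(fs * t - idx))

# Evaluate BETWEEN the stored samples - any instant is available.
t = 10.25 / fs                      # a quarter of the way between samples 10 and 11
print(reconstruct(t, samples, fs))  # close to the ideal value below...
print(np.sin(2 * np.pi * f0 * t))   # ...limited only by the finite block length
```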

3. Again, this is a massive difference between digital audio and digital imaging. 32bit float audio post processing (mixing) has been around for 20+ years and could theoretically encode a dynamic range of 1673dB. However, as a sound wave is the compression and rarefaction (variation in pressure) of air molecules, at 194dB the rarefaction portion of the wave would be a total vacuum, and as we can't have more than a total vacuum, we can never have a sound wave greater than 194dB; beyond that point we can only have a shock wave. Being even more silly, a shock wave of 1100dB would so massively compress the air molecules that a (5kg mass) black hole would form! In digital audio, our processing environment massively exceeds what can ever actually exist in the real world, let alone what transducers or human senses are capable of.
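For anyone who wants to check the arithmetic behind those figures, here's a small sketch using the standard textbook formulas (the printed values are approximate and not specific to any particular converter):

```python
import math

def fixed_point_dr_db(bits):
    """Theoretical dynamic range of N-bit fixed-point PCM (no dither or noise shaping)."""
    return 20 * math.log10(2 ** bits)

print(fixed_point_dr_db(16))  # ~96 dB
print(fixed_point_dr_db(24))  # ~144 dB

# The ~194dB ceiling: SPL at which a sine wave's rarefaction reaches a total vacuum,
# i.e. a peak pressure of one standard atmosphere, referenced to 20 micropascals.
atm_pa = 101_325.0
p_ref = 20e-6
print(20 * math.log10(atm_pa / p_ref))  # ~194 dB SPL
```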

4. In theory I would agree but in practice, as they're quite different, complex technologies, it's difficult to understand both of them and therefore analogies between them tend to be either invalid or only valid up to a point (and are then invalid again). So even in the latter case, when an analogy might be a useful aid to explaining/understanding a specific aspect of digital audio, it's more than likely to lead to a misunderstanding of other aspects and of digital audio as a whole.
[1] We haven't reached the DR limits of vision with imaging, and
[1a] there are very different applications where resolution comes into play. A 46-megapixel camera is overkill for an image that gets scaled down to, say, a 1024-pixel web image. But a person may still want that original size for making a high-quality print at a large size.
[2] While imaging still has room for improvement, I would agree that we've reached the limits of fidelity with audio in relation to human perception...and I leave it to others who may want to expose themselves to sound peaks over 100dB, or who want to argue which processing method (1bit vs 16bit vs 24bit) is "best".
1. This is largely addressed by my point 3 above: we exceeded the DR limits of hearing with digital audio decades ago, and 20+ years ago we exceeded what can even be reproduced according to the laws of physics.
1a. There's no analogy for this with audio!

2. I would point out that sound peaks and dynamic range (and therefore the number of bits) are unrelated. In the real world of music gigs, an audience member might get peak levels of 120dB at an exceptionally loud EDM/rock/pop gig (if they're close to the speakers) but a dynamic range of only 40dB, for which 8 bits or so would be sufficient, while at a large, loud symphony gig, sitting close to the orchestra, they might experience peaks of 96dB or so but a dynamic range of nearly 60dB, for which 11 bits or so would be sufficient. From the perspective of an audience member, there is no real-world music circumstance that exceeds (or even comes close to) the dynamic range of which 16bit is capable. In other words, 16bit digital audio already exceeds what actually exists in the real world (of music gigs), regardless of human perception!
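As a rough back-of-envelope for those bit counts (a sketch using the usual ~6dB-per-bit rule of thumb, nothing more precise than that):

```python
def bits_needed(dynamic_range_db):
    """Roughly how many bits cover a given dynamic range, at ~6.02 dB per bit."""
    return dynamic_range_db / 6.02

print(bits_needed(40))  # ~7  -> 8 bits or so comfortably covers a loud EDM/rock gig
print(bits_needed(60))  # ~10 -> 11 bits or so covers a loud symphony gig
print(bits_needed(96))  # ~16 -> the full range that 16bit offers
```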

G
 
Last edited:
Dec 13, 2019 at 12:29 PM Post #5,389 of 7,175
1. In the case of digital audio, resolution and dynamic range are effectively exactly the same thing.

Wow, and here I thought DR was about loudness, and resolution about frequency:confused:

The limiting factor with sound capture and reproduction is therefore not digital audio but the laws of physics pertaining to the analogue input and output signals (e.g. thermal noise and transducer inefficiency).

And even after my basic descriptions, you don't think photography has similar issues as well??? One limiting factor for DR at the camera level is getting the most out of the black point (where there's a cut-off at the noise floor) vs the white point (the saturation value at exposure). Different brands have different approaches to setting the black point and white point (based on their own sensor/ADC's SNR and full saturation point).
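For anyone curious how black point vs white point turns into an actual DR number, here's a small sketch using the usual engineering definition (full-well capacity over read noise); the 50,000 e- and 3 e- figures are purely illustrative, not any particular sensor's spec:

```python
import math

def sensor_dr(full_well_electrons, read_noise_electrons):
    """Engineering dynamic range of a sensor: ratio of the saturation level
    (white point) to the noise floor (black point), in stops and in dB."""
    ratio = full_well_electrons / read_noise_electrons
    return math.log2(ratio), 20 * math.log10(ratio)

stops, db = sensor_dr(50_000, 3.0)  # illustrative full-well and read-noise values
print(round(stops, 1), "stops,", round(db, 1), "dB")  # ~14.0 stops, ~84.4 dB
```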

3. Again, this is a massive difference between digital audio and digital imaging. 32bit float audio post processing (mixing) has been around for 20+ years and could theoretically encode a dynamic range of 1673dB.

I'm amazed how much you've written and how many posts you've taken out of context. Of course an audio file is not the same as an image file, but I would have hoped that you might have read enough of my posts to understand that there are many image formats that carry more data than a single set of R, G, B channels. A source audio file will have different tracks of audio recordings. The same can be true for photographs (as in different layers) and video (which can have different tracks of video or audio sources). A 3D scene file is quite different again: the file size itself can actually be pretty small, as it comprises a vector 3D mesh, procedural shaders, an animation graph (which only needs to plot changes in movement), animation scripts (which simulate complex animation at render time), and links to images for textures. An audio source, in comparison, can be larger, as it has plots of data for each track at a set time interval. That's ironic, given that 3D graphics can be far more processor-intensive to render than other media, even though the workflow with 3D projects is to render "passes" (separate objects and/or color or contrast settings). The final step in 3D projects is to composite all these passes (and possibly layer in video and audio sources).

Forgive me for nerding out, but I am passionate about cinematography. I learned from ILM that the basis of our current workflow of rendering in passes originates with the original Star Wars films, in which the first use of computers was to replay a prerecorded motion track for the cameras. ILM also funded the pioneers of digital imaging (Photoshop, and quite a few 3D technologies). By the 80s, ILM would even use this process of filming passes for TV broadcasts of Star Trek: The Next Generation. In a given scene in outer space, they would record different passes of the Enterprise model with just the internal lights, outer lights, set lights, or other objects in that frame. Given the requirements and time involved with TV in that era, the source VFX filmed in VistaVision would then be scanned in for analog video editing. Because the show shot its VFX on film, the remastered HD version looks great (CBS spent the money to find and scan all the VFX footage and composite it in digital HD).

It's also not haphazard that camera manufacturers chose the Bayer sensor pattern (1 red, 1 blue, 2 green), as it corresponds to component video space (where you have double the density of one channel for more efficient resolution and DR). Lastly, when it comes to 32bpc image formats, those are more specialized. People capturing images on their iPhone are not concerned with full 32bpc images. Since most photographs taken today are with default apps on cell phones, most image files are 8bpc JPEGs. It's the photographer who wants the full DR of a given scene (merging at least 3 different RAW exposures and then tone-mapping down to display DR), or the 3D artist using HDR textures in ray tracing (for realistic light simulation). I've also found that while 32bpc image files take up more space, they are noticeably faster to render (so with 3D processing, 32bit float doesn't just mean higher DR, but more efficient rendering).
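A toy illustration of the tone-mapping step mentioned above (my own example using the simple global Reinhard operator; real tools use far fancier local operators, but the idea is the same): compress scene-referred HDR values into the 0-1 range that an 8bpc display can actually show.

```python
import numpy as np

def reinhard_tonemap(hdr_luminance):
    """Global Reinhard operator: maps scene-referred luminance (any positive
    range) into [0, 1) so it can be quantised for an 8bpc display."""
    return hdr_luminance / (1.0 + hdr_luminance)

hdr = np.array([0.01, 0.5, 1.0, 10.0, 1000.0])  # merged 32bpc values spanning a wide DR
print(np.round(reinhard_tonemap(hdr) * 255))    # all of them now fit into 8-bit display codes
```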

I'm expanding on information about visual technology for those who are genuinely interested in the subject. Even consumers who have made the switch to HD, and now to UHD, probably do have a passing interest in how that visual content is created.
 
Last edited:
Dec 13, 2019 at 12:47 PM Post #5,390 of 7,175
Why wouldn't the depth of the noise floor be related to resolution? I can see someone comparing color depth to bit depth, but I can see bit depth being involved in pixel sharpness too. I think images and sound are apples and oranges personally. However you compare factors involved in resolution, it's going to be a stretch. I guess it just depends on which direction you stretch it.

I know people here love to use analogies, but when it starts getting into microwave ovens and CRT vs OLED and unladen swallows, I tend to glaze over. When you've accomplished the practical reality of perfect sound recording for the purposes of listening to music with human ears, what point is there of going further? I guess it's like nuclear bombs... Is it better to be able to blow up the world 12 times instead of just 3? OH DAMN! I JUST DID IT TOO! STOP ME BEFORE I ANALOGY AGAIN!
 
Last edited:
Dec 13, 2019 at 1:11 PM Post #5,391 of 7,175
Why wouldn't the depth of the noise floor be related to resolution? I can see someone comparing color depth to bit depth, but I can see bit depth being involved in pixel sharpness too. I think images and sound are apples and oranges personally. However you compare factors involved in resolution, it's going to be a stretch. I guess it just depends on which direction you stretch it.

I know people here love to use analogies, but when it starts getting into microwave ovens and nuclear bombs and CRT vs OLED and unladen swallows, I tend to glaze over. When you've accomplished the practical reality of perfect sound recording for the purposes of listening to music with human ears, what point is there of going further?

Your question seems to be about images. There is going to be some apples-to-oranges comparison with image formats vs sound. There is still some correlation in aspects like file compression introducing artifacts, and in how file structure depends on DR and resolution. But the fundamental point about DR vs resolution with imaging is that both can be factors in a high quality image...yet they are fundamentally different subjects. With exposure, the optimal quality in DR is one in which you don't see noise in your blackest shadow value and have a full range of contrast with no blown highlights. Resolution is about optimizing pixel pitch for your given situation (and file size will also increase with the resolution your intended image size requires). A higher quality, large-scale image viewed up close is going to require good resolution and DR for the best perceived detail. The catch is that an increase in DR and resolution requires more file space, and can be overkill if your final output is a small-resolution JPEG. One reason 4K video streaming is becoming a popular format is that a newer compression format (H.265) can reduce file size while still maintaining good quality UHD resolution, Dolby Vision color space, and DD+ Atmos standards at common broadband speeds.
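To give a feel for why DR (bit depth) and resolution both drive file size, here's a toy calculation of uncompressed frame sizes, before any codec like H.265 gets involved (the resolutions and bit depths are just examples):

```python
def raw_frame_bytes(width, height, channels=3, bits_per_channel=8):
    """Uncompressed size of one video frame or still image, in bytes."""
    return width * height * channels * bits_per_channel // 8

print(raw_frame_bytes(1920, 1080, 3, 8) / 1e6)   # ~6.2 MB per frame (HD, 8bpc)
print(raw_frame_bytes(3840, 2160, 3, 10) / 1e6)  # ~31 MB per frame (UHD, 10bpc HDR)
```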
 
Last edited:
Dec 13, 2019 at 1:52 PM Post #5,392 of 7,175
Wow, and here I thought DR was about loudness, and resolution about frequency
Maybe you've learnt something then? DR is not about loudness, as I stated in my previous post. DR in the digital domain is defined by bit depth, because the resolution of a given bit depth defines the digital noise floor, as explained in the OP. The frequency range is defined by the sampling rate, but provided the sample rate is at least twice the highest audio frequency, resolution is effectively infinite.
And even after my basic descriptions, you don't think photography has similar issues as well???
Not according to your descriptions! According to your descriptions, even the highest bit depths and UHD video formats can't represent the full range the human eye is capable of, let alone levels trillions of times greater than any level at which a sound wave can even exist!

G
 
Dec 13, 2019 at 2:01 PM Post #5,394 of 7,175
The real limitation of electronics is about 20 bits anyway; you're not getting anything accurate beyond that as far as dynamic range goes, as the noise inherent in electronics and capacitors is at that level. The rest is just noise.
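A quick sketch of where a figure like that comes from (illustrative values only: a 1kohm source impedance, 20kHz bandwidth and a 2V rms full-scale level; real circuits have many more noise sources, which is why ~20 bits is a reasonable practical ceiling):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant (J/K)

def johnson_noise_vrms(resistance_ohm, bandwidth_hz, temp_k=300.0):
    """RMS thermal (Johnson-Nyquist) noise voltage of a resistance."""
    return math.sqrt(4 * k_B * temp_k * resistance_ohm * bandwidth_hz)

v_noise = johnson_noise_vrms(1_000, 20_000)         # ~0.58 microvolts rms
full_scale_vrms = 2.0                                # illustrative analogue stage level
dr_db = 20 * math.log10(full_scale_vrms / v_noise)   # ~131 dB
print(dr_db, dr_db / 6.02)                           # roughly 21-22 "useful" bits at best
```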
 
Dec 13, 2019 at 2:07 PM Post #5,395 of 7,175
The thing about sound and images that does correlate is that, for the intended use of the file, there is a point of transparency where more zeros and ones aren't going to do anything but increase file size. That applies to compression too. If your purpose is to look at a picture on your computer monitor, odds are a medium-sized JPEG that matches your screen resolution is all you need. You can increase the file size, but at your intended viewing distance with the entire image visible, it won't make a lick of difference. The same is true of audio. If you want to listen to a Mozart piano concerto in your living room, an AAC file at 256 VBR ripped from a CD is all you need.

The problem with audiophiles (and armchair photo nuts) is that they rarely take into account the intended purpose, they have no clue about the thresholds of human perception, and they keep creating "what ifs" in their heads to push the goalposts back over and over again.

The truth is that the quality of an image lies in its composition, lighting and exposure, not the file size. And the quality of recorded music depends on the musicality and musicianship of the musicians and the balance of the mix. We've gotten to the point where "good enough" is already far into the range of overkill. We should be happy with that, not trying to think up excuses why we might need more.

The part that can still use improvement is the last step in the line... the transducers and the display. Not the files themselves.
 
Last edited:
Dec 13, 2019 at 2:13 PM Post #5,397 of 7,175
Maybe you've learnt something then? DR is not about loudness, as I stated in my previous post. DR in the digital domain is defined by bit depth, because the resolution of a given bit depth defines the digital noise floor, as explained in the OP. The frequency range is defined by the sampling rate, but provided the sample rate is at least twice the highest audio frequency, resolution is effectively infinite.

I already knew about this, and it certainly was not based on your previous posts. Such as your last one, in which you claim audio DR relates to a single point in time rather than to the full system (where, for example, the accepted definition of DR for 16bit audio is 95dB).

Not according to your descriptions! According to your descriptions, even the highest bit depths and UHD video formats can't represent the full range the human eye is capable of, let alone levels trillions of times greater than any level at which a sound wave can even exist!

I should stop responding to you, as you're either being purposely obtuse or still not understanding some fundamentals of photography. The topic was the recorded DR of the source. With digital imaging, we're still limited by the realized DR of the conversion from light source to ADC...and then again from source file to final output. A captured exposure is limited to the range from an acceptable SNR up to the highest saturation point (and with the ADC, there are also issues of signal amplification). Refer back to previous posts and you'll see I stated that many digital cameras can now record 14bpc at base ISO (some are even better: the best RED cameras can actually record 16bpc). So how then are there photo formats that are 32bpc? It's because you can set at least 3 exposures (one "normal", one "over-exposed", one "under-exposed") and then merge them into a single frame. Because that process requires time, it's incompatible with situations like video (where you have to expose each frame at a certain fps).
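For anyone who wants to see the exposure-merge idea in code, here's a toy sketch (my own simplification, assuming a linear sensor response; real raw converters use calibrated response curves and smarter weighting): each bracketed frame is turned back into a relative-radiance estimate and the estimates are averaged, trusting mid-tones over clipped or noisy values.

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Toy HDR merge: each frame holds linear sensor values in [0, 1] of the
    same scene at a different exposure time (seconds). Values are converted
    to relative radiance (value / time) and combined with weights that favour
    mid-tones over the clipped or noisy extremes."""
    radiance = np.zeros_like(frames[0], dtype=np.float64)
    weights = np.zeros_like(frames[0], dtype=np.float64)
    for frame, t in zip(frames, exposure_times):
        w = 1.0 - 2.0 * np.abs(frame - 0.5)  # 1 at mid-grey, 0 at black/white clip
        radiance += w * (frame / t)
        weights += w
    return radiance / np.maximum(weights, 1e-6)

# Three bracketed "exposures" of the same two pixels (one dim, one very bright):
under = np.array([0.02, 0.55])   # 1/1000 s
normal = np.array([0.15, 0.98])  # 1/125 s (bright pixel nearly clipped)
over = np.array([0.70, 1.00])    # 1/30 s  (bright pixel fully clipped)
print(merge_exposures([under, normal, over], [1 / 1000, 1 / 125, 1 / 30]))
```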
 
Last edited:
Dec 13, 2019 at 2:17 PM Post #5,398 of 7,175
A noise floor of -95dB is never going to be audible under commercially recorded music. It is overkill for any normal purpose of listening I can think of. (and most abnormal ones too!)
 
Dec 13, 2019 at 2:47 PM Post #5,399 of 7,175
The audio and video analogies are crap because even if we carefully pick the variables and explain the context for both, those contexts will be different!
The very way the bits are used is different. In a picture, for a given color channel, the minimum and maximum values of however many bits you use will always mean the two extremes of the predefined spectrum! To put it in the most obvious and intuitive way for those who aren't familiar with the stuff, consider a digital B&W picture. There is only that one "color" channel, going from pure black to pure white. If you use 8 bits to give a pixel its "color", the max value of 255 will be pure white and zero pure black. If you use 1bit encoding, now 1 will be pure white and zero pure black. It's the same for each color channel and also the same idea for the total color spectrum (as each channel works that way, the sum of colors also does). The difference with more bits lies in how many increment values are available within that predetermined range.
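A quick toy example of that point (my own sketch): an N-bit image channel always spans the same black-to-white range, and extra bits only add steps in between.

```python
import numpy as np

def quantize_channel(values, bits):
    """Map values in [0.0, 1.0] onto an N-bit channel: code 0 is always pure
    black and the maximum code is always pure white, whatever N is."""
    levels = 2 ** bits - 1
    return np.round(values * levels) / levels

ramp = np.linspace(0.0, 1.0, 9)   # a black-to-white gradient
print(quantize_channel(ramp, 8))  # 256 possible steps: a smooth ramp
print(quantize_channel(ramp, 1))  # 2 possible steps: only black or white survive
```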

Have fun using that behavior to describe bits in PCM files. :imp:


@Davesrose . I'm sorry but the more stuff you bring up, the messier it looks.
It started with the 1bit DAC that sounded good to you (and probably does), which has really little to do with 16 or 24bit audio files. When discussing 16 vs 24bit PCM, it is obviously assumed that we're comparing files of similar sample rate. A one-bit DAC, like most DACs, won't keep the original sample rate, and will use and abuse noise shaping so that the absurdly low bit depth does not lead to an SNR of 6dB. None of this is mysterious in any way, but of course if we keep looking only at a bit value out of context while ignoring the rest, it looks weird or even impossible. And it's not like 32bit DACs are really 32bit, so to add to the confusion about how a DAC will decide to skin the PCM cat, we have marketing entering the fray while the chip sticks to just a handful of bits and runs as a typical delta-sigma DAC (like your 1bit DAC).
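For the curious, here's a toy sketch of the noise-shaping idea (a bare first-order delta-sigma loop of my own, nowhere near a production modulator): once the heavily oversampled 1-bit output is low-pass filtered, it tracks the input far better than a naive "1 bit = 6dB SNR" reading would suggest.

```python
import numpy as np

def first_order_delta_sigma(signal):
    """Toy 1-bit, first-order delta-sigma modulator. Assumes the input is
    heavily oversampled and stays within [-1, 1]. Output is a +/-1 stream
    whose low-frequency content tracks the input; the quantisation error is
    pushed up in frequency (noise shaping) where a filter can remove it."""
    out = np.empty(len(signal))
    integrator = 0.0
    feedback = 0.0
    for i, x in enumerate(signal):
        integrator += x - feedback
        out[i] = 1.0 if integrator >= 0.0 else -1.0
        feedback = out[i]
    return out

t = np.linspace(0.0, 1.0, 10_000)                # heavy oversampling of...
x = 0.5 * np.sin(2 * np.pi * 5 * t)              # ...a slow 5 Hz test tone
bits = first_order_delta_sigma(x)                # one bit per sample
recovered = np.convolve(bits, np.ones(200) / 200, mode="same")  # crude low-pass
print(np.max(np.abs(recovered[500:-500] - x[500:-500])))        # small residual error
```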
So your anecdote was fine, but it also wasn't actually saying anything at all about PCM bit depth. And since then we've had camera sensors, TV screens, processing, color channels, HDR... Is there a point to all this? Because to be clear, the argument isn't that we can't pull some analogies from the video world. Of course we can. It's that most of the time we shouldn't, to avoid misunderstandings. IMO that series of posts is a solid case for picking such anecdotes with more care. :stuck_out_tongue_winking_eye:
 
Dec 13, 2019 at 3:10 PM Post #5,400 of 7,175
The audio and video analogies are crap because even if we carefully pick the variables and explain the context for both, those contexts will be different!
The very way the bits are used is different. In a picture, for a given color channel, the minimum and maximum values of however many bits you use will always mean the two extremes of the predefined spectrum! To put it in the most obvious and intuitive way for those who aren't familiar with the stuff, consider a digital B&W picture. There is only that one "color" channel, going from pure black to pure white. If you use 8 bits to give a pixel its "color", the max value of 255 will be pure white and zero pure black. If you use 1bit encoding, now 1 will be pure white and zero pure black. It's the same for each color channel and also the same idea for the total color spectrum (as each channel works that way, the sum of colors also does). The difference with more bits lies in how many increment values are available within that predetermined range.

Have fun using that behavior to describe bits in PCM files. :imp:


@Davesrose . I'm sorry but the more stuff you bring up, the messier it looks.
It started with the 1bit DAC that sounded good to you (and probably does), which has really little to do with 16 or 24bit audio files. When discussing 16 vs 24bit PCM, it is obviously assumed that we're comparing files of similar sample rate. A one-bit DAC, like most DACs, won't keep the original sample rate, and will use and abuse noise shaping so that the absurdly low bit depth does not lead to an SNR of 6dB. None of this is mysterious in any way, but of course if we keep looking only at a bit value out of context while ignoring the rest, it looks weird or even impossible. And it's not like 32bit DACs are really 32bit, so to add to the confusion about how a DAC will decide to skin the PCM cat, we have marketing entering the fray while the chip sticks to just a handful of bits and runs as a typical delta-sigma DAC (like your 1bit DAC).
So your anecdote was fine, but it also wasn't actually saying anything at all about PCM bit depth. And since then we've had camera sensors, TV screens, processing, color channels, HDR... Is there a point to all this? Because to be clear, the argument isn't that we can't pull some analogies from the video world. Of course we can. It's that most of the time we shouldn't, to avoid misunderstandings. IMO that series of posts is a solid case for picking such anecdotes with more care. :stuck_out_tongue_winking_eye:

What started the latest round of picture analogies is my response to post number 1 on this very thread, where Gregorio claimed it's easy to see bit depth with images. My preface to that was that it might be common for older people who had experience with color spaces below 256 colors. From what I could tell (granted, I haven't read every page), this analogy was never fully addressed. From my engagement with Gregorio, I can tell he hasn't considered image capture or current photo and video formats...so while the topic isn't specifically sound related, I think other members might find my posts informative on the latest technologies in photography and video. If you want the thread to continue with debates about whether *this format* or *this mastering process* is best, then I'll let it be :blush:
 
