Why 24 bit audio and anything over 48k is not only worthless, but bad for music.
Feb 23, 2023 at 4:17 AM Post #3,511 of 3,525
I thought my linked video had a good overview of how there's different phase distortion with a live performance versus a transducer. And the only time you might perceive such distortion is with tone generators or in an anechoic chamber.
It depends on the phase distortion and how much there is of it. With transducers (in a room other than an anechoic chamber) and live sounds there is phase distortion and it is perceivable. If it weren't, you wouldn't be able to hear the difference between the acoustics of, say, a bathroom and a concert hall. In some cases, say a signal repeated a fraction of a second later, we can perceive a phase difference/distortion of below a millisecond. However, none of this (what I've just said or what was in the video) has any relevance to the OP: there is no phase difference/distortion in ADCs and there won't be any in any DAC (unless a particular DAC design deliberately adds it).
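Not from the post, just a quick numpy sketch of that delayed-copy case (sample rate and delay are arbitrary): summing a signal with a copy of itself 0.5 ms later behaves like a comb filter with deep notches well inside the audible band, which is one way to see why such a small delay is perceivable.

```python
import numpy as np

fs = 48_000                      # sample rate (Hz), chosen arbitrarily
delay_s = 0.0005                 # 0.5 ms between the signal and its repeat
delay_n = int(round(delay_s * fs))

# Impulse response of "signal + the same signal 0.5 ms later"
h = np.zeros(delay_n + 1)
h[0] = 1.0
h[delay_n] = 1.0

# Its magnitude response is a comb: notches near odd multiples of 1/(2*delay) = 1 kHz
H = np.fft.rfft(h, n=8192)
freqs = np.fft.rfftfreq(8192, d=1 / fs)
mag_db = 20 * np.log10(np.abs(H) + 1e-12)
for f in (1_000, 3_000, 5_000):
    i = int(np.argmin(np.abs(freqs - f)))
    print(f"{freqs[i]:7.1f} Hz: {mag_db[i]:6.1f} dB")
```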

G
 
Feb 23, 2023 at 4:31 AM Post #3,512 of 3,525
It depends on the phase distortion and how much there is of it. With transducers (in a room other than an anechoic chamber) and live sounds there is phase distortion and it is perceivable. If it weren't, you wouldn't be able to hear the difference between the acoustics of, say, a bathroom and a concert hall. In some cases, say a signal repeated a fraction of a second later, we can perceive a phase difference/distortion of below a millisecond. However, none of this (what I've just said or what was in the video) has any relevance to the OP: there is no phase difference/distortion in ADCs and there won't be any in any DAC (unless a particular DAC design deliberately adds it).

G
Yes, to put aside our differences in this thread: we both know that the ADC stage is about capturing what the analog is capable of. I think the main take-home of the video was that the crossover filters of the speaker and room reflections are what would cause the most phase distortion in the system (as with playback, we aren't directly referencing what the acoustics could be with the live performance). I just think the video gives a good overview of how this is about reproduction and not the ADC, and how it's something that occurs naturally.
 
Last edited:
Feb 23, 2023 at 5:23 AM Post #3,513 of 3,525
It depends on the phase distortion and how much there is of it. With transducers (in a room other than an anechoic chamber) and live sounds there is phase distortion and it is perceivable. If it weren't, you wouldn't be able to hear the difference between the acoustics of, say, a bathroom and a concert hall. In some cases, say a signal repeated a fraction of a second later, we can perceive a phase difference/distortion of below a millisecond. However, none of this (what I've just said or what was in the video) has any relevance to the OP: there is no phase difference/distortion in ADCs and there won't be any in any DAC (unless a particular DAC design deliberately adds it).

G

As someone who has studied acoustics at university, I wouldn't call reverberation phase distortion. Reverberation copies the original signal with some frequency-dependent attenuation and phase shifts, but it is massively more complex than phase distortion (a frequency-dependent phase shift). That's also the reason why back in the day it was so difficult to build good-quality reverberators. Phase distortion changes the shape of the signal without changing the (magnitude) spectrum. Generally it is very difficult to even hear, unless the phase-distorted signal is fed to non-linear components, because the shape of the signal affects non-linear distortion. Acoustics is about the distribution of sound energy in space and time (4-dimensionally).
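A minimal sketch of that last point (my own illustration, not from the post, with arbitrary values): two signals that differ only in the relative phase of their components measure identically through a linear path, but a crude non-linear stage (a hard clipper here) turns the different waveform shapes into different harmonic content.

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs                       # exactly 1 s, so FFT bins are 1 Hz apart

# Same two components; only the relative phase of the 200 Hz part differs.
a = 0.5 * np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)
b = 0.5 * np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 200 * t + np.pi / 2)

def level_db(x, f0):
    """Level of the component at f0 Hz (bin index equals f0 because the signal is exactly 1 s)."""
    X = np.abs(np.fft.rfft(x)) / len(x)
    return 20 * np.log10(2 * X[int(round(f0))] + 1e-12)

hard_clip = lambda x, lim=0.6: np.clip(x, -lim, lim)   # stand-in non-linear component

for f0 in (100, 200, 300, 400):
    print(f"{f0:4d} Hz | linear: {level_db(a, f0):7.1f} / {level_db(b, f0):7.1f} dB"
          f" | clipped: {level_db(hard_clip(a), f0):7.1f} / {level_db(hard_clip(b), f0):7.1f} dB")
```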
 
Last edited:
Feb 23, 2023 at 5:55 AM Post #3,514 of 3,525
For others, can you define the difference between reverberation and phase distortion? I'm wondering if it's a difference in the magnitude of the sound reflections: I think reverberation implies another order of magnitude of reflections and a perceptible delay.
 
Last edited:
Feb 23, 2023 at 6:23 AM Post #3,515 of 3,525
...we understand a 16bit ADC does not really fill the full range of a 16bit file,

What do "we" mean by this? Does this mean you can only fill 14 bits out of 16 if you use 12 dB (2 bits) of safety margin while recording? How many bits do "we" need to fill? Who says "we" need to use 16 bits instead of 24 bit?

This thread is so active I struggle to follow the discussion and my posts are lousy and confused because of that. Sorry.
 
Feb 23, 2023 at 6:43 AM Post #3,516 of 3,525
For others, can you define the difference between reverberation and phase distortion? I'm wondering if it's a difference in the magnitude of the sound reflections: I think reverberation implies another order of magnitude of reflections and a perceptible delay.

Phase distortion is about changing the phase of frequency components compared to each other. It is linear distortion that changes the waveform. Below is an example: the upper waveform is 10 Hz and 20 Hz sinewaves at amplitude 0.5 (-6 dBFS) added together in phase. The lower waveform is the same sinewaves added together, but the 20 Hz sine is delayed (phase distorted). This results in drastically different waveforms, but the spectrum of the signal is still the same and it is almost impossible to hear any difference.

[Figure: phase-distortion.png. The two summed waveforms: 10 Hz + 20 Hz in phase (top) and with the 20 Hz component delayed (bottom).]

Reverberation "copies" the sound with delay over and over decaying away. The sum of these copies result in certain phase, so in that sense it is "phase distortion", but the whole reverberation is MUCH more complex. For example the sound can be so short the copies do not add together at all, but are separated in time in the beginning of the reverberation (direct sound + early reflections).
 
Last edited:
Feb 23, 2023 at 7:10 AM Post #3,517 of 3,525
@castleofargh help us please!!!
I can't. Once Zeno is brought up, it's like when the Chewbacca defense is used. It makes no sense, so the jury must acquit.
That circus-level parody of dialectic is kind of impressive.
 
Feb 23, 2023 at 7:20 AM Post #3,518 of 3,525
Yes, to put aside our differences in this thread:
That would be good!
we both know that the ADC stage is about capturing what the analog is capable of.
Sort of but really it’s the other way around. What “the analog is capable of” is the limiting factor of the ADC, there’s no question the ADC can capture it.
I think the main take-home of the video was that the crossover filters of the speaker are what would cause the most phase distortion in the system …
Yes, provided you only consider the system itself and not playback anywhere other than an anechoic chamber. Otherwise, the room the speakers are in would cause the most phase distortion.
The sum of these copies results in a certain phase, so in that sense it is "phase distortion", but the whole reverberation is MUCH more complex
Yes, that’s the “phase distortion” I was talking about. And let’s not forget that in a room (other than an anechoic chamber) we’re going to get null points/nodes, standing waves, etc., which are obviously a “phase distortion”.

I agree that reverb is far more complex than only phase differences/distortion.

G
 
Feb 23, 2023 at 7:23 AM Post #3,519 of 3,525
What do "we" mean by this? Does this mean you can only fill 14 bits out of 16 if you use 12 dB (2 bits) of safety margin while recording? How many bits do "we" need to fill? Who says "we" need to use 16 bits instead of 24 bit?

This thread is so active I struggle to follow the discussion and my posts are lousy and confused because of that. Sorry.
Since you're asking me directly, I will try to explain (and how there are some similarities between camera ADCs and audio ADCs, even if the terms are different). Mind you, this was in relation to camera systems and not these new 32-bit-processing audio recorders built from a 20-bit ADC.

So cameras record in a RAW format, which is a direct sensor dump, a JPEG preview, and metadata of the exposure settings (and it's optimized for the best use of file space at maximum quality; now there are also some "compressed" RAW formats that save space with reduced resolution). 16 bits is the maximum because that's the max for the ADC. Each brand of camera has a different sensor with a different value for noise and maximum saturation (one might record a channel with the "black point" noise floor closer to 0, while another might be a little higher... and they all have different saturation points, i.e. a "white point" going to 65,536).

When I first started with digital photography, the best sensors could reach 12 stops of DR (so 12-bit RAW was the most optimal). Now, many cameras have sensors capable of 16 stops of light and can reach *about* 16 bits in the optimal situation and settings. Dynamic range in sound is measured in dB, but in photography we think in stops of light (which is logarithmic with exposure settings, just like bit depth). So the best theoretical setting reaches 16-bit dynamic range (or 65,536 shades of tone for each color channel).

That's the theory; like recording audio, it's quite different in your actual situation. To get that full dynamic range, you have to be in a bright enough environment to shoot at ISO 100 (easily daylight). You might also still be in an environment where areas of the scene are blown out even with that range (a good example would be taking a photo indoors with a window: the window would be blown out because you're exposing for the indoors). We also have to consider exposure times: sometimes reaching a fuller exposure at a lower ISO means you can't hand-hold and have to be on a tripod. But in most situations, you're increasing ISO to be able to expose at faster shutter speeds in lower light (digital cameras have better sensitivity than film and are much better in low light). Increasing ISO also progressively reduces dynamic range.
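To put the two units side by side, here is a rough arithmetic sketch (mine, with made-up sensor numbers just for illustration): dynamic range of n-bit linear PCM in dB versus a sensor's dynamic range in stops.

```python
import math

def audio_dr_db(bits):
    """Theoretical dynamic range of n-bit linear PCM (full-scale sine vs. quantization noise)."""
    return 6.02 * bits + 1.76

def sensor_stops(full_well_e, read_noise_e):
    """Engineering dynamic range of a sensor in stops: log2(saturation / noise floor)."""
    return math.log2(full_well_e / read_noise_e)

print("16-bit audio  :", round(audio_dr_db(16), 1), "dB")
print("24-bit audio  :", round(audio_dr_db(24), 1), "dB")
# Hypothetical sensor: 65,000 e- full well, 4 e- read noise -> about 14 stops.
print("example sensor:", round(sensor_stops(65_000, 4), 1), "stops")
```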

With digital imaging, we can extend the limits of one exposure in editing. So with my experience in 3D animation, we merge multiple exposures into 32-bit (or 4.29 billion shades of tone that can simulate all light levels for modeling light onto rendered models). Photographers will also merge multiple exposures if they're photographing a scene whose dynamic range is greater than their camera can capture in one exposure (and can then adjust the curves on a photo that doesn't have any blown-out clipping). There is also some fancy processing for video cameras, but cinematographers are experienced enough to expose below the 16-bit RAW that many cinema cameras are now capable of (film wasn't as good, and they're used to the workflow of lens filters and aperture from previous workflows).
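For what exposure merging actually does, here is a toy sketch (mine, not any particular software's algorithm): it scales bracketed linear exposures back to a common radiance scale and averages them with weights that ignore clipped or near-black pixels.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge bracketed LINEAR exposures (0..1 floats) into one radiance map."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # trust mid-tones; downweight clipped/near-black
        num += w * (img / t)                # scale each shot back to radiance
        den += w
    return num / np.maximum(den, 1e-6)

# Toy scene whose radiance exceeds what any single exposure can hold.
rng = np.random.default_rng(0)
scene = rng.uniform(0.05, 8.0, size=(4, 4))
times = [1 / 200, 1 / 20, 1 / 2]
shots = [np.clip(scene * t, 0.0, 1.0) for t in times]    # each shot clips somewhere
print(np.round(merge_exposures(shots, times), 2))
print(np.round(scene, 2))                                # recovered values ~= true radiance
```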
 
Last edited:
Feb 23, 2023 at 8:18 AM Post #3,520 of 3,525
With digital imaging, we can extend the limits of one exposure in editing.
We never do that in audio; the result would be blown speakers or eardrums. We always do the opposite: restrict the limits.
So with my experience in 3D animation, we merge multiple exposures into 32-bit (or 4.29 billion shades of tone that can simulate all light levels for modeling light onto rendered models).
I’m not sure how that relates to audio. The maximum dynamic range we’re ever after when reproducing audio is around 60 dB, which is a linear factor of 1,000 times. 4.29 billion of anything has no equivalent in audio.
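(As a quick check on the numbers: 60 dB is an amplitude ratio of 10^(60/20) = 1,000, while even 16-bit audio's 65,536 levels correspond to roughly 20 x log10(65,536) ≈ 96 dB.)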

G
 
Feb 23, 2023 at 8:23 AM Post #3,521 of 3,525
We never do that in audio; the result would be blown speakers or eardrums. We always do the opposite: restrict the limits.

I’m not sure how that relates to audio. The maximum dynamic range we’re ever after when reproducing audio is around 60 dB, which is a linear factor of 1,000 times. 4.29 billion of anything has no equivalent in audio.

G
I'm sorry that I tried to fully cover the workflows in which we generate files above 16 bits :slight_frown: And note that I'm talking about editing: the best delivery right now is 10-bit displays (although, given the way our eyes adapt through accommodation, a practical upper limit could be 20 stops).
 
Last edited:
Feb 23, 2023 at 12:11 PM Post #3,522 of 3,525
Yes, that’s the “phase distortion” I was talking about. And let’s not forget that in a room (other than an anechoic chamber) we’re going to get null points/nodes, standing waves, etc., which are obviously a “phase distortion”.

I agree that reverb is far more complex than only phase differences/distortion.

G
Okay, that is fair. I just want to make sure we apply the properties of phase distortion only to the parts of reverberation that can be interpreted (simplified) as phase distortion.
 
Feb 23, 2023 at 12:32 PM Post #3,523 of 3,525
...and of course, getting back to the original context before the extensive footnoting, phase distortion isn't an issue in conversions to and from digital signals.

How many grains of salt did it take to make the ocean salty? Scale matters.
 
Feb 23, 2023 at 12:37 PM Post #3,524 of 3,525
Since you're asking me directly, I will try to explain (and how there are some similarities between camera ADCs and audio ADCs, even if the terms are different). Mind you, this was in relation to camera systems and not these new 32-bit-processing audio recorders built from a 20-bit ADC.

So cameras record in a RAW format, which is a direct sensor dump, a JPEG preview, and metadata of the exposure settings (and it's optimized for the best use of file space at maximum quality; now there are also some "compressed" RAW formats that save space with reduced resolution). 16 bits is the maximum because that's the max for the ADC. Each brand of camera has a different sensor with a different value for noise and maximum saturation (one might record a channel with the "black point" noise floor closer to 0, while another might be a little higher... and they all have different saturation points, i.e. a "white point" going to 65,536).

When I first started with digital photography, the best sensors could reach 12 stops of DR (so 12-bit RAW was the most optimal). Now, many cameras have sensors capable of 16 stops of light and can reach *about* 16 bits in the optimal situation and settings. Dynamic range in sound is measured in dB, but in photography we think in stops of light (which is logarithmic with exposure settings, just like bit depth). So the best theoretical setting reaches 16-bit dynamic range (or 65,536 shades of tone for each color channel).

That's the theory; like recording audio, it's quite different in your actual situation. To get that full dynamic range, you have to be in a bright enough environment to shoot at ISO 100 (easily daylight). You might also still be in an environment where areas of the scene are blown out even with that range (a good example would be taking a photo indoors with a window: the window would be blown out because you're exposing for the indoors). We also have to consider exposure times: sometimes reaching a fuller exposure at a lower ISO means you can't hand-hold and have to be on a tripod. But in most situations, you're increasing ISO to be able to expose at faster shutter speeds in lower light (digital cameras have better sensitivity than film and are much better in low light). Increasing ISO also progressively reduces dynamic range.

With digital imaging, we can extend the limits of one exposure in editing. So with my experience in 3D animation, we merge multiple exposures into 32-bit (or 4.29 billion shades of tone that can simulate all light levels for modeling light onto rendered models). Photographers will also merge multiple exposures if they're photographing a scene whose dynamic range is greater than their camera can capture in one exposure (and can then adjust the curves on a photo that doesn't have any blown-out clipping). There is also some fancy processing for video cameras, but cinematographers are experienced enough to expose below the 16-bit RAW that many cinema cameras are now capable of (film wasn't as good, and they're used to the workflow of lens filters and aperture from previous workflows).
We need to be careful when comparing digital cameras to digital recording of audio. There are perhaps some similarities and mathematical/physical principles that concern both, but also aspects that make them very different. I am not much into photography myself, but I know something about these things, so what you wrote wasn't all new to me. Not sure, though, why we need to talk about digital cameras in this thread...
 
Feb 23, 2023 at 1:01 PM Post #3,525 of 3,525
We need to be careful when comparing digital cameras to digital recording of audio. There are perhaps some similarities and mathematical/physical principles that concern both, but also aspects that make them very different. I am not much into photography myself, but I know something about these things, so what you wrote wasn't all new to me. Not sure, though, why we need to talk about digital cameras in this thread...
Yes, just my synopsis prompted by the question of why a 16-bit ADC and not a 24-bit file. Carry on.
 
Last edited:
