Head-Fi.org › Forums › Equipment Forums › Sound Science › 24bit vs 16bit, the myth exploded!

24bit vs 16bit, the myth exploded! - Page 66

post #976 of 1921

Nice write-up! So in layman's terms, there is no difference in quality between 16-bit and 24-bit? Just like all this pixel density beyond "retina" is just a marketing gimmick?

post #977 of 1921

Let's say that it is a strong working hypothesis with no widely accepted peer-reviewed counterexamples - for listening to music at reasonable levels.

 

but as a negative proposition - "no one can hear a difference" - it is impossible to "prove"

 

you can certainly turn up 16-bit audio until the noise, and some dithers, become audible - but then 0 dBFS peaks would reach an SPL that would drive you out of the room
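
To put rough numbers on that (a back-of-envelope sketch; the 110 dB SPL playback level is an assumed figure, not from this thread):

```python
# SNR of an ideal N-bit quantizer (full-scale sine): 6.02*N + 1.76 dB
bits = 16
snr_db = 6.02 * bits + 1.76          # about 98 dB for 16-bit

# Assume playback gain is set so 0 dBFS peaks hit 110 dB SPL (already painful)
peak_spl = 110.0
noise_spl = peak_spl - snr_db        # SPL of the 16-bit noise floor

print(f"16-bit SNR: {snr_db:.1f} dB")          # 98.1 dB
print(f"noise floor: {noise_spl:.1f} dB SPL")  # ~12 dB SPL
```

Even with peaks near the threshold of pain, the 16-bit noise floor sits around 12 dB SPL, well under the roughly 30 dB SPL ambient noise of a quiet room - which is why the noise only becomes audible at levels that would drive you out of the room.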


Edited by jcx - 12/17/12 at 9:01pm
post #978 of 1921
Quote:
Originally Posted by rtaylor76 View Post

The 1/2" Ampex tape and source never changed. ADCs were plugged and unplugged with the same cables accordingly. A-B comparisons were done from the computer after encoding from the digital stream. Then after the tests were done, the source was plugged through the same input to the console straight. There might have been some coloration in the DAC, but we used the best converter for that: the RME ADI-2.

There are things going on with the analog 1/2", but that WAS the source being converted. So it should have sounded exactly the same, but it did not.

And I say again, JMO. Do some tests yourself and see if you can tell the difference. If not, then be happy with 16 bit. After that session, I was forever transformed.

But you didn't do the test blind, did you? Try comparing properly converted files blind and see if the difference is still noticeable.

I'm not saying it won't be, it is entirely possible that the conversion was not done properly.
post #979 of 1921
Quote:
Originally Posted by jupitreas View Post


But you didn't do the test blind, did you? Try comparing properly converted files blind and see if the difference is still noticeable.
I'm not saying it won't be, it is entirely possible that the conversion was not done properly.

 

The files were converted properly. And yes, it was not a blind test. However, when I first heard the 1/2" without conversion, just straight in, I thought it was another converter. And I said, "What converter is that? Holy cow!" It turns out the engineer, without me knowing, had plugged in the tape machine direct.

 

The comparison was to test converters, not analog vs. digital. We did the digital converter test blind, and the RME ADI-2 was the cheapest of the bunch. I think they were compared against a Lynx and another... maybe a Benchmark or a Mytek or a Lavry. I do know that an Apogee was not tested; however, in another test I did hear Lavry vs. Mytek vs. Apogee, and the Lavry and Apogee were strikingly similar and the Mytek was pretty great.

 

I would love to go back and do a blind test to see if I could still tell. That would be rather revealing. 

post #980 of 1921

The problem I see with the assessment is that the analog tape was transferred to digital. A lousy job in the transfer and playback - a poor choice of level matching, dynamic range loading, compression, and so on - could have been responsible for rtaylor76's impressions.

Quote:
Originally Posted by rtaylor76 View Post


The differences were not something like EQ or roll-off or spectrum filtering. It was more 3-dimensional, and instruments had such amazing separation and detail.

 

Equalization and roll-off have an impact on instrument separation and localization.


Edited by ultrabike - 12/18/12 at 11:40am
post #981 of 1921
Quote:
Originally Posted by rtaylor76 View Post

It likely was the ATR-102; however, none of that stuff matters. The source for conversion was the output of the tape from the tape machine. So any EQ, emphasis, or other mojo of that machine/tape would have been encoded digitally. And yes, it was calibrated.

 

Actually I'm a bit confused. Was the tape player "calibrated" and an EQ applied? Did you hear the output of the player directly out of the tape player, or out of the EQ? Was the digital recording compensated using the same EQ?

 

Quote:
Originally Posted by bigshot View Post

I worked with 24 track masters and fullcoat film back around the transition to digital, and the difference between digital and analogue is in the peaks. We occasionally burned a peak into the tape and the sound didn't suffer. When we took the tape to digital, we had to drop the level or it would clip very badly. Comparing digital to analogue would require careful level matching. I would bet that the tape master was a little hotter.
If you were patched direct, the analogue was definitely hotter.

 

Quote:
Originally Posted by rtaylor76 View Post

Yeah, you can push tape above 0 dB on the meter and get nice soft compression. Very popular and preferred for the "tape" sound and high saturation, especially with pop and rock music.

 

I don't know what the true levels were. It was too long ago. And yes, I understand even a slight change in volume can make a difference.

 

So it seems things might have been a little hotter, levels might not have been properly accounted for, the analog might have gone into saturation (which seems to be preferred), and quite possibly the digital clipped? No wonder things sounded different.


Edited by ultrabike - 12/18/12 at 11:45am
post #982 of 1921

You don't need to EQ a four track tape to be flat. It already is.

 

This one rings true to me. I'm positive the engineer did everything right. My guess is that he dropped the level a hair when he dubbed to digital to avoid clipping, but when he played it back, he used the same patch for both sources, so the tape was a bit louder.


Edited by bigshot - 12/18/12 at 11:56am
post #983 of 1921

Thanks dude. I'm not an audio engineer, so I'm a little lost here. I'm still not clear on what is meant when someone says "it was calibrated" in this context.


Edited by ultrabike - 12/18/12 at 12:03pm
post #984 of 1921

A couple of different things... on a tape deck it's the angle at which the tape goes past the heads (that's called "azimuth" or something like that). The bias can also be calibrated to the particular tape stock being used. The last thing you calibrate is the level. You run a test tone and use that to line the deck up to the level of the master. I believe you set the test tone lower on digital than you do on analogue to protect from clipping.

 

Response is calibrated on the other end of the chain... just before the speakers.

post #985 of 1921

So the difference in perception here is more than likely due to volume level?

post #986 of 1921

That would be my guess. Hard to say without actually being there, but if I were an engineer patching two masters and switching between them, I'd probably patch them direct into the same pot. Even if he lined up to the reference tone on the head of the tape and patched them through two separate channels, it still might have been different because the digital master would have been transferred lower to avoid clipping. (85% of peak?)

 

A louder volume would have made it sound punchier and more present. The bass would have been a tiny bit stronger too. The difference probably wouldn't be huge, but it would be noticeable.
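
As a quick sanity check on that guess (the 85% figure is hypothetical, not a measurement from the session), the level drop works out to:

```python
import math

ratio = 0.85                      # assumed digital transfer level vs. tape peak
drop_db = 20 * math.log10(ratio)  # level difference in dB
print(f"{drop_db:.2f} dB")        # about -1.41 dB
```

A level offset of around 1.4 dB is small but well above the sub-0.2 dB matching usually recommended for fair A/B comparisons, so it could easily account for one source sounding punchier.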

 

This is nice, because I can tell rtaylor76 is actually describing something that he actually heard. So many times I hear people describing tests like this that have the aroma of figments of the imagination concocted to make their point. (If you know what I mean.)


Edited by bigshot - 12/18/12 at 12:18pm
post #987 of 1921

Thanks for the vote of confidence.

 

I doubt it was a question of level. The ADCs used have no input level control. Basic plug and play. Many ADCs, especially high-end ones, have soft or even hard limiters at 0 dB, so it doesn't go into digital distortion territory.

 

And there was no EQ. Nothing was plugged up between the tape machine and the ADC. Only XLR cables. Tape machine -> ADC -> console stereo input. For the 1/2", the XLR cables going into the converter were then plugged straight into the cables going to the console, bypassing the converter. What I meant by EQ was tonal balance.

 

One thing I will bring up that you can feel free to discuss is the theory I have about resolution, detail, and volume level. We all know that digital is "stair steps," and that condition gets worse as the volume level gets lower. So technically a 24-bit file could have the same effective resolution as a 16-bit one if it is recorded too low, not taking advantage of the extra dynamic range. In this case, imagine the 24-bit file being just a tad quieter on the recording, but then level matched during playback. The 24-bit file in this case could still take advantage of that extra dynamic range in the peaks, but really it has almost the same resolution as the 16-bit file. Make sense?

 

Now on this same concept: detail is in the quieter parts. This is the area that gets more "stair steppy" and thus distorted, and then masked by dither. I have always thought that digital, to me, loses it in the finer details. Not because of sampling rate, Nyquist theory, filtering, or the 20kHz cutoff, but because the details - the quieter parts - have less resolution. Now the same can be argued against any analog medium, say tape: it also has noise and loses resolution in quieter passages. And I would say that is true, but it does not introduce distortion and masking the way digital does.

 

Now back to 24 vs. 16 bit per the OP - my question is: does it matter? In stereo files that are congested, maybe not to the extent that we think. Maybe we can't always tell and spot the difference. I do know that almost every recording done today is tracked at 24 bit, because they are tracking everything and need as much detail as possible to mix down to two stereo tracks. Does the final track need to be 24 bits? Do we have the system to tell? Do we have the ears to hear it? All questions we must ask ourselves. I know I have several recordings I love that would not benefit at all from higher resolution. It takes true talent to get and demonstrate something more out of a particular recording. And it is not just one thing either - the medium, format, tracking engineer, producer, mixing, mastering, talent - it is all of it. Just as there are many more recordings I would love to hear in a higher-resolution format. To me it is not just a comfort thing.

 

I know I am new here, but as someone involved in audio for a while, I find it hard to believe that so many break things down to just audio spectrum or high frequencies. It is much more than that. Dimension, space, detail, impact, and bandwidth all come well before any such high-frequency information. A good recording should sound 3D and have depth for days. And not just wide - deep, like deep space.

post #988 of 1921
Quote:
Originally Posted by rtaylor76 View Post

Thanks for the vote of confidence.

 

I doubt it was a question of level. The ADCs used have no input level control. Basic plug and play. Many ADCs, especially high-end ones, have soft or even hard limiters at 0 dB, so it doesn't go into digital distortion territory.

 

And there was no EQ. Nothing was plugged up between the tape machine and the ADC. Only XLR cables. Tape machine -> ADC -> console stereo input. For the 1/2", the XLR cables going into the converter were then plugged straight into the cables going to the console, bypassing the converter. What I meant by EQ was tonal balance.

 

One thing I will bring up that you can feel free to discuss is the theory I have about resolution, detail, and volume level. We all know that digital is "stair steps," and that condition gets worse as the volume level gets lower. So technically a 24-bit file could have the same effective resolution as a 16-bit one if it is recorded too low, not taking advantage of the extra dynamic range. In this case, imagine the 24-bit file being just a tad quieter on the recording, but then level matched during playback. The 24-bit file in this case could still take advantage of that extra dynamic range in the peaks, but really it has almost the same resolution as the 16-bit file. Make sense?

 

A properly implemented DAC does not produce stair steps. Digital is better conceptualized as a train of impulses than as "stair steps." Digital samples, which ideally should be taken at more than twice the signal bandwidth, are properly interpolated through a low-pass filter. The resulting signal from the DAC should be very close to the original. Quantization error due to the resolution of the digital signal is a different matter (i.e., not sampling rate). There is quite a bit of headroom in 16 bits, let alone 24 bits. There are, however, some dependencies on the dynamic range of the ADC when digitizing a waveform.
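
A small experiment along these lines (a sketch using an idealized mid-tread quantizer; the 997 Hz tone and 48 kHz rate are illustrative choices, not from the thread): quantizing the same sine to 16 and 24 bits and measuring the RMS error shows how far below the signal the quantization noise actually sits.

```python
import math

def quantize(x, bits):
    """Round a sample in [-1.0, 1.0] to the nearest step of an ideal N-bit quantizer."""
    scale = 2 ** (bits - 1)
    return round(x * scale) / scale

def rms_error_dbfs(bits, n=48000):
    """RMS quantization error of a 997 Hz sine at 48 kHz, in dB re full scale."""
    err = 0.0
    for i in range(n):
        x = 0.9 * math.sin(2 * math.pi * 997 * i / 48000)
        e = quantize(x, bits) - x
        err += e * e
    return 20 * math.log10(math.sqrt(err / n))

print(f"16-bit error: {rms_error_dbfs(16):.1f} dBFS")  # roughly -101 dBFS
print(f"24-bit error: {rms_error_dbfs(24):.1f} dBFS")  # roughly -149 dBFS
```

The "steps" never reach the output; after reconstruction filtering they amount to this extremely low noise floor, roughly 48 dB lower again at 24-bit.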

 

Now on this same concept: detail is in the quieter parts. This is the area that gets more "stair steppy" and thus distorted, and then masked by dither. I have always thought that digital, to me, loses it in the finer details. Not because of sampling rate, Nyquist theory, filtering, or the 20kHz cutoff, but because the details - the quieter parts - have less resolution. Now the same can be argued against any analog medium, say tape: it also has noise and loses resolution in quieter passages. And I would say that is true, but it does not introduce distortion and masking the way digital does.

 

Tape, and analog in general, also have bandwidth limitations imposed by the pickup head and so forth. If the original sound signal has no energy above 20kHz, then nothing is lost given a sufficient sampling rate. The same could be said about a 40kHz signal sampled at, say, 96kHz or above. Note that sampling rate, Nyquist frequency, and so on are not the same thing as quantization noise. One could assign 48 bits of resolution through an ADC sampling at 44.1kHz, or 16 bits of resolution through an ADC sampling at 192kHz. Background and thermal noise can be dominant over quantization noise.

 

Now back to 24 vs. 16 bit per the OP - my question is: does it matter? In stereo files that are congested, maybe not to the extent that we think. Maybe we can't always tell and spot the difference. I do know that almost every recording done today is tracked at 24 bit, because they are tracking everything and need as much detail as possible to mix down to two stereo tracks. Does the final track need to be 24 bits? Do we have the system to tell? Do we have the ears to hear it? All questions we must ask ourselves. I know I have several recordings I love that would not benefit at all from higher resolution. It takes true talent to get and demonstrate something more out of a particular recording. And it is not just one thing either - the medium, format, tracking engineer, producer, mixing, mastering, talent - it is all of it. Just as there are many more recordings I would love to hear in a higher-resolution format. To me it is not just a comfort thing.

 

I think it's more a question of how the signal was processed. One can compress the life out of a recording and store it at 192 bits per sample, then compare the results with a properly recorded, dynamic-range-preserving recording stored at 16 bits per sample.

 

I know I am new here, but as someone involved in audio for a while, I find it hard to believe that so many break things down to just audio spectrum or high frequencies. It is much more than that. Dimension, space, detail, impact, and bandwidth all come well before any such high-frequency information. A good recording should sound 3D and have depth for days. And not just wide - deep, like deep space.

 

When playing back a recording, dimension, space, detail, impact, and bandwidth are influenced strongly by the frequency response of the playback components. I feel, however, that these qualities (dimension, space...) are more often than not a function of how the original recording was produced.


Edited by ultrabike - 12/18/12 at 3:08pm
post #989 of 1921
Quote:
Originally Posted by ultrabike View Post
A properly implemented DAC does not produce stair steps. Digital is better conceptualized as a train of impulses than as "stair steps." Digital samples, which ideally should be taken at more than twice the signal bandwidth, are properly interpolated through a low-pass filter. The resulting signal from the DAC should be very close to the original. Quantization error due to the resolution of the digital signal is a different matter (i.e., not sampling rate). There is quite a bit of headroom in 16 bits, let alone 24 bits. There are, however, some dependencies on the dynamic range of the ADC when digitizing a waveform.

You are talking Nyquist here, understood. However, there are still only so many samples per complicated part in quieter passages.

 

Tape, and analog in general, also have bandwidth limitations imposed by the pickup head and so forth. If the original sound signal has no energy above 20kHz, then nothing is lost given a sufficient sampling rate. The same could be said about a 40kHz signal sampled at, say, 96kHz or above. Note that sampling rate, Nyquist frequency, and so on are not the same thing as quantization noise. One could assign 48 bits of resolution through an ADC sampling at 44.1kHz, or 16 bits of resolution through an ADC sampling at 192kHz. Background and thermal noise can be dominant over quantization noise.

 

The bandwidth limitations of the pickup, record head, pre-emphasis EQ, bias, etc. are all imposed by the very nature of the medium, not forced through filters just because. I am not worried about anything above 20k. I doubt I can hear anything above 15k. But how accurate is that high end? How fast is it? If RMAA says it can go to 30k, then to me it is more accurate at lower frequencies. Not always the case, but most likely. And yes, system noise from background or thermal sources can be higher than quantization or dither noise. Even our own listening environments have high noise floors. However, that noise is encoded in the file and always there to mask the distortions.

 

I think it's more a question of how the signal was processed. One can compress the life out of a recording and store it at 192 bits per sample, then compare the results with a properly recorded, dynamic-range-preserving recording stored at 16 bits per sample.

 

Agreed. I have no issues with this.

 

When playing back a recording, dimension, space, detail, impact, and bandwidth are influenced strongly by the frequency response of the playback components. I feel, however, that these qualities are more a function of how the original recording was produced.

 

Here is where we can differ, and that is fine. I can see that frequency response can affect dimension, space, detail, impact, and bandwidth. However, two different speakers with the same measured frequency response might sound worlds different in these areas while both reproducing the full frequency spectrum. One might sound flat, dead, and honky, and the other deep and infinite. Or one could sound in-your-face and full, but have no depth. Certain gear can do this as well, not just speakers.

 

Now I do agree that how the original recording was produced also influences these factors - very heavily. That is why we should always have good tracks to use when evaluating our gear. Otherwise, what is the point?

post #990 of 1921
Quote:

Originally Posted by rtaylor76 View Post

 

One thing I will bring up that you can feel free to discuss is the theory I have about resolution, detail, and volume level. We all know that digital is "stair steps," and that condition gets worse as the volume level gets lower. So technically a 24-bit file could have the same effective resolution as a 16-bit one if it is recorded too low, not taking advantage of the extra dynamic range. In this case, imagine the 24-bit file being just a tad quieter on the recording, but then level matched during playback. The 24-bit file in this case could still take advantage of that extra dynamic range in the peaks, but really it has almost the same resolution as the 16-bit file. Make sense?

Sorry, not to me.

 

Quote:
Now on this same concept: detail is in the quieter parts. This is the area that gets more "stair steppy" and thus distorted, and then masked by dither.

That's not how it works. Dither ensures that there is no quantization distortion. The level of dither, as posted just a page back, is extremely low.

Even if you record at 24 bits you've most probably recorded noise that is magnitudes higher in level than dither.
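
A minimal sketch of what dither does (assuming an ideal 16-bit rounding quantizer and TPDF dither spanning ±1 LSB): a signal sitting well below one quantization step simply vanishes without dither, but survives as the average of a noisy signal with it.

```python
import random, statistics

random.seed(0)
step = 1 / 32768            # one 16-bit LSB (full scale = 1.0)
x = 0.4 * step              # a "signal" smaller than one quantization step

def quant(v):
    return round(v / step) * step

# Undithered: x always rounds to 0, so the signal is simply gone
# (for a varying signal, this correlated error is heard as distortion).
plain = [quant(x) for _ in range(100_000)]

# TPDF dither: add triangular noise spanning +/-1 LSB before rounding.
# The error becomes uncorrelated noise and the average recovers x.
dithered = [quant(x + (random.random() - random.random()) * step)
            for _ in range(100_000)]

print(statistics.mean(plain))            # 0.0 -- signal lost
print(statistics.mean(dithered) / step)  # ~0.4 LSB -- signal preserved under the noise
```

This is why dithered quantization trades distortion for a benign, very low noise floor: information below the least significant bit is not destroyed, it is encoded in the statistics of the noise.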

 

Quote:
I have always thought that digital, to me, loses it in the finer details. Not because of sampling rate, Nyquist theory, filtering, or the 20kHz cutoff, but because the details - the quieter parts - have less resolution.

That's why we have dither.

 

Quote:
Now the same can be argued against any analog medium, say tape: it also has noise and loses resolution in quieter passages. And I would say that is true, but it does not introduce distortion and masking the way digital does.

Yeah, most likely a lot more noise, harmonic and intermodulation distortion.

 

 

Quote:

Now back to 24 vs. 16 bit per the OP - my question is: does it matter? In stereo files that are congested, maybe not to the extent that we think. Maybe we can't always tell and spot the difference. I do know that almost every recording done today is tracked at 24 bit, because they are tracking everything and need as much detail as possible to mix down to two stereo tracks. Does the final track need to be 24 bits? Do we have the system to tell? Do we have the ears to hear it? All questions we must ask ourselves. I know I have several recordings I love that would not benefit at all from higher resolution. It takes true talent to get and demonstrate something more out of a particular recording. And it is not just one thing either - the medium, format, tracking engineer, producer, mixing, mastering, talent - it is all of it. Just as there are many more recordings I would love to hear in a higher-resolution format. To me it is not just a comfort thing.

If you take a 24-bit track, convert it to 16 bit, subtract one from the other, and finally do a spectrum analysis, you should see only noise at about -125 to -135 dBFS.
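
That experiment can be sketched in a few lines (assumed details: a TPDF-dithered 16-bit conversion, a 997 Hz test tone, and an 8192-point analysis FFT — the quoted -125 to -135 dBFS is a per-FFT-bin level, while the overall RMS of the difference sits near -96 dBFS):

```python
import math, random

random.seed(1)
step = 1 / 32768                       # one 16-bit LSB

def to_16bit(v):
    """TPDF-dithered rounding of a float sample to the 16-bit grid."""
    d = (random.random() - random.random()) * step
    return round((v + d) / step) * step

# High-resolution source: a half-scale 997 Hz sine at 48 kHz (illustrative)
n = 65536
hires = [0.5 * math.sin(2 * math.pi * 997 * i / 48000) for i in range(n)]
diff = [to_16bit(v) - v for v in hires]     # source minus its 16-bit version

rms = math.sqrt(sum(e * e for e in diff) / n)
rms_db = 20 * math.log10(rms)
print(f"difference RMS: {rms_db:.1f} dBFS")  # about -96 dBFS overall

# A spectrum analyzer spreads that energy across the FFT bins: with an
# 8192-point FFT (4096 bins) each bin drops by ~10*log10(4096) = 36 dB,
# landing in the -125 to -135 dBFS range quoted above.
per_bin_db = rms_db - 10 * math.log10(4096)
print(f"approx. per-bin level: {per_bin_db:.0f} dBFS")
```

The difference contains no trace of the music, only featureless noise some 90-plus dB below full scale.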

 

 

Quote:

I know I am new here, but as someone involved in audio for a while, I find it hard to believe that so many break things down to just audio spectrum or high frequencies. It is much more than that. Dimension, space, detail, impact, and bandwidth all come well before any such high-frequency information. A good recording should sound 3D and have depth for days. And not just wide - deep, like deep space.

Digital audio is just a bunch of samples. If we compare sample by sample and see that the differences are at an extremely low level ... Btw, the things you mention have more to do with the recording, not the format (used for playback, which is what this thread is about).


Edited by xnor - 12/18/12 at 3:23pm