24bit vs 16bit: How big is the difference?
Mar 31, 2008 at 8:14 PM Post #121 of 773
Quote:

Originally Posted by Crowbar
Any meaning derived from this obviously depends on ability and training of the subjects.


Is that anything like training yourself to fly by jumping off taller and taller ladders?

There's a point where our ears just can't hear it, no matter how much we struggle and strain in a vain attempt to educate them.

This all comes down to whether you are building a system for bats to listen to or one that does the job for you. As I said before, the balance of the frequencies between 200Hz and 8kHz is infinitely more important to sound quality than the frequencies that lie outside the range of human hearing.

See ya
Steve
 
Mar 31, 2008 at 8:16 PM Post #122 of 773
Quote:

Originally Posted by Crowbar
The studies cited above prove there is something, whether in the ears or elsewhere in the head, that DOES respond to >20 kHz


But it has nothing to do with listening to music.

When I was a kid, I hated to go to Sears, because they had banks and banks of fluorescent lights that would put out a high-pitched squeal that would give me a headache. I couldn't really hear the squeal, but I could feel the pain it caused.

See ya
Steve
 
Mar 31, 2008 at 8:22 PM Post #123 of 773
Quote:

Originally Posted by grawk
Trained audio engineers are more likely to have hearing damage than the average person, I'd be willing to bet...


I've found that a lot of them aren't very good at judging mixes either. I remember one mix I was supervising where the director heard a bump where the mixer had messed up with one of his pots. He asked the engineer to fix it, and he ran the section over and over, leaning over the board and peering at his VU meters. "Nope. No bump there." The director growled at me, and I gently suggested that the engineer stop looking at the meters and just sit back and listen. "Oh! I hear it now!"

If it wasn't on his meter, he couldn't hear it.

See ya
Steve
 
Mar 31, 2008 at 8:59 PM Post #124 of 773
I have played around with the high-res and 16/44.1 files a lot, more than I can really afford to. From what I can see, the only real difference, and the only thing that made the two files audibly different, was the first 1/10th of a second of the opening guitar chord. For the remainder of the tracks the waveforms are identical unless you zoom to the 1/10,000th-of-a-second scale; at the scale of seconds, 10ths, 100ths and 1,000ths of a second they are identical. My feeling is that the difference, and it really is tiny, is an artifact rather than an indication of better resolution.

Also, the zooming is a bit of a red herring: when you zoom to the maximum, both the 24/96 and 16/44.1 waveforms become completely flat lines and there is no difference between them at all, well, give or take about 0.001dB.


When, however, you use Audacity to look at the first 0.15 of a second and view it as waveform (dB), the opening chord is drastically different on the two samples for the first 1/10th of a second. From that point of view these recordings simply do not look the same: for the first 1/10th of a second the 24/96 sample shows a lot of squaring off at the bottom of the wave, as if it were truncated, while on the 16/44.1 the wave descends normally (?).




This is interesting. After that point, though, they are identical.
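
If anyone wants to repeat the comparison outside Audacity, something along the lines of the sketch below should reproduce the view I'm describing. This is only a rough sketch, the filenames are placeholders for whatever your two rips are called, and it assumes both files start at the same point.

Code:

import numpy as np
import soundfile as sf
import matplotlib.pyplot as plt

# Placeholder filenames - substitute your own 24/96 and 16/44.1 rips.
hi, fs_hi = sf.read("track_2496.wav")
lo, fs_lo = sf.read("track_1644.wav")

# Keep only the left channel if the files are stereo.
if hi.ndim > 1:
    hi = hi[:, 0]
if lo.ndim > 1:
    lo = lo[:, 0]

window = 0.15  # seconds of the opening chord to inspect
t_hi = np.arange(int(window * fs_hi)) / fs_hi
t_lo = np.arange(int(window * fs_lo)) / fs_lo

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(t_hi, hi[:len(t_hi)])
ax1.set_title("24/96: first 0.15 s")
ax2.plot(t_lo, lo[:len(t_lo)])
ax2.set_title("16/44.1: first 0.15 s")
ax2.set_xlabel("time (s)")
plt.tight_layout()
plt.show()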
 
Apr 1, 2008 at 12:07 AM Post #125 of 773
Quote:

Originally Posted by bigshot
But it has nothing to do with listening to music.

When I was a kid, I hated to go to Sears, because they had banks and banks of fluorescent lights that would put out a high-pitched squeal that would give me a headache. I couldn't really hear the squeal, but I could feel the pain it caused.

See ya
Steve



It contributes to the experience that the sound creates. Whether it is pleasant or not is a purely subjective evaluation and so has no relevance to the discussion.
 
Apr 1, 2008 at 5:30 PM Post #126 of 773
Quote:

Originally Posted by Crowbar
It contributes to the experience that the sound creates. Whether it is pleasant or not is a purely subjective evaluation and so has no relevance to the discussion.


Listening to music is a subjective experience. Figuring out how to make your stereo sound good is an objective one. You shouldn't confuse those two. Super high frequency sound in recorded music is almost always noise.

See ya
Steve
 
Apr 1, 2008 at 5:36 PM Post #127 of 773
Quote:

Originally Posted by bigshot
Listening to music is a subjective experience. Figuring out how to make your stereo sound good is an objective one. You shouldn't confuse those two. Super high frequency sound in recorded music is almost always noise.

See ya
Steve



May I suggest that figuring out how to make your stereo sound good is also a subjective one.
 
Apr 1, 2008 at 6:59 PM Post #128 of 773
Putting together a stereo isn't creative. It's about achieving optimal fidelity. You do that by understanding how sound works and how your equipment works and applying what you know to make the system perform at its best. That isn't a subjective process.

If you try to put together a stereo system randomly based purely on subjective reactions, you'll be at the mercy of how you feel at any particular time and what you ate for dinner. You'll also be prime prey for commissioned salespeople. (They don't want you to think. They just want you to put your feet up and take their word for it.)

See ya
Steve
 
Apr 1, 2008 at 7:31 PM Post #129 of 773
It's a shame this discussion is distracted by whether sound beyond 20kHz is important. I happen to believe that it is not.

I found a way to extract the raw waveform last night and have some nice data comparing the power frequency spectra (PFS) of the 16/44 and 24/96 audio.

What I found is similar to what I showed in the first post, only this time with greater detail. Over the first few seconds of the audio track most of the PFS energy is in the 64 to 1024Hz range. The magnitude and distribution of the 16/44 acoustic energy at several key harmonics is distorted relative to the 24/96 audio.

Also, what was surprising is that there are several harmonics with significant energy that are artificial. These effects are probably what is referred to as quantization noise.

Bottom line: there is significant evidence suggesting that the enhanced resolution in the audible band found in the 24/96 audio is indeed critical to fidelity. My ears are not misleading me.
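
For anyone who wants to try a similar comparison, a Welch power-spectrum estimate along the lines of the sketch below should give a comparable picture. This isn't the exact script I used; the filenames are placeholders, and you'd point them at your own two versions of the track.

Code:

import numpy as np
import soundfile as sf
from scipy.signal import welch
import matplotlib.pyplot as plt

def psd_of_opening(path, seconds=5):
    """Power spectral density (in dB) of the first few seconds of a file."""
    x, fs = sf.read(path)
    if x.ndim > 1:
        x = x[:, 0]                      # left channel only
    x = x[:int(seconds * fs)]
    f, p = welch(x, fs=fs, nperseg=8192)
    return f, 10 * np.log10(p + 1e-20)   # small floor avoids log(0)

# Placeholder filenames - point these at your own two versions.
f96, p96 = psd_of_opening("track_2496.wav")
f44, p44 = psd_of_opening("track_1644.wav")

plt.semilogx(f96, p96, label="24/96")
plt.semilogx(f44, p44, label="16/44.1")
plt.xlabel("frequency (Hz)")
plt.ylabel("power (dB)")
plt.legend()
plt.show()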
 
Apr 1, 2008 at 7:35 PM Post #130 of 773
Quote:

Originally Posted by bigshot
Putting together a stereo isn't creative. It's about achieving optimal fidelity. You do that by understanding how sound works and how your equipment works and applying what you know to make the system perform at its best. That isn't a subjective process.

If you try to put together a stereo system randomly based purely on subjective reactions, you'll be at the mercy of how you feel at any particular time and what you ate for dinner. You'll also be prime prey for commissioned salespeople. (They don't want you to think. They just want you to put your feet up and take their word for it.)

See ya
Steve



But that's not the audiophile way, as witnessed in the cables forum.
Of course, I've never considered myself an audiophile and these forums have reinforced that.

I must say that this is an incredibly interesting thread, as well as an intimidating one. I'm planning to start digitizing some of my LPs in the not-too-distant future. While I knew it would be some work, this thread is making me realize that it's a lot more work than I knew, and it may cost more money than I thought. It's not going to stop me from doing it; it just raises the intimidation factor a bit.
 
Apr 1, 2008 at 7:41 PM Post #131 of 773
Interesting arguments. Interesting because some of them are based on a perception of fact rather than the reality. For example, when looking at a waveform on a computer screen, what are you actually looking at? You're looking at a graphical representation of the digital data stored in the audio file. What you are not looking at is a representation of what the analogue waveform will look like once it's come out of a DAC. This is obvious if you think about it, how could a piece of software emulate the effects of all the different processes that take place in every DAC on the market? In other words, of course a 44.1kFs/s file is going to look less detailed than a 96kFs/s file, the question is, is it any less accurate once it's converted back to an analogue waveform? The answer is no! The answer has to be no, otherwise the whole theory of digital audio is wrong and digital audio doesn't exist!! You need two sampling points per cycle in order to perfectly recreate a (band-limited) waveform. Having more than two points per cycle is not going to make the recreated waveform any more perfect. That's why the Nyquist theorem states that the sampling frequency needs to be at least twice the highest audio frequency you want to encode.
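
If you want to convince yourself of that numerically, here's a rough sketch. The tone, block length and grid are arbitrary choices, and the ideal reconstruction filter is stood in for by a truncated sinc interpolation: a 10kHz tone sampled at 44.1kHz is reconstructed on a much finer time grid and compared against the original.

Code:

import numpy as np

fs = 44100.0      # CD sample rate
f0 = 10000.0      # test tone, comfortably below fs/2

# One block of samples of the tone, exactly as a 16/44.1 file would store it
# (ignoring quantization, which is a separate issue from sample rate).
n = np.arange(2048)
samples = np.sin(2 * np.pi * f0 * n / fs)

# Evaluate the reconstruction on a grid 10x finer than the sample spacing,
# staying well away from the block edges so the truncated sinc sum is fair.
t = np.arange(200, 1848, 0.1) / fs

# Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc(fs*t - n)
recon = np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])
ideal = np.sin(2 * np.pi * f0 * t)

print("max reconstruction error:", np.max(np.abs(recon - ideal)))
# The small residual (on the order of 1e-3 here) comes from truncating the
# sinc sum to a finite block, not from the 44.1 kHz sample rate itself.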

In response to freqs >22kHz having an effect on the freqs in the hearing range: possibly, but what difference does it make? If anything is affected within the hearing range, those effects would be encoded at 44.1kFs/s just as well as they would at 96 or 192.

I routinely use a system that has 48bit resolution; that's 288dB of dynamic range. So it must sound wicked compared to 24bit when listening to completed mixes? Er, no. It makes no difference whatsoever, nor does comparing my 48bit system with 16bit. It's not unusual to find pop songs with a dynamic range of less than 10dB; for classical it's usually less than 50dB. 96dB (16bit) is more than enough, so why do you need 144dB (24bit)? Maybe you want to hear the tuba player's nose hairs vibrate just before he plays a note and vapourizes your eardrums!
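
Those dynamic range figures aren't pulled out of thin air, by the way; each bit of linear PCM buys you roughly 6.02dB, so (give or take rounding) a quick check looks like this:

Code:

import math

# ~6.02 dB per bit of linear PCM: 20*log10(2^bits)
for bits in (16, 24, 48):
    dr = 20 * math.log10(2 ** bits)
    print(f"{bits}-bit: ~{dr:.0f} dB")

# 16-bit: ~96 dB
# 24-bit: ~144 dB
# 48-bit: ~289 dB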

Why do some of you refuse to believe that intrinsically 24/96 as a consumer format is no better than 16/44.1 and that any perceived difference is an effect of a DAC?

My guess is it's because it's difficult to get past the logical (but incorrect) assumption that more data must mean more detail and therefore better quality.

Crowbar - The idea of recording is not to make the recording sound identical to the live performance. Most pop music cannot be performed acoustically. Even with classical music this statement is incorrect. Go listen to a French horn, tuba or even a flute up close. It sounds nothing like it does in a big concert hall from the audience's point of view, but we can't put the mic too far away or we'll get SNR problems and no clarity. So we have to fake it: we make value judgements about the perception of our target demographic and then we work to their expectations. We also have to fake it because recording equipment is far from perfect (sometimes deliberately so) and we have to compensate for that.
 
Apr 1, 2008 at 7:58 PM Post #132 of 773
Quote:

Originally Posted by bigshot
Putting together a stereo isn't creative. It's about achieving optimal fidelity. You do that by understanding how sound works and how your equipment works and applying what you know to make the system perform at its best. That isn't a subjective process.

If you try to put together a stereo system randomly based purely on subjective reactions, you'll be at the mercy of how you feel at any particular time and what you ate for dinner. You'll also be prime prey for commissioned salespeople. (They don't want you to think. They just want you to put your feet up and take their word for it.)

See ya
Steve



Not exactly randomly though. How a system sounds is largely subjective. Everyone's ears are different, and each of us has our own perception of "optimal fidelity". Optimal to one does not mean optimal to another. But then let's not confuse a "clinical setup" with one that suits each person's "subjective" taste. Both types of setup are equally difficult to achieve, one by using test equipment and one by using ears. Let's also remind ourselves, from experience, that a showroom system sounds very different in our homes. That's why the hi-fi industry has continued to survive over the generations and continued to pump out "better" products.
 
Apr 1, 2008 at 8:20 PM Post #133 of 773
Quote:

Originally Posted by gregorio
Interesting arguments. Interesting because some of them are based on a perception of fact rather than the reality. For example, when looking at a waveform on a computer screen, what are you actually looking at? You're looking at a graphical representation of the digital data stored in the audio file. What you are not looking at is a representation of what the analogue waveform will look like once it's come out of a DAC. This is obvious if you think about it, how could a piece of software emulate the effects of all the different processes that take place in every DAC on the market? In other words, of course a 44.1kFs/s file is going to look less detailed than a 96kFs/s file, the question is, is it any less accurate once it's converted back to an analogue waveform?


So basically the zooming in on the waveforms tells you precisely nothing since the software does not show the effects of the reconstruction filter which makes the output whole again.

Just one question then: when I zoomed into the 24/96 I saw what looks suspiciously like truncation, a flattening of the energy at the bottom of the wave (see my post a page or so back), while the 16/44.1 descends gracefully. What does this mean? Is it an artifact? It seems to be the only thing that is different between the two samples. Could it be that the 24/96 ADC isn't responding correctly to the first guitar crash?
 
Apr 1, 2008 at 9:10 PM Post #134 of 773
Nick - "So basically the zooming in on the waveforms tells you precisely nothing since the software does not show the effects of the reconstruction filter which makes the output whole again."

Correct. The graphical display is really just an approximation. Zoom all the way in so you can see the individual samples. The very fact that you can see individual samples is telling you that what you are looking at is digital data, not the smooth analogue waveform that is going to come out of your DAC. Every DAC has different filters, different processes and different analogue circuitry. There is no way for your software to know what is going to happen to the digital datastream once it has passed out of RAM and been routed to a DAC.

A squared-off wave usually indicates that clipping has occurred somewhere in the recording or mixing process. Clipping is when the amplitude of the waveform exceeds 0dBFS (digital full scale). Normally there is an obvious click or digital distortion when the signal has been clipped.
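
Nick, if you want to test the section you're describing for clipping, a rough check is to scan it for runs of samples sitting at or very near digital full scale. Something like the sketch below should do it (the filename is a placeholder for your 24/96 file, and the 0.999 threshold is an arbitrary choice):

Code:

import numpy as np
import soundfile as sf

# Placeholder filename - point this at the 24/96 rip.
x, fs = sf.read("track_2496.wav")
if x.ndim > 1:
    x = x[:, 0]
opening = x[:int(0.15 * fs)]      # the first 0.15 s under discussion

threshold = 0.999                 # "at or near" digital full scale
at_ceiling = np.abs(opening) >= threshold

# Longest run of consecutive full-scale samples.
longest, current = 0, 0
for hit in at_ceiling:
    current = current + 1 if hit else 0
    longest = max(longest, current)

print("samples at/near full scale:", int(at_ceiling.sum()))
print("longest consecutive run:", longest)
# More than two or three samples in a row pinned at full scale is the
# classic signature of clipping somewhere upstream.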
 
Apr 1, 2008 at 9:16 PM Post #135 of 773
Quote:

Originally Posted by CyberTheo
Not exactly randomly though. How a system sounds is largely subjective. Everyone's ears are different, and each of us has our own perception of "optimal fidelity".


Everyone's ears are different on different days and in different situations. That's why using them as your only guide will lead you back and forth randomly.

Optimal fidelity has nothing to do with how we perceive sound. It is the ability to reproduce recorded sound as closely as possible to the way it was intended. You do that by aiming for a high signal-to-noise ratio, low distortion, accurate dynamics and a balanced frequency response.

Everything is tempered by practicality. For example, it might be possible to reproduce beyond the range of human hearing, but why waste effort on things you can't hear?

If you want me to just say, "Whatever floats your boat is OK for yourself" then consider it said. But if you want to offer advice to other people about how they might achieve optimal sound with their particular ears, you're going to be a lot more useful to them if you stick to objective things than if you dwell on subjective ones that probably don't apply to them.

See ya
Steve
 
