What it means to go beyond 16 bits of resolution
Jun 6, 2005 at 10:01 PM Thread Starter Post #1 of 23

tangent

Top Mall-Fi poster. The T in META42.
Formerly with Tangentsoft Parts Store
I just read an article that gave many examples of what can be done with just over 17 bits of resolution. Read it, so that the next time you're trying to compare the relative merits of an 18-bit DAC to a 20-bit DAC, you'll have an idea of how silly such specs are.
 
Jun 6, 2005 at 10:58 PM Post #3 of 23
Quote:

Originally Posted by dsavitsk
It's cute, but the analogies don't necessarily hold up.


Your objection is weak. You just place the human ear on a golden pedestal, as though that's a sufficient argument.

What does it mean that the ear is a delicate instrument? That there is no limit at all? Certainly not. We run into practical physical limits before we get to the -144 dB noise floor "promised" by the current hi-res formats. A single 24 kΩ resistor generates more thermal noise than you can tolerate for true 24-bit resolution relative to a 1 Vrms signal. The resistances in a typical audio chain add up to a lot more than that, and you have to add all the other noise sources on top of that.
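A back-of-the-envelope check of that resistor claim, using my own assumed numbers (room temperature, a 20 kHz audio bandwidth, 1 Vrms full scale), not anything from the article:

```python
import math

k_B = 1.38e-23   # Boltzmann constant, J/K
T   = 300.0      # assumed room temperature, K
R   = 24e3       # the 24 kOhm resistor in question
B   = 20e3       # assumed 20 kHz audio bandwidth

v_noise = math.sqrt(4 * k_B * T * R * B)     # Johnson (thermal) noise, Vrms
noise_db = 20 * math.log10(v_noise / 1.0)    # relative to a 1 Vrms signal
effective_bits = (-noise_db - 1.76) / 6.02   # ideal-converter rule of thumb

print(f"thermal noise: {v_noise * 1e6:.2f} uVrms ({noise_db:.0f} dB re 1 Vrms)")
print(f"that one resistor alone limits you to roughly {effective_bits:.1f} bits")
```

That lands around -111 dB, or roughly 18 usable bits, well short of the -144 dB a true 24-bit format implies.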

Where's the limit? Just going from 16 to 17 bits doubles the number of states you can encode; the growth is geometric. Going from 8 bits to 16 bits is a huge improvement. Going from 16 to 24 is a bigger improvement in pure numerical terms, but it's not necessarily as big an improvement in practical terms.
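To spell out that geometric growth (trivial arithmetic, nothing taken from the article):

```python
# Each added bit doubles the number of encodable levels.
for bits in (8, 16, 17, 20, 24):
    print(f"{bits:2d} bits -> {2 ** bits:>12,} distinct levels")
```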
 
Jun 6, 2005 at 11:03 PM Post #4 of 23
All my DACs are 16, 18, and 20 bit, and even though I have built 24-bitters, I am more than satisfied with 18.

But on the A/D converter side, nothing less than 24 bits will do for an actual live recording straight off the mics, just for the headroom it allows. The "actual" usable resolution is really about 20 bits.
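To put rough numbers on the headroom argument (assumed figures, just for illustration: the 6.02N + 1.76 dB rule for an ideal converter, minus a guessed 20 dB tracking margin):

```python
# Rough dynamic-range vs. headroom arithmetic for an ideal converter.
def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an ideal N-bit converter."""
    return 6.02 * bits + 1.76

headroom_db = 20.0   # assumed safety margin kept below full scale while tracking

for bits in (16, 20, 24):
    usable = dynamic_range_db(bits) - headroom_db
    print(f"{bits}-bit ADC: {dynamic_range_db(bits):.1f} dB total, "
          f"~{usable:.1f} dB left after {headroom_db:.0f} dB of headroom")
```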

Digital is a pain in the butt to do, and even though I have been successful, I can in no way honestly say I truly understand it.
It is here to stay, so I make the best of what is offered and deal with it the best way I know how.
 
Jun 6, 2005 at 11:10 PM Post #5 of 23
Quote:

Originally Posted by rickcr42
But on the A/D converter side, nothing less than 24 bits will do for an actual live recording straight off the mics, just for the headroom it allows.


No objection on the recording side. I'm only talking about the playback side.
 
Jun 7, 2005 at 12:29 AM Post #7 of 23
While I agree with what's been said, I also agree with dsavitsk. An example would be the limit of human hearing, 20 kHz. The tweeters in B&W's top-end speakers recently had an overhaul that raised the breakup frequency from 30 kHz to 70 kHz, which had no measurable effect on linearity in the audible range. Yet the new speakers have a significantly more musical top end.

I'm not sure exactly what I'm trying to say, but it's something like this: although the ear has its limits, what we do beyond the resolution of the ear can have an audible effect as well.

That said, I also agree with tangent's idea that ultimately, once all the other circuitry is added, the extra resolution becomes unusable anyway.
 
Jun 7, 2005 at 12:39 AM Post #8 of 23
Quote:

Originally Posted by tangent
Your objection is weak. You just place the human ear on a golden pedestal, as though that's a sufficient argument.


Not at all. There are clearly limits. I am simply saying that there is nothing about the analogies to suggest where those limits are. Maybe they are at 16 bits, but maybe at 18, or maybe 14. While I don't know, I am not going to rely on these analogies to make that determination.
 
Jun 7, 2005 at 2:08 AM Post #9 of 23
Following the same logic as the article, we'd all still be using 16-bit OSes. I mean, who really ever uses a whole megabyte of address space? Keep in mind that each bit doubles the size, so going from 16-bit to 24-bit isn't 50% more precise, it's 256 times as many values, about 25,500% more. (Someone correct my math, if I'm wrong.)
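Checking that math with a throwaway script (nothing assumed beyond powers of two):

```python
# How many values does each word size encode, and how do 16 and 24 bits compare?
levels_16 = 2 ** 16                 # 65,536
levels_24 = 2 ** 24                 # 16,777,216
ratio = levels_24 // levels_16      # 256x as many values
percent_more = (ratio - 1) * 100    # 25,500% more

print(f"16-bit: {levels_16:,} values; 24-bit: {levels_24:,} values")
print(f"{ratio}x as many, i.e. {percent_more:,}% more")
```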

Remember the head-fi motto, "Because I can"
 
Jun 7, 2005 at 5:00 AM Post #10 of 23
Quote:

Originally Posted by __redruM
Following the same logic as the article, we'd all still be using 16bit OSes.


Absolutely 100% wrong.

The article isn't comparing 4-bit to 20-bit (for example); it is comparing 16-bit to 20-bit.

The closest parallel to the article's logic that I can come up with is that we shouldn't be using 1024-bit OSes.

Rob.
 
Jun 7, 2005 at 1:55 PM Post #11 of 23
Quote:

Originally Posted by __redruM
Following the same logic as the article, we'd all still be using 16-bit OSes. I mean, who really ever uses a whole megabyte of address space...


16-bit addressing is tiny. That's not enough even for little DSP chips.

But a 16-bit OS refers to a lot more than the addressable space. It is also the width of an instruction: by the time you include the opcode and the parameters, 16 bits are gone in a hurry. Of course you could always use multiple instructions to do one operation, but that would hinder processing speed.

edit: About the article, I think dsavitsk got it perfect. It's a cute little analogy, but the entire argument is flawed and, frankly, meaningless. Small quantities matter these days.
 
Jun 7, 2005 at 2:34 PM Post #12 of 23
Quote:

Originally Posted by robzy
Absolutely 100% wrong.

The article isn't comparing 4-bit to 20-bit (for example); it is comparing 16-bit to 20-bit.

The closest parallel to the article's logic that I can come up with is that we shouldn't be using 1024-bit OSes.

Rob.



So the article can use extreme, unrelated examples but I can't? Who sets these rules?

Read the article again; it isn't comparing 16-bit to 20-bit. It describes how a 24-bit system gives 17.24 bits of "flicker-free" data.

I'm not an ADC expert, so maybe I'm missing something.
 
Jun 7, 2005 at 2:45 PM Post #13 of 23
Quote:

So the article can use extreme, unrelated examples but I can't? Who sets these rules?


Not if those rules have nothing to do with the context of the discussion. Just as measuring an amp has no correlation to the end sound, comparing computer requirements to audio requirements has no relationship at all.
Better to compare analog to digital (also nonsense), which would at least be comparing two audio formats.

16 bits is fine for music REPRODUCTION, all the hoopla aside, even though it takes an 18- or 20-bit DAC to get there, but it's not even close for the recording end, where 48 or even 96 bits is about right.

A computer resolves data far differently than the human ear does. A lost bit could mean a serious error, but in music we have bit errors all the time and mostly never even notice. The "missing bit" sonics are lower on the scale of audibility than simply swapping from Senns to Grados: the music is recognizable for what it is, but still the two do NOT sound the same. In computers, X must ALWAYS equal X.
 
Jun 7, 2005 at 2:47 PM Post #14 of 23
Before digitizing analog audio for myself, I had always thought that the 44.1 kHz sample rate was the main weakness of CD audio, not the 16-bit depth. I was wrong.

There's a right and a wrong way to digitize analog audio for CD.

The right way: Digitize to 24 bits. Make all intermediate calculations at 32 bits. Dither down to 16 bits for output.

In other words, use a software option that rounds to one of the two adjacent 16-bit values at random, rather than simply to the nearest 16-bit value. There are various interpretations of "at random", but all are said to effectively sound like rounding to the nearest 19-bit value.

The wrong way: Make all intermediate calculations at 16 bits, or round to the nearest 16-bit value. There is actually software out there, e.g. for removing record scratches and tape hiss, that insists on working at 16-bit precision only. Such software is utter trash.
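Here's a minimal sketch of the difference, assuming TPDF dither, which is one common reading of "at random"; it's an illustration, not the exact algorithm any particular editor uses:

```python
# Quantize a float signal to 16 bits, with plain rounding vs. TPDF dither.
# Illustration only: real mastering tools usually add noise shaping on top.
import math
import random

def quantize_plain(sample: float) -> int:
    """The wrong way: round straight to the nearest 16-bit value."""
    return max(-32768, min(32767, round(sample * 32767)))

def quantize_tpdf(sample: float) -> int:
    """The right way: add +/-1 LSB of triangular dither before rounding."""
    dither = random.random() - random.random()   # TPDF, spans -1..+1 LSB
    return max(-32768, min(32767, round(sample * 32767 + dither)))

# A very quiet 1 kHz tone (about -90 dBFS), where truncation distortion is ugliest.
fs = 44100
amp = 10 ** (-90 / 20)
tone = [amp * math.sin(2 * math.pi * 1000 * n / fs) for n in range(fs)]

plain    = [quantize_plain(s) for s in tone]
dithered = [quantize_tpdf(s) for s in tone]
print("distinct output codes without dither:", len(set(plain)))
print("distinct output codes with dither:   ", len(set(dithered)))
```

Without dither, the quantization error is correlated with the signal and shows up as distortion; with dither it becomes a low, steady noise floor, which is why quiet passages keep their texture.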

One can hypothesize all night long about why dithering works, but I could hear a dramatic difference.

I'd venture to say that most of what we don't like in the sound of our rigs is the 16-bit source material. Read what anyone says after listening to decent 24-bit DVD-Audio. I can't wait for storage capacities to increase and lawyers to retire so we can listen to 24-bit iPods.

Meanwhile, I made the assertion years ago on various forums that we should be compressing audio directly from 24-bit sources, for better sound quality. Most audio compression formats don't make any internal reference to the bit depth; they just store approximate information about the waveform as continuous data. From there, all it would take is one enterprising company designing an audio player that uses more than 16 bits in playback to get even better sound quality.

I dispute any contention that the 17th bit doesn't matter. Nothing in the linked article surprises me or seems relevant. Our ears are amazingly sensitive systems.
 
