wyager
Head-Fier
- Joined
- Nov 1, 2010
- Posts
- 90
- Likes
- 0
Quote:
Quote: Think about it like this. Let's say that the maximum voltage your iPod can produce is 1.27 volts (as we learned might be more accurate). This means that, with an 8-bit audio file, the minimum voltage change is about .01 V. Now, if we used a 16-bit audio file instead, the sample rate stays the same (44,100 Hz) and the max voltage stays the same, but the minimum voltage change is now about .0000390625 V, which is a much more accurate reproduction of the real sound than 8-bit audio can provide.
Well, that's the question: does it really work that way? And shouldn't we consider the kind of DAC used: 16 bit, 8 bit, or even 24 bit?
Your simple math seems right (well, see below); it's also almost the same as mine in my first posts here.
But I'm no longer sure whether the ADC/DAC and the other analog and digital components involved really handle it that way.
I also have the impression that a 24-bit file on my 801 sounds louder than a 16-bit file at the same volume setting.
And what happens if you plug an amplifier capable of 5 volts or more into your iPod (and let's not forget playing such a song through a desktop amplifier and 200-watt speakers)? Would it just thin the sound?
(... 8 bit = 256 values ... 1.27 / 256 = 0.005 ... or am I missing something?)
That's true, the kind of DAC used may affect the output. You say that a 24-bit file sounds louder, but does it sound 256 times louder? That would mean it is behaving as the OP says. And yes, if that amp increased the voltage, I see no reason why the steps wouldn't become even more noticeable. And yes, in your last equation you are forgetting that the (hypothetical, but realistic) voltage range is actually +1.27 to -1.27 V, which makes a total swing of 2.54 volts, or about .01 V per step.
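To make that arithmetic concrete, here's a quick sketch. The ±1.27 V range is the hypothetical figure from this thread, not a measured iPod spec:

```python
# Quantization step size for a given bit depth, assuming the
# hypothetical +/-1.27 V output range discussed above
# (2.54 V total swing) -- not a measured iPod figure.

V_RANGE = 2.54  # volts, from -1.27 V to +1.27 V

def step_size(bits):
    """Smallest voltage change representable with `bits` of resolution."""
    return V_RANGE / (2**bits - 1)

for bits in (8, 16, 24):
    print(f"{bits:2d}-bit: {step_size(bits):.10f} V per step")
```

With 8 bits this comes out to roughly .01 V per step, matching the figure above; each extra 8 bits shrinks the step by a factor of about 256.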
I'm not sure if this will make sense to anyone here, but the way I think of it, whichever device you're using just puts the MSB of your 24/16/8/whatever-bit sample in the MSB of the DAC's output register and zero-fills the rest. The way the original post describes it would only hold if the LSB of the audio file were matched to the LSB of the DAC (which seems unrealistic).

So, let's say your DAC takes a 24-bit register value (a simple case to match up against 24/16/8-bit audio), and the song you are listening to is a 24-bit audio file. The current sample in 24-bit binary could be 10101010 10101010 10101010 (about .666666627 V at a 1 V full scale). The way I believe it works, if this file were downgraded to an 8-bit file, the first 8 bits of the original audio would be kept and the value in the DAC would be 10101010 00000000 00000000 (.6640625 V). As you can see, the volume change would be minimal (with lower-quality audio).

The way the OP is suggesting, the value would instead be 00000000 00000000 10101010, or about .00001013 V. The volume in this case would be 65,536 times lower than the original 24-bit audio file, with no advantages in terms of processing power or even quality. It just doesn't make sense to me that anyone would choose option 2. Sorry if that was confusing; it just makes sense to me from a low-level software standpoint.
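Here's a sketch of the two alignment options using the same example sample. The 1.0 V full scale (normalizing by 2^24, as the voltages above do) and the variable names are my assumptions:

```python
# Compare the two ways an 8-bit sample could be loaded into a 24-bit
# DAC register, as described above. Full scale is assumed to be 1.0 V
# at code 2**24, matching the voltages quoted in the post.

FULL_SCALE = 1.0                              # volts (assumed reference)
sample_24bit = 0b10101010_10101010_10101010   # the example 24-bit sample

def to_volts(code_24bit):
    """Convert a 24-bit register value to an output voltage."""
    return FULL_SCALE * code_24bit / 2**24

# Truncate to 8 bits by keeping only the most significant byte.
top_byte = sample_24bit >> 16                 # 0b10101010

# Option 1 (MSB-aligned): the byte goes in the top of the register.
msb_aligned = top_byte << 16                  # 10101010 00000000 00000000
# Option 2 (LSB-aligned, the OP's reading): byte goes in the bottom.
lsb_aligned = top_byte                        # 00000000 00000000 10101010

print(f"original    : {to_volts(sample_24bit):.9f} V")  # ~0.666666627 V
print(f"MSB-aligned : {to_volts(msb_aligned):.9f} V")   # 0.664062500 V
print(f"LSB-aligned : {to_volts(lsb_aligned):.9f} V")   # ~0.000010133 V
print(f"ratio       : {msb_aligned // lsb_aligned}x")   # 65536x
```

The MSB-aligned result stays within about 0.4% of the original level, while the LSB-aligned result is exactly 2^16 = 65,536 times quieter, which is the difference between the two readings above.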