Quote:
Originally Posted by tuoppi
Simply put, your computer doesn't send the volume information separately. When the music is 16-bit, only those 16 bits are sent out. At 100% volume the music fills the 16-bit range as it normally would; nothing changes. When you lower the volume the music is still sent out as 16 bits, but it no longer uses all of them (or rather, all of the available bit combinations); the music itself has to be changed to fit.
We can simplify this to make it easier to understand. Assume your music can have 8 different levels. At 100% volume the music can use any level from 1 to 8. When you lower the volume, your computer tells the music not to use 7 or 8, and the music is changed so that it fits between 1 and 6. In the latter case 8 levels are still sent out, but 7 and 8 are never used.
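the quote's point can be shown with a toy Python snippet (an illustration only, not real driver code): scaling a 16-bit sample down and re-quantizing leaves the top sample values unused.

```python
# Toy illustration of digital volume control on 16-bit samples.
# At 50% volume, even a full-scale sample can no longer reach the top
# codes, so the signal effectively uses fewer of the 65,536 levels.

FULL_SCALE = 32767  # largest positive 16-bit sample value

def attenuate(sample: int, volume: float) -> int:
    """Scale a 16-bit sample by a 0.0-1.0 volume factor and re-quantize."""
    return int(sample * volume)

print(attenuate(FULL_SCALE, 1.0))  # 32767: every level is still reachable
print(attenuate(FULL_SCALE, 0.5))  # 16383: the upper half of levels goes unused
```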
it takes a lot of signal to completely fill the 16-bit range, and once those 16 bits are full, the resulting sound would be awful and extremely fatiguing.
(picture four songs playing at once through the same speakers)
in the simplified example from the quote above, levels 9-20 would be 24-bit territory.
there are four parameters to start from:
1. the highest allowed decibel level
2. the lowest allowed decibel level (-140 dB)
3. the lowest frequency allowed
4. the highest frequency allowed
the above four help dictate what bit-depth is used.
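for reference, the usual rule of thumb ties bit depth to dynamic range: the theoretical range of an N-bit quantizer is 20*log10(2^N), roughly 6 dB per bit, which gives about 96 dB for 16-bit and about 144 dB for 24-bit (close to the -140 dB figure above).

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an N-bit quantizer: 20*log10(2**N)."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # ~96.3 dB
print(round(dynamic_range_db(24), 1))  # ~144.5 dB
```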
now the X-Fi 24-bit Crystalizer does an amazing thing when used properly.
say, for instance, you have a 16-bit song with some peaks that fall outside the 16-bit range (fixing them with processing would ruin the entire track).
nowadays producers can take those peaks and pull them down until they reach a decibel level that fits within the 16-bit depth.
the 24-bit Crystalizer then takes those 'squished' peaks and puts them right back where they were before audible output.
it is really a compression and de-compression method for 24-bit sound.
but thinking logically, the 24-bit Crystalizer is only for 16-bit audio that has some peaks reaching into the next bit depth.
eventually the 16-bit track would fill up and there would be no room left for any data, and that is with all the sounds that fell into the next bit-depth category plus the other data that already resides in the 16-bit depth.
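the 'squish then restore' idea above can be sketched as a toy compressor/expander pair in Python. to be clear, this only illustrates the concept as described; Creative's actual Crystalizer DSP is proprietary, and the threshold and ratio here are made-up numbers.

```python
import math

THRESHOLD = 24000.0  # hypothetical ceiling below which peaks must fit
RATIO = 4.0          # hypothetical compression ratio applied above it

def squish(sample: float) -> float:
    """Compress the portion of a peak that exceeds the threshold."""
    if abs(sample) <= THRESHOLD:
        return sample
    over = abs(sample) - THRESHOLD
    return math.copysign(THRESHOLD + over / RATIO, sample)

def restore(sample: float) -> float:
    """Invert the compression, putting the peak back where it was."""
    if abs(sample) <= THRESHOLD:
        return sample
    over = abs(sample) - THRESHOLD
    return math.copysign(THRESHOLD + over * RATIO, sample)

peak = 36000.0            # a peak that would clip a plain 16-bit range
stored = squish(peak)     # 27000.0: now fits under the 16-bit ceiling
print(stored, restore(stored))  # the round trip recovers the original peak
```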
that is why you can't take a full 24-bit track, compress it into 16-bit, and then unpack it back to 24-bit again.
there isn't enough room for the full 24-bit track.
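the 'not enough room' point follows from the bit counts alone:

```python
levels_16 = 2 ** 16  # 65,536 distinct sample values in 16-bit
levels_24 = 2 ** 24  # 16,777,216 distinct sample values in 24-bit

# 24-bit has 256 times as many levels, so a full 24-bit track simply
# cannot be represented losslessly inside a 16-bit container.
print(levels_24 // levels_16)  # 256
```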
think of it like 16-bit music 'over-clocked'.
decibel peaks in both directions are the easy part; extending the frequency response is much, much more complex (approaching impossible).
only a very pathetic version of 24-bit audio is possible over a 16-bit transfer.
(like only 20% of what true 24-bit audio is capable of)
but once music moves to 24-bit as the norm, there will again be peaks into the next bit depth, and the whole process will repeat.