Lossless Compression: exactly what is discarded?
Jun 5, 2004 at 11:59 PM Post #2 of 10
Nothing is discarded; that's why it's called lossless. Rather, "number crunching" and other similar techniques are used to squeeze the file to a smaller size. As a very simple example, a pattern like "10 10 10 10 10 10 10 10" in the file could be represented as "8 10" (eight 10's in a row). That's not exactly how it's done, but you get the idea...
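The "8 10" example is essentially run-length encoding. A minimal sketch in Python (illustrative only; real audio codecs use more sophisticated schemes than this):

```python
def rle_encode(data):
    """Collapse runs of equal values into (count, value) pairs."""
    encoded = []
    for value in data:
        if encoded and encoded[-1][1] == value:
            # Same value as the current run: bump its count.
            encoded[-1] = (encoded[-1][0] + 1, value)
        else:
            # New value: start a run of length 1.
            encoded.append((1, value))
    return encoded

def rle_decode(pairs):
    """Expand (count, value) pairs back into the original sequence."""
    out = []
    for count, value in pairs:
        out.extend([value] * count)
    return out

samples = [10, 10, 10, 10, 10, 10, 10, 10]
packed = rle_encode(samples)            # [(8, 10)] — "eight 10's in a row"
assert rle_decode(packed) == samples    # round-trips exactly: nothing lost
```

The point is the round-trip: decoding reproduces the input bit for bit, so the smaller representation discards nothing.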
 
Jun 6, 2004 at 12:29 AM Post #4 of 10
No, transcoding is converting from one compressed (encoded) format to another.

Call it whatever you like (lossless compression is an accurate term), it's similar to PKZIP for audio. A .zip of a text file will be much smaller than the original, but can be decompressed again and reproduce the text file exactly as it was originally.
 
Jun 6, 2004 at 12:44 AM Post #5 of 10
Red Book audio stores each sample as a 16-bit value; however, unless a sample is at the loudest possible level (i.e. all 16 bits set), there is unused dynamic range (or empty space). Lossless encoders, as I understand it, try to pack all that empty space into as few bits as possible. This is why a quiet classical piece will compress to a much lower bitrate losslessly than a highly compressed (and comparatively very loud) pop or rock recording does.
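A toy illustration of that "empty space" idea (made-up sample values, not any real codec's math): a quiet sample's magnitude needs far fewer than 16 bits, and that headroom is what a lossless encoder can squeeze out.

```python
def bits_needed(sample):
    """Minimum bits for a signed sample: sign bit plus magnitude bits."""
    return abs(sample).bit_length() + 1

quiet = [12, -40, 25, -7]           # small magnitudes, e.g. a soft classical passage
loud  = [30000, -29500, 32100]      # near full scale, e.g. a loud pop master

print(max(bits_needed(s) for s in quiet))   # far fewer than 16
print(max(bits_needed(s) for s in loud))    # close to the full 16
```

A fixed 16 bits per sample wastes most of those bits on the quiet passage; that waste is recoverable without losing anything.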
 
Jun 6, 2004 at 3:49 AM Post #6 of 10
Why, exactly, does the Monkey's Audio format have an option for quality? Wouldn't such an encoder be programmed to "rewrite" as described in the most efficient manner possible, making levels of compression meaningless? Or do the higher compression settings actually lose something that the lower ones maintain?
 
Jun 6, 2004 at 4:13 AM Post #7 of 10
The compression settings have to do with how long it takes to encode/decode the files: the more time allowed (i.e. a higher compression level), the more CPU power it takes. Any level of compression is still lossless; the higher settings just do heavier computation to squish the exact same music data into a smaller file size. If you have a 2+GHz CPU like me, it's all moot, since you can use the highest compression level all the time.
 
Jun 6, 2004 at 4:14 AM Post #8 of 10
Quote:

Originally Posted by Iron_Dreamer
Red Book audio stores each sample as a 16-bit value; however, unless a sample is at the loudest possible level (i.e. all 16 bits set), there is unused dynamic range (or empty space). Lossless encoders, as I understand it, try to pack all that empty space into as few bits as possible. This is why a quiet classical piece will compress to a much lower bitrate losslessly than a highly compressed (and comparatively very loud) pop or rock recording does.


Yeah, no kidding... Britney's In the Zone is HUGE!!!
compared to the file sizes of Tchaikovsky or Rachmaninov...
 
Jun 6, 2004 at 7:53 AM Post #9 of 10
Lossless encoders are much more refined. They try, for example, to predict the next value on the basis of previous samples, then store just the difference between the predicted and actual sample. If the prediction algorithm is smart enough, most of the differences are very small numbers, and thus take much less space.
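A minimal sketch of that predict-and-store-the-difference idea, assuming the simplest possible predictor ("next sample ≈ previous sample"; real codecs like FLAC fit higher-order predictors and then entropy-code the residuals):

```python
def encode_residuals(samples):
    """Store the first sample, then each sample's difference from the previous one."""
    residuals = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        residuals.append(cur - prev)    # prediction error (residual)
    return residuals

def decode_residuals(residuals):
    """Undo the prediction: a running sum rebuilds the exact samples."""
    samples = [residuals[0]]
    for r in residuals[1:]:
        samples.append(samples[-1] + r)
    return samples

wave = [1000, 1004, 1010, 1013, 1011]   # smoothly varying audio samples
res = encode_residuals(wave)            # [1000, 4, 6, 3, -2] — mostly small numbers
assert decode_residuals(res) == wave    # perfectly reversible
```

The residuals carry the same information as the samples but cluster near zero, so they can be stored in fewer bits; decoding is an exact inverse, which is what keeps the scheme lossless.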
 
Jun 6, 2004 at 8:07 AM Post #10 of 10
Quote:

With lossless compression, exactly what is discarded to achieve the compressed file size?


The simple and complete answer is that "redundancy" is discarded, and that's basically at the heart of any lossless compressor. Lossy compression takes that one step further and tries to predict what can be thrown away while making the least perceivable difference. (I wrote my first compression program my sophomore year in high school.)
 
