Lossless Compression: exactly what is discarded?
Jun 5, 2004 at 11:48 PM Thread Starter Post #1 of 10

mshan

1000+ Head-Fier
Joined
Mar 12, 2004
Posts
1,470
Likes
11
With lossless compression, exactly what is discarded to achieve the compressed file size?
 
Jun 5, 2004 at 11:59 PM Post #2 of 10

fewtch

Headphoneus Supremus
Joined
Jul 23, 2003
Posts
9,559
Likes
33
Nothing is discarded; that's why it's called lossless. Rather, "number crunching" and other similar techniques are used to squeeze the file down to a smaller size. As a very simple example, a pattern like "10 10 10 10 10 10 10 10" in the file could be represented as "8 10" (eight 10's in a row). That's not exactly how it's done, but you get the idea...
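Here's a toy run-length encoding sketch of that "eight 10's in a row" idea (illustrative Python only; real lossless audio codecs use far more sophisticated techniques):

```python
# Toy run-length encoding: runs of identical values become (count, value) pairs
# and are expanded back exactly. Purely illustrative, not an actual audio codec.

def rle_encode(samples):
    """Collapse runs of identical values into [count, value] pairs."""
    encoded = []
    for value in samples:
        if encoded and encoded[-1][1] == value:
            encoded[-1][0] += 1          # extend the current run
        else:
            encoded.append([1, value])   # start a new run
    return encoded

def rle_decode(encoded):
    """Expand [count, value] pairs back into the original sequence."""
    samples = []
    for count, value in encoded:
        samples.extend([value] * count)
    return samples

data = [10, 10, 10, 10, 10, 10, 10, 10]
packed = rle_encode(data)           # [[8, 10]] -- "eight 10's in a row"
assert rle_decode(packed) == data   # round-trip is exact: nothing was discarded
```

The assert at the end is the whole point of "lossless": the decoder gets back every original value.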
 
Jun 6, 2004 at 12:22 AM Post #3 of 10

mshan

1000+ Head-Fier
Joined
Mar 12, 2004
Posts
1,470
Likes
11
Is this transcoding?
 
Jun 6, 2004 at 12:29 AM Post #4 of 10

fewtch

Headphoneus Supremus
Joined
Jul 23, 2003
Posts
9,559
Likes
33
No, transcoding is converting from one compressed (encoded) format to another.

Call it whatever you like (lossless compression is an accurate term); it's similar to PKZIP for audio. A .zip of a text file will be much smaller than the original, but it can be decompressed again to reproduce the text file exactly as it was originally.
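For instance, Python's zlib module (DEFLATE, the same family of algorithm behind .zip) makes the round-trip easy to demonstrate:

```python
# Sketch of the PKZIP analogy: compress a redundant chunk of text, decompress it,
# and verify the result is byte-for-byte identical to the original.
import zlib

original = b"the quick brown fox jumps over the lazy dog " * 500
compressed = zlib.compress(original, level=9)

print(len(original), "->", len(compressed))     # far smaller than the original
assert zlib.decompress(compressed) == original  # restored exactly as it was
```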
 
Jun 6, 2004 at 12:44 AM Post #5 of 10

Iron_Dreamer

Joined
Mar 28, 2003
Posts
9,530
Likes
81
Location
Los Angeles
Red Book audio is stored as 16-bit samples; however, unless a sample is at full scale (i.e. all 16 bits set), there is unused dynamic range (or empty space). Lossless encoders, as I understand it, try to pack all that empty space into as few bits as possible. This is why a quiet classical piece will compress losslessly to a much lower bitrate than a heavily compressed (and comparatively very loud) pop or rock recording.
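A rough way to see the "unused dynamic range" point (illustrative only, not how FLAC or Monkey's Audio actually allocate bits): quiet samples simply need far fewer bits than the 16 allotted.

```python
# Rough illustration of unused dynamic range: low-level samples need only a few
# bits, which is headroom a lossless encoder can reclaim; near-full-scale samples
# need all 16, leaving little to save.

def bits_needed(sample):
    """Bits required for a signed sample's magnitude, plus one sign bit."""
    return max(1, abs(sample).bit_length() + 1)

quiet_passage = [3, -7, 12, -5, 9]                      # quiet classical-style samples
loud_passage = [30123, -29876, 31850, -32001, 29011]    # near full-scale pop/rock

print([bits_needed(s) for s in quiet_passage])  # e.g. [3, 4, 5, 4, 5]
print([bits_needed(s) for s in loud_passage])   # 16 bits each
```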
 
Jun 6, 2004 at 3:49 AM Post #6 of 10

Distroyed

1000+ Head-Fier
Joined
Dec 29, 2003
Posts
1,089
Likes
12
Why, exactly, does the Monkey's Audio format have an option for quality? Wouldn't such an encoder be programmed to "rewrite" the data, as described above, in the most efficient manner possible, making levels of compression meaningless? Or do the higher compression settings actually lose something that the lower ones retain?
 
Jun 6, 2004 at 4:13 AM Post #7 of 10

Iron_Dreamer

Joined
Mar 28, 2003
Posts
9,530
Likes
81
Location
Los Angeles
The compression settings have to do with how long it takes to encode/decode the files: the more time allowed (i.e. a higher compression level), the more CPU power it takes. Any level of compression is still lossless; the higher settings just do a bit heavier computation to squish the exact same music data into a smaller file size. If you have a 2+ GHz CPU like me, it's all moot, since you can use the highest compression level all the time.
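The same trade-off is easy to see with a general-purpose lossless compressor like Python's zlib (used here as an analogy, not Monkey's Audio itself): higher levels cost more CPU time, yet every level decodes to exactly the same bytes.

```python
# Higher compression levels spend more CPU time and may produce smaller files,
# but the decompressed output is identical at every level -- still lossless.
import time
import zlib

data = bytes(range(256)) * 20000   # ~5 MB of mildly redundant sample data

for level in (1, 6, 9):
    start = time.perf_counter()
    packed = zlib.compress(data, level=level)
    elapsed = time.perf_counter() - start
    assert zlib.decompress(packed) == data   # exact same data at every level
    print(f"level {level}: {len(packed)} bytes in {elapsed:.3f} s")
```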
 
Jun 6, 2004 at 4:14 AM Post #8 of 10

Jasper994

Organizer for Can Jam '09
Joined
Jun 3, 2003
Posts
6,114
Likes
13
Quote:

Originally Posted by Iron_Dreamer
Red Book audio is stored as 16-bit samples; however, unless a sample is at full scale (i.e. all 16 bits set), there is unused dynamic range (or empty space). Lossless encoders, as I understand it, try to pack all that empty space into as few bits as possible. This is why a quiet classical piece will compress losslessly to a much lower bitrate than a heavily compressed (and comparatively very loud) pop or rock recording.


Yeah, no kidding... Britney's In the Zone is HUGE!!!
compared to the file sizes of Tchaikovsky or Rachmaninov...
 
Jun 6, 2004 at 7:53 AM Post #9 of 10

Glassman

Headphoneus Supremus
Joined
Mar 22, 2003
Posts
1,830
Likes
11
Lossless encoders are much more refined... they, for example, try to predict the next value on the basis of previous samples, then store just the difference between the predicted and actual sample. If the prediction algorithm is smart enough, most of the differences are very small numbers, which take much less space.
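A minimal sketch of that prediction idea, assuming the simplest possible predictor (each sample is predicted to equal the previous one; real codecs fit much better predictors and then entropy-code the residuals):

```python
# Toy first-order prediction: store only the difference from the previous sample.
# For a smoothly varying waveform the residuals are small numbers, which can be
# coded in fewer bits, and the decoder rebuilds the samples exactly.

def encode_residuals(samples):
    """Predict each sample as the previous one; keep only the differences."""
    residuals, previous = [], 0
    for sample in samples:
        residuals.append(sample - previous)
        previous = sample
    return residuals

def decode_residuals(residuals):
    """Rebuild the exact samples by accumulating the differences."""
    samples, previous = [], 0
    for diff in residuals:
        previous += diff
        samples.append(previous)
    return samples

waveform = [1000, 1004, 1009, 1011, 1008, 1002]   # a smoothly varying signal
diffs = encode_residuals(waveform)                # [1000, 4, 5, 2, -3, -6]
assert decode_residuals(diffs) == waveform        # perfectly reconstructed
```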
 
Jun 6, 2004 at 8:07 AM Post #10 of 10

Orpheus

Headphoneus Supremus
Joined
Aug 17, 2002
Posts
3,126
Likes
20
Quote:

With lossless compression, exactly what is discarded to achieve the compressed file size?


The simple and complete answer is that "redundancy" is discarded, and that's basically at the heart of any lossless compressor. Lossy compression takes it one step further and tries to predict what can be thrown away while making the least perceivable difference. (I wrote my first compression program my sophomore year in high school.)
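A crude illustration of that distinction (hypothetical Python, not tied to any particular codec): the lossless path round-trips exactly, while even the simplest "lossy" step, zeroing the low bits of each sample, loses data for good.

```python
# Lossless: redundancy is squeezed out, but every byte comes back.
# Lossy (crude stand-in): low-order bits are thrown away and cannot be recovered.
import zlib

samples = bytes([17, 200, 33, 146, 91, 250, 7, 64])

lossless = zlib.decompress(zlib.compress(samples))
assert lossless == samples                      # data intact after the round trip

lossy = bytes(s & 0b11110000 for s in samples)  # discard the 4 least significant bits
assert lossy != samples                         # that information is gone for good
```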
 
