Lossless Compression: exactly what is discarded?
Thread Starter
Post #1 of 10

mshan

1000+ Head-Fier
Joined
Mar 12, 2004
Messages
1,470
Reaction score
11
With lossless compression, exactly what is discarded to achieve the compressed file size?
Post #2 of 10

fewtch

Headphoneus Supremus
Joined
Jul 23, 2003
Messages
9,559
Reaction score
24
Nothing is discarded; that's why it's called lossless. Rather, "number crunching" and other similar techniques are used to squeeze the file down to a smaller size. As a very simple example, a pattern like "10 10 10 10 10 10 10 10" in the file could be represented as "8 10" (eight 10's in a row). Not exactly how it's done, but you get the idea...
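To make that concrete, here's a toy run-length sketch in Python (purely illustrative; rle_encode/rle_decode are made-up names, and no real audio codec is this literal):

Code:

def rle_encode(values):
    """Collapse runs of equal values into [count, value] pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][1] == v:
            runs[-1][0] += 1
        else:
            runs.append([1, v])
    return runs

def rle_decode(runs):
    """Expand [count, value] pairs back into the original sequence."""
    out = []
    for count, v in runs:
        out.extend([v] * count)
    return out

data = [10, 10, 10, 10, 10, 10, 10, 10]
packed = rle_encode(data)          # [[8, 10]], i.e. "eight 10's in a row"
assert rle_decode(packed) == data  # the round trip is exact: nothing lost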
Post #3 of 10

mshan

1000+ Head-Fier
Joined
Mar 12, 2004
Messages
1,470
Reaction score
11
Is this transcoding?
Post #4 of 10

fewtch

Headphoneus Supremus
Joined
Jul 23, 2003
Messages
9,559
Reaction score
24
No, transcoding is converting from one compressed (encoded) format to another.

Call it whatever you like (lossless compression is an accurate term); it's similar to PKZIP for audio. A .zip of a text file will be much smaller than the original, but it can be decompressed to reproduce the text file exactly as it was originally.
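You can watch the round trip happen with Python's zlib, which is the same DEFLATE family that PKZIP and .zip use (the repeated text is just a stand-in):

Code:

import zlib

original = b"the quick brown fox jumps over the lazy dog\n" * 50
packed = zlib.compress(original, 9)         # 9 = highest compression level

print(len(original), "->", len(packed))     # 2200 bytes down to a tiny fraction
assert zlib.decompress(packed) == original  # bit-for-bit identical after decode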
Post #5 of 10

Iron_Dreamer

Member of the Trade: HeadAmp
Landscape-Photo-Fi
Organizer for Can Jam '09
Joined
Mar 28, 2003
Messages
9,527
Reaction score
74
Redbook audio stores each sample as a 16-bit value; however, unless a sample is at the loudest possible level (i.e. all sixteen bits set), there is unused dynamic range, or empty space. Lossless encoders, as I understand it, try to squeeze that empty space out and pack each sample into as few bits as possible. This is why a quiet classical piece will compress losslessly to a much lower bitrate than a dynamically compressed (and, comparatively speaking, very loud) pop or rock recording.
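A back-of-the-envelope Python illustration of that (the sample values are invented, and real encoders apply this idea to prediction residuals rather than raw samples):

Code:

def bits_needed(sample):
    """Bits required to represent a signed sample (magnitude plus sign bit)."""
    return max(sample, -sample - 1).bit_length() + 1

quiet = [12, -7, 30, -25]        # made-up samples from a quiet passage
loud = [29000, -31500, 32767]    # made-up samples from a loud passage

print([bits_needed(s) for s in quiet])  # [5, 4, 6, 6]
print([bits_needed(s) for s in loud])   # [16, 16, 16]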
Post #6 of 10

Distroyed

1000+ Head-Fier
Joined
Dec 29, 2003
Messages
1,089
Reaction score
12
Why, exactly, does the Monkey's Audio format have an option for quality? Wouldn't such an encoder be programmed to "rewrite" the data as described in the most efficient manner possible, making levels of compression meaningless? Or do the higher compression settings actually lose something that the lower ones maintain?
Post #7 of 10

Iron_Dreamer

Member of the Trade: HeadAmp
Landscape-Photo-Fi
Organizer for Can Jam '09
Joined
Mar 28, 2003
Messages
9,527
Reaction score
74
The compression settings have to do with how long it takes to encode/decode the files: the higher the compression level, the more CPU power it takes. Any level of compression is still lossless; the higher settings just do a bit heavier computation to squish the exact same music data into a smaller file size. If you have a 2+ GHz CPU like me, then it is all moot, since you can use the highest compression level all the time.
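The same tradeoff is easy to demonstrate with zlib's compression levels (Monkey's Audio isn't scriptable this way; zlib just makes the principle visible). Higher levels spend more CPU time for a smaller file, and every level decodes back exactly:

Code:

import time
import zlib

data = bytes(range(256)) * 4000              # ~1 MB of stand-in data

for level in (1, 9):
    start = time.perf_counter()
    packed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    assert zlib.decompress(packed) == data   # every level round-trips exactly
    print(f"level {level}: {len(packed):,} bytes in {elapsed:.3f} s")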
Post #8 of 10

Jasper994

Organizer for Can Jam '09
Joined
Jun 3, 2003
Messages
6,114
Reaction score
13
Quote:

Originally Posted by Iron_Dreamer
Redbook audio stores each sample as a 16-bit value; however, unless a sample is at the loudest possible level (i.e. all sixteen bits set), there is unused dynamic range, or empty space. Lossless encoders, as I understand it, try to squeeze that empty space out and pack each sample into as few bits as possible. This is why a quiet classical piece will compress losslessly to a much lower bitrate than a dynamically compressed (and, comparatively speaking, very loud) pop or rock recording.


Yeah, no kidding... Britney's In the Zone is HUGE!!!
compared to the file sizes of Tchaikovsky or Rachmaninov...
Post #9 of 10

Glassman

Headphoneus Supremus
Joined
Mar 22, 2003
Messages
1,830
Reaction score
11
Lossless encoders are much more refined than those simple examples. They, for example, try to predict the next value on the basis of previous samples, then store just the difference between the predicted and actual sample; if the prediction algorithm is smart enough, most of the differences are very small numbers, thus taking much less space.
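A minimal Python sketch of that predict-and-store-the-difference idea, using the simplest possible predictor, the previous sample (real codecs such as FLAC fit higher-order predictors):

Code:

samples = [100, 103, 107, 110, 112, 113, 113, 112]   # invented sample values

# Encode: keep the first sample, then store only sample-to-sample differences.
residuals = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
# residuals == [100, 3, 4, 3, 2, 1, 0, -1], mostly tiny numbers that an
# entropy coder (e.g. Rice coding) can store in very few bits.

# Decode: a running sum reconstructs every original sample exactly.
decoded, acc = [], 0
for r in residuals:
    acc += r
    decoded.append(acc)
assert decoded == samples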
Post #10 of 10

Orpheus

Headphoneus Supremus
Joined
Aug 17, 2002
Messages
3,126
Reaction score
15
Quote:

With lossless compression, exactly what is discarded to achieve the compressed file size?


The simple and complete answer is that "redundancy" is discarded, and that's basically at the heart of any lossless compressor. Lossy compression takes that one step further and tries to predict what can be thrown away while making the least perceivable difference. (I wrote my first compression program my sophomore year in high school.)
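For contrast, here's a crude stand-in for that lossy step (just bit truncation, nothing like a real codec's psychoacoustic model):

Code:

samples = [12345, -2321, 407, 30001]   # invented 16-bit sample values

# Zero out the low 4 bits of each sample: blunter data, easier to compress.
lossy = [(s >> 4) << 4 for s in samples]

print(lossy)             # [12336, -2336, 400, 30000], close but not equal
assert lossy != samples  # information was genuinely discarded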