Is a high compression lossless codec possible?

Mar 3, 2009 at 4:17 PM Post #16 of 22
Yeah, we reached the point of diminishing returns many years ago.
So from now on I guess we will see improvements of 0.X% rather than X%.
 
Mar 12, 2009 at 3:05 PM Post #17 of 22
Quote:

Originally Posted by rimrocks
I wouldn't hold your breath. Twenty years ago lossless got about 2:1, and that ratio has hardly changed since then.
Fundamentally, the question is: "How different from noise is music?" You can't compress noise. So the more tonal the music, the more easily it compresses. Therefore different types or genres of music will have different compression ratios, but overall a 50% reduction seems to be about all we're going to get.
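The quoted point, that noise resists compression while tonal material does not, is easy to demonstrate by trial-compressing one second of a pure tone versus one second of white noise. zlib here is only a crude stand-in for a real lossless audio codec (FLAC's linear prediction does far better on audio), but the gap makes the point:

```python
import array
import math
import random
import zlib

random.seed(0)
RATE = 44100  # CD sample rate; one second of 16-bit mono

# A pure 440 Hz tone: completely predictable, so it compresses well.
tone = array.array("h", (int(20000 * math.sin(2 * math.pi * 440 * n / RATE))
                         for n in range(RATE)))

# White noise: nothing to predict, so it barely compresses at all.
noise = array.array("h", (random.randint(-20000, 20000) for _ in range(RATE)))

ratios = {}
for name, signal in (("tone", tone), ("noise", noise)):
    raw = signal.tobytes()
    ratios[name] = len(zlib.compress(raw, 9)) / len(raw)
    print(f"{name}: compressed to {ratios[name]:.0%} of original size")
```

On a run of this sketch the tone shrinks to a small fraction of its original size while the noise stays close to 100%, which is the "how different from noise is music?" question in miniature.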



Wouldn't it be very interesting if recording engineers, as part of mastering an album, ran a compression profile on the recording and published recommended compression settings for best audio quality? Instead of one-size-fits-all compression, we could encode each recording in its recommended mode and get good audio quality while maximizing storage space, all at once.

Is it possible, say, to write a program that examines an incoming uncompressed music signal and calculates which compression setting would serve the music best?
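A minimal sketch of such an analyzer, assuming you accept a rough proxy: trial-compress a short excerpt and map the result onto an effort level. The function names, thresholds, and the 0-8 scale (loosely modelled on FLAC's -0 .. -8 switches) are all invented for illustration, and zlib again stands in for a real audio codec:

```python
import zlib


def estimate_ratio(pcm: bytes, probe_len: int = 1 << 16) -> float:
    """Trial-compress a short excerpt to gauge how noise-like the signal is.

    A ratio near 1.0 means noise-like audio, where heavier settings buy
    almost nothing; a low ratio means tonal, highly predictable audio.
    """
    probe = pcm[:probe_len]
    return len(zlib.compress(probe, 9)) / len(probe)


def recommend_setting(pcm: bytes) -> int:
    """Map the estimated compressibility onto a hypothetical 0-8 effort
    level (the thresholds here are invented, not tuned)."""
    ratio = estimate_ratio(pcm)
    if ratio > 0.95:   # essentially noise: use the fastest setting
        return 0
    if ratio > 0.7:    # mixed material: middle of the road
        return 4
    return 8           # very predictable: maximum effort pays off
```

Fed a pure tone this sketch recommends the heaviest setting, and fed white noise the lightest, which is the behaviour the question asks for.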
 
Mar 12, 2009 at 10:56 PM Post #19 of 22
Quote:

Originally Posted by Jeff Guidry
Is it possible, say, to write a program that examines an incoming uncompressed music signal and calculates which compression setting would serve the music best?


Great idea. This ought to be very possible: an objective measure of the lossless compression ratio.

A slightly different slant: on the high-def video discs, people often wonder how they should compare the efficiency of the two competing lossless audio formats. It's easy: just look at the video.
 
Mar 13, 2009 at 3:40 PM Post #20 of 22
Quote:

Originally Posted by Jeff Guidry
Is it possible, say, to write a program that examines an incoming uncompressed music signal and calculates which compression setting would serve the music best?


Yeah, multi-pass (2- or 3-pass) encoding sounds like a nice idea.
From what I have read online, it's already incorporated in lossy encoders (e.g. Nero Digital AAC): on the first pass the encoder analyzes the audio data, then uses that analysis to distribute the bitrate differently on the second pass, just like multi-pass video encoders do.

Unsure if it makes sense on lossless compression though.
 
Mar 14, 2009 at 3:41 AM Post #21 of 22
What you are talking about, krmathis, is not quite what I had in mind. I know that if you set a certain bitrate target, a multi-pass encoding scheme can distribute more bitrate to complex passages and less to simpler ones to keep a consistent average. What I am talking about is having a program analyze the uncompressed stream and determine which bitrate would give the best quality signal for the least amount of data.

What I am thinking of is an analysis that rates how true the compressed signal is to the original, combined with how small the file is. For example, on a scale of 1 to 10, a lossless encode might score, say, a 7: it stays perfectly true to the original, but it requires a large file. A 128 kbps lossy encode might score the same 7, because although it is much adulterated from the original signal, the reduction in file size makes up the difference. Say a 192 kbps lossy encode scores a 5, while a 256 kbps lossy encode scores the highest at a 9, making it the ideal compromise between quality and file size.

That way, you would know at a glance how much compression you need to maximize your PDAP's storage space while still ensuring good quality.
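One possible sketch of such a 1-10 score, as a simple weighted blend of fidelity and space saving. The function, its inputs, and the weighting are all invented for illustration; a real fidelity number would need a perceptual measurement, not a guess:

```python
def score(fidelity: float, size_fraction: float, weight: float = 0.5) -> float:
    """Rate an encode on a 1-10 scale by blending two factors:

    fidelity:      0.0 (nothing left of the original) .. 1.0 (bit-identical)
    size_fraction: compressed size as a fraction of the uncompressed size
    weight:        how much fidelity matters relative to saving space
    """
    space_saving = 1.0 - size_fraction
    return 1.0 + 9.0 * (weight * fidelity + (1.0 - weight) * space_saving)


# Lossless at 2:1 (fidelity 1.0, half the original size) and a 128 kbps
# lossy encode (fidelity guessed at 0.6, at 128/1411 of the CD bitrate)
# land close together on this scale, roughly as the example describes:
lossless_score = score(1.0, 0.5)          # 7.75
lossy_128_score = score(0.6, 128 / 1411)  # about 7.8
```

The `weight` knob is where listener preference comes in: an archivist would push it toward 1.0, someone cramming a small player would push it toward 0.0.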
 
Mar 14, 2009 at 9:26 AM Post #22 of 22
What I mentioned was how it's done in lossy codecs, which can't be applied directly to lossless codecs, since the output has to be lossless and the encoder can't redistribute bits at all.
It may be possible to analyze the data on a first pass to determine how much you would gain from a more complex algorithm (e.g. FLAC -8 vs. -0), compared to the estimated extra encoding time, then compress on the second pass using the "best" compromise (file size vs. encoding time).
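That two-pass idea can be sketched like this: trial-compress an excerpt at every effort level on the first pass, then pick the level whose size/time tradeoff fits a budget. zlib's levels 1-9 stand in for FLAC's -0 .. -8 here, and `pick_level` with its time budget is made up for illustration:

```python
import time
import zlib


def pick_level(pcm: bytes, probe_len: int = 1 << 16,
               time_budget: float = 0.05) -> int:
    """First pass: trial-compress an excerpt at each zlib level (standing
    in for FLAC -0 .. -8), noting output size and encoding time.  The
    caller would then run the real second pass at the returned level.

    Chooses the smallest output among the levels whose extra time over
    the fastest level stays within the budget (seconds, for the excerpt).
    """
    probe = pcm[:probe_len]
    trials = []
    for level in range(1, 10):
        start = time.perf_counter()
        size = len(zlib.compress(probe, level))
        trials.append((level, size, time.perf_counter() - start))

    fastest = min(t for _, _, t in trials)
    affordable = [(size, level) for level, size, t in trials
                  if t - fastest <= time_budget]
    return min(affordable)[1]  # smallest size within the time budget
```

Since the excerpt is small relative to the whole file, the first pass costs little, which is what makes the compromise worth computing at all.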
 
