DSGant
New Head-Fier
Joined: Feb 20, 2009
Posts: 22
Likes: 0
Quote:
Originally Posted by Head_case
It doesn't make sense to me how something can be lossless at 320kbps and lossless at 1411kbps, when both are taken from a digital source. That must mean that the sub and supra-frequencies are rendered redundant, as if humans can't hear or appreciate beyond 20-20K or something.
320 kbps (probably) isn't lossless, and neither is 1411 kbps. The former is a lossy compressed file (or a very quiet track in a lossless format); the latter is uncompressed CD audio (it's actually 1411.2 kbps). Lossless files vary in bitrate depending on the material.
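Just to show where that 1411.2 figure comes from, here's the arithmetic for standard CD audio (this is the usual sample rate/bit depth/channel-count calculation, nothing specific to any one format):

```python
# Bitrate of uncompressed CD audio = sample rate x bit depth x channels
sample_rate = 44100   # samples per second (Red Book CD standard)
bit_depth = 16        # bits per sample
channels = 2          # stereo

kbps = sample_rate * bit_depth * channels / 1000
print(kbps)  # 1411.2
```

So a ripped CD track at "1411 kbps" is simply the raw PCM stream, not a compressed one.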
Lossy compression schemes do use a psychoacoustic model: they discard data that a model of human hearing predicts you won't miss. Some data is lost, particularly in the higher frequencies, which is where the compression artifacts are heard (splash, wah, hollowness).
Lossless compression schemes do not use psychoacoustics; all frequencies are preserved. I don't want to go into the details, nor do I have a complete understanding of them (I'm a musician, not a mathematician), but basically, quiet sections and stretches of silence don't need as much data space as uncompressed formats give them. For example, six seconds of silence can be stored as a single silence marker plus one numerical value for its length. And quieter sections cover a smaller range of amplitudes, so they fit into a smaller data space.
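The "store silence as a marker plus a length" idea above is basically run-length encoding. Here's a toy sketch of it; real lossless codecs like FLAC use much more sophisticated prediction and entropy coding, so treat this only as an illustration of the principle:

```python
def rle_encode(samples):
    """Collapse runs of identical sample values into [value, count] pairs."""
    runs = []
    for s in samples:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1        # extend the current run
        else:
            runs.append([s, 1])     # start a new run
    return runs

# Six samples of "silence" (zeros) followed by a few non-zero samples:
signal = [0] * 6 + [3, 3, -1]
print(rle_encode(signal))  # [[0, 6], [3, 2], [-1, 1]]
```

The six silent samples collapse to a single pair, which is why silence and quiet passages shrink so well under lossless compression while the audio itself is reconstructed bit-for-bit.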