It doesn't make sense to me how something can be lossless at 320 kbps and also lossless at 1411 kbps, when both are taken from a digital source. That must mean the sub- and supra-frequencies are rendered redundant, as if humans can't hear or appreciate anything beyond 20 Hz–20 kHz or something.
Lossy compression schemes do use a psychoacoustic model: they discard the data the model predicts we won't miss, particularly in the higher frequencies, which is where the compression artifacts are heard (splash, wah, hollowness).
Lossless compression schemes do not use psychoacoustics. All frequencies are preserved. I don't want to go into the math, nor do I fully understand it (I'm a musician, not a mathematician), but basically, quieter sections and stretches of silence don't need as much data space as uncompressed formats give them. For example, six seconds of silence can be stored as a single numerical value for its length rather than six seconds' worth of samples. And quieter sections have a smaller range of amplitude, so they fit into a smaller data space.
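To make the silence idea concrete, here's a toy sketch (illustration only; real lossless codecs like FLAC use linear prediction plus Rice coding, not this simple run-length scheme). It stores each stretch of zero-valued samples as a single (marker, length) pair, and decompressing gives back exactly the original samples, bit for bit:

```python
def compress(samples):
    """Run-length encode stretches of silence (zero samples)."""
    out = []
    i = 0
    while i < len(samples):
        if samples[i] == 0:
            # Count how long the silence lasts and store one pair:
            # ("silence", length) instead of thousands of zeros.
            run = 0
            while i < len(samples) and samples[i] == 0:
                run += 1
                i += 1
            out.append(("silence", run))
        else:
            out.append(("sample", samples[i]))
            i += 1
    return out

def decompress(encoded):
    """Expand the pairs back into the exact original samples."""
    samples = []
    for kind, value in encoded:
        if kind == "silence":
            samples.extend([0] * value)
        else:
            samples.append(value)
    return samples

# One second of silence at 44.1 kHz, a few quiet samples, more silence.
quiet_track = [0] * 44100 + [12, -7, 3] + [0] * 1000
encoded = compress(quiet_track)
assert decompress(encoded) == quiet_track  # nothing was lost
print(len(quiet_track), "samples ->", len(encoded), "entries")
```

45,103 samples shrink to 5 entries here, and the round trip is exact, which is the whole point: the data is repackaged more cleverly, never discarded.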