You're missing the point. Using the definition of "taking advantage of limitations of human perception" covers everything! Music takes advantage of the limitations of human perception;
No. I shouldn’t have abbreviated the quote. Perceptual Coding is lossy compression that takes advantage of limitations in human perception. In perceptual coding, audio data is selectively removed (compressed) based on how unlikely it is that a listener will notice the removal.
Music is not lossy compression.
In practice, the term "perceptual coding" specifically refers to the use of a perceptual model of frequency and temporal masking (auditory masking) as the reference for deciding which data to remove. It therefore applies only to certain lossy audio codecs, rather than being a meaningless term that applies to everything.
That is true for modern codecs, but it is not a general rule/definition.
1. Maybe the difficulty you were/are having is that you were trying to separate the issue into two different "domains" (amplitude and frequency)? While it's sometimes useful to do this for the sake of "visualization", in reality they are not separate/different domains, they're exactly the same thing. A sine wave (for example) is effectively defined as: An increasing amplitude until a "peak" is reached, then a decreasing amplitude until the "trough" is reached and then an increasing amplitude again until the starting point is reached. We call this a "cycle" and frequency is simply the number of cycles per second. In other words, Frequency = Amplitude (over time).
In your exchange with ILoveMusic, there are several ideas that are unclear, misleading, garbled or incorrect. Not all of them are from you(!) or confined to the above quote(!!); they run through the entire exchange.
If you’re so inclined, perhaps you can comment on the following facts:
1. Time domain (amplitude vs. time) and frequency domain (amplitude or power vs. frequency) are indeed different domains. The same information is represented in different forms. One example of the difference is that it allows one to use a point-by-point product in one domain instead of a convolution in the other (see the first sketch after this list). The analog signal from a microphone, the signal on an analog interconnect, the output of a DAC, or the signal on the speaker wire from an analog amplifier are all voltage amplitude vs. time (time domain). Frequency information is not available unless one transforms the signal, using a spectrum analyzer for analog data or a Fourier transform for digital data. The fact that the unit of frequency is hertz, equivalent to cycles per second, and that the word "second" implies you have time, is beside the point.
2. 16 bit is not perfect. If I take an original signal, convert it to 16 bit resolution, and use that to create a reproduced signal, the original and the reproduction will not be identical. That is, subtracting the two does not give all zeroes (see the second sketch after this list). If the intention is to say that the imperfection is not audible, that is different from saying it is perfect. Yes, I know BigShot is itching to say "in the context of this forum, inaudible is perfect", but "16bit is already perfect there is no "better" than perfect" is misleading.
3. 1 bit delta-sigma coding (used in DSD and SACD) is not the same as 1 bit LPCM coding (the third sketch after this list illustrates the difference). Usually, talking about 16 bits implies linear pulse code modulation.
4. Shannon-Nyquist tells us we need to sample at greater than twice the highest frequency of interest, not at greater than or equal to twice. Sampling at exactly twice the highest frequency is inadequate (the last sketch after this list shows why).
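On point 1, here is a minimal numpy sketch (mine, not from the original exchange) of "same information, different form": a circular convolution computed directly in the time domain equals a point-by-point product of spectra in the frequency domain, transformed back.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)   # a time-domain signal (amplitude vs. sample index)
h = rng.standard_normal(64)   # another time-domain signal, e.g. a filter impulse response
N = len(x)

# Circular convolution computed directly in the time domain.
direct = np.array([sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)])

# The same result via the frequency domain: point-by-point product of the two spectra.
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

print(np.allclose(direct, via_fft))   # True: same information, represented in different forms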
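On point 2, a quick sketch of the non-zero residual. I'm assuming a plain round-to-nearest 16 bit quantizer with no dither; the exact rounding scheme doesn't change the conclusion.

import numpy as np

fs = 48000
t = np.arange(fs) / fs
original = 0.5 * np.sin(2 * np.pi * 997 * t)              # "original" signal at 64-bit float precision

quantized = np.round(original * 32767).astype(np.int16)   # convert to 16 bit resolution
reproduced = quantized.astype(np.float64) / 32767.0       # reproduce a signal from the 16 bit data

residual = original - reproduced
print(np.max(np.abs(residual)))    # roughly half an LSB (~1.5e-5), not zero
print(np.allclose(residual, 0))    # False: subtracting the two does not give all zeroes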
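On point 3, a rough sketch of why 1 bit delta-sigma is not the same thing as 1 bit LPCM. This is only a first-order modulator and a crude moving-average filter, purely for illustration: both bitstreams contain only +1/-1, but the delta-sigma stream feeds the quantization error back so it is pushed to high frequencies, and a low-pass filter recovers the waveform far better than it can from 1 bit LPCM.

import numpy as np

fs = 64 * 44100                          # DSD-like oversampled rate (illustrative)
t = np.arange(4096) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)   # input signal

# 1 bit LPCM: each sample quantized independently to +1 or -1.
lpcm = np.where(x >= 0, 1.0, -1.0)

# 1 bit first-order delta-sigma: quantize the accumulated error instead,
# so the error is noise-shaped toward high frequencies.
ds = np.empty_like(x)
acc, prev = 0.0, 0.0
for n, s in enumerate(x):
    acc += s - prev          # integrate (input minus previous output)
    prev = 1.0 if acc >= 0 else -1.0
    ds[n] = prev

# Crude low-pass (moving average) to recover the audio-band content.
lp = np.ones(128) / 128
err_ds = np.mean((np.convolve(ds, lp, "same") - x) ** 2)
err_lpcm = np.mean((np.convolve(lpcm, lp, "same") - x) ** 2)
print(err_ds < err_lpcm)     # True: the filtered delta-sigma stream tracks the input far more closely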
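On point 4, a tiny illustration of why sampling at exactly twice the signal frequency is inadequate: the samples you get depend entirely on the phase of the tone relative to the sampling clock, so amplitude and phase cannot be recovered from them.

import numpy as np

f = 1000.0               # tone frequency
fs = 2 * f               # sample rate exactly twice the tone frequency
n = np.arange(8)

print(np.sin(2 * np.pi * f * n / fs))               # phase 0: every sample is (numerically) zero
print(np.sin(2 * np.pi * f * n / fs + np.pi / 2))   # phase pi/2: samples alternate +1, -1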