Quote:
Originally Posted by scootsit
[snip]
Questioning whether a device is better or worse is meaningless without a rigorous definition of quality. Without a mathematically defined optimum, we cannot say whether a device is closer to or further from that optimum. That said, different devices perform differently with respect to different analyses. There is a fruitful discussion to be had about those specific analyses; selecting the appropriate device is ultimately personal.
[snip]
I think it would be useful to eliminate the terms "better" and "worse" from our vocabulary and replace them with "more enjoyable for me" and "less enjoyable for me."
Without an ideal and a definition of quality, there's no saying what's better or worse, as you say. This is an important point to bring up, so thanks for doing that.
In some discussions, including here, some people have been reasonably careful about establishing the definition of quality. We're taking the "wire with gain" as the ideal, so anything that behaves as y(t) = A x(t) is the optimum, where x(t) is the input, which we constrain to be band-limited to audio frequencies (and bounded), y(t) is the output of the amplifier, and A is some scaling constant that may or may not be greater than 1. (In practice there may also be some ultrasonic noise and so on.)
For amplifiers in general, exact accuracy and precision are not that important in many applications, but for many others, a "wire with gain" is exactly what you want; some applications quite arguably require even higher accuracy than audio. Mathematically, it's also the simplest intuitive definition. For audio playback, if you want the highest fidelity in reproducing whatever signal the source is sending, that's also what you want. It's not necessarily what people prefer for their playback systems, so not necessarily optimal in that sense, but it's hardly an arbitrary reference point.
With this framework, we can begin to define what "better" and "worse" mean in the context of "wire with gain" (accurate) signal reproduction. One thing we could examine is the error, y(t) - A x(t). One measure of quality could be something like the mean-squared error averaged over time, given some specific input x(t), a certain load, and a certain output level. We could then rank different amplifiers from best to worst on that score.
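Just to make that concrete, here's a rough sketch in Python with numpy. The gain value, the 1 kHz test tone, and the tanh "soft clipping" stand-in for a real amplifier are all made up for illustration, not measurements of anything:

import numpy as np

fs = 48_000                                # sample rate, Hz
t = np.arange(fs) / fs                     # one second of samples
x = 0.5 * np.sin(2 * np.pi * 1000 * t)     # a 1 kHz test input x(t)

A = 2.0                                    # the scaling constant ("gain")
y_ideal = A * x                            # the "wire with gain": y(t) = A x(t)
y_real = np.tanh(A * x)                    # made-up amp that softly compresses the peaks

error = y_real - y_ideal                   # y(t) - A x(t)
mse = np.mean(error ** 2)                  # crude quality score for this input, level, load
print(f"MSE vs. ideal: {mse:.2e}")

A lower MSE, for the same input, level, and load, would rank as "better" under this particular (admittedly crude) definition.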
However, that is a very crude measure and may not correspond well to what sounds closest to the ideal. Also, what kind of input x(t) would you use? That's why, in practice, people have mostly settled on standardized test signals x(t) for comparison purposes, chosen to stress the amps in different ways and elicit bad behavior, along with more relevant audio metrics such as noise, THD, frequency response, and so on. As both theory and listening results suggest, if an amplifier "scores" well on a wide variety of lab test inputs, it should do well with just about any allowable input, including music.
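For instance, a very rough THD estimate from a single sine test tone can be sketched like this (again Python/numpy, with the same made-up tanh amp; real analyzers are far more careful about windowing, noise floors, and averaging):

import numpy as np

fs = 48_000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)     # 1 kHz test tone
y = np.tanh(2.0 * x)                       # made-up, slightly nonlinear amp output

def thd_estimate(y, fs, f0, n_harmonics=5):
    # Very rough THD: harmonic amplitudes relative to the fundamental
    spectrum = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    k0 = int(round(f0 * len(y) / fs))      # FFT bin of the fundamental
    harmonics = [spectrum[k0 * n] for n in range(2, n_harmonics + 2)
                 if k0 * n < len(spectrum)]
    return np.sqrt(np.sum(np.square(harmonics))) / spectrum[k0]

print(f"THD ~ {100 * thd_estimate(y, fs, 1000):.3f} %")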
Some people seem very convinced that standard test data has very limited meaning. I'll grant that it's not easy to come up with a 100% psychoacoustically valid weighting function that takes in all the test results and spits out some final score corresponding to what people hear. But that's not necessarily the best approach anyway. It may be more instructive to say that one amp is better at X, Y, Z, and worse at J, K, L. If one amp scores better than another in pretty much every test, by a reasonable margin, it's probably safe to skip the details and conclude that it is "better" than the other, with respect to accurate signal reproduction.
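As a toy illustration of that kind of comparison, with completely invented numbers (lower is better for all three metrics here):

amp_1 = {"THD (%)": 0.002, "noise (uV)": 5.0,  "FR deviation (dB)": 0.05}
amp_2 = {"THD (%)": 0.010, "noise (uV)": 20.0, "FR deviation (dB)": 0.30}

better_at = [m for m in amp_1 if amp_1[m] < amp_2[m]]
worse_at  = [m for m in amp_1 if amp_1[m] > amp_2[m]]

print("Amp 1 better at:", better_at)
print("Amp 1 worse at:", worse_at)

If one amp wins on essentially every test by a reasonable margin, calling it "better" (for accurate reproduction) is a safe shorthand; if the results are mixed, the metric-by-metric breakdown is the more honest answer.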
Of course, people could also be talking about output power levels, features, ergonomics, aesthetics, price, and more, when they are being looser with the definition of "better". Some of those attributes are difficult to define; others are not.
Evaluating goodness with respect to individual preferences is obviously more difficult. However, we need not consider personal preferences when evaluating how close amps come to the ideal.