I feel like they can be helpful, especially sine-wave / square-wave graphs telling you how technically competent something might sound, or how two headphones might compare. But even then, I've found that looking at a whole frequency-response graph tells me very little; comparing the 7550 to the EX1000 on graphs alone, I would never have guessed that the 7550 would be soooo much more to my liking.
I love statistics, and I'm fine with looking at measurements, but the similarity of the EX700, EX600, EX800ST and EX1000 on paper, versus how different they sound in real life, was perhaps at the core of my dismay.
Then come the novelties: the square-wave response of the EX800ST and the Shure SE535 is pretty much identical, the Qualia 010 has the 'worst' square-wave response ever, et cetera.
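For what it's worth, a square-wave plot is mostly a frequency-response plot in disguise: a square wave is just a stack of odd harmonics, so a driver that rolls off treble reproduces it with visibly rounded corners and a slower leading edge. A minimal pure-Python sketch of that idea (the bandwidth numbers here are made up for illustration, not measurements of any headphone mentioned above):

```python
import math

def square_wave(f0, fs, n_samples, max_harmonic):
    """Band-limited square wave: sum of odd-harmonic sines up to max_harmonic."""
    out = []
    for n in range(n_samples):
        t = n / fs
        s = sum(math.sin(2 * math.pi * k * f0 * t) / k
                for k in range(1, max_harmonic + 1, 2))
        out.append(4 / math.pi * s)
    return out

fs, f0 = 96_000, 440

# Hypothetical "wide" driver: passes odd harmonics of 440 Hz up to the 21st
# (~9.2 kHz).  Hypothetical "narrow" driver: rolls off above ~2.5 kHz, so
# only harmonics 1, 3 and 5 survive.  Purely illustrative numbers.
wide = square_wave(f0, fs, 960, 21)
narrow = square_wave(f0, fs, 960, 5)

# The narrow version climbs more slowly at the leading edge and has a
# rounded top -- exactly the features people read off square-wave plots.
print(wide[1] > narrow[1])
```

So two headphones with near-identical treble extension can produce near-identical square-wave traces while still sounding quite different, which is part of why these plots underdetermine the listening experience.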
If looking at measurements blind tells me I should expect a Skullcandy Full Metal Jacket, and I'm presented with the sound of a Qualia 010, then something is very seriously missing in their, you know... crystallization of perceptible parameters? Yet very few people concede this point at all.
Another annoying feature is the common, vast oversimplification of what the data 'should' look like; it's as if there were a magical index mapping measurements to performance that no one has actually seen.
For example, as you pointed out earlier with break-up frequencies and distortion: electrostatics inherently have less of both, is that correct? Then they should have higher faithfulness to the actual recording venue, in which case we (and the manufacturers) may as well cease and desist with the inherently flawed dynamic driver.
Statistically, i.e. from a consumer-history and product-success perspective, there's a huge, unaccounted-for deviation there.
(i.e. relative to that unicorn book, the measurements-versus-performance index, which all the quantitative data specialists, EEs and objectivists refer to).
Edited by kiteki - 12/10/12 at 6:00pm