I've auditioned these. I loved them. Accurate and powerful, but they would have overpowered my room.
I don't understand your problem...
That wasn't my point. The point I was making to Prot was that un-amplified music doesn't have the added "glare" introduced by "hot"-sounding equipment, such as the ODAC or O2.
You can't hear unamplified music. It's all in your head, bro.
That's like saying dual overhead camshafts are better than pushrods and rocker arms. It really doesn't matter by the time of final implementation.
I mean, compare the silicon die space on the ES9018S chip vs. the die space on an FPGA. You can't run very complex filters on the ES9018 (because of limited die space and power/heat limitations), while you can do a lot more complex processing on an FPGA. This is of course not taking into account how good the filter design is.
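To put rough numbers on the die-space point: the dominant cost of a digital FIR filter is multiply-accumulate (MAC) operations per second, which scales directly with tap count. A minimal Python sketch; the tap counts below are purely illustrative assumptions, not actual ES9018 or FPGA specs:

```python
# Rough cost comparison of oversampling FIR filters. For a direct-form FIR,
# cost = taps * output sample rate, in multiply-accumulates per second.
# Tap counts are illustrative assumptions, not vendor specifications.

def mac_rate(taps, sample_rate_hz):
    """MAC operations per second for a direct-form FIR filter."""
    return taps * sample_rate_hz

fs = 352_800  # 8x oversampled 44.1 kHz output rate

small_fixed_filter = mac_rate(128, fs)     # short filter, fits a small ASIC block
long_fpga_filter   = mac_rate(16_384, fs)  # long filter, needs FPGA resources

print(f"128-tap filter:   {small_fixed_filter / 1e6:8.1f} MMAC/s")
print(f"16384-tap filter: {long_fpga_filter / 1e9:8.2f} GMAC/s")
```

The point of the sketch is just the scaling: a filter two orders of magnitude longer costs two orders of magnitude more silicon and power, which is why very long filters tend to live on FPGAs.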
Even the 16-bit 96 kHz CS4328 DAC that I built and developed back in 1991 can still outperform MANY of today's DACs.
Please don't forget to listen, instead of just comparing numbers...
The CS4328 is 18-bit, not 16-bit.
After measuring our DAC (which I designed and built with a friend of mine), it had a resolution of 16.4 bits. So the chip can be 18 bits, but the final resolution is always lower.
Listen to the music.
I get that, but I've seen them described in many ways, so it appears to be all relative. Not everyone hears this glare. So my point is that to get to the bottom of this, the whole chain needs to be evaluated.
I've seen folks listening to the HD800, DT880 and the like with that stuff and then be surprised (or not) that it sounds "hot", when those headphones measure with treble peaks and do sound like it. And I'm not knocking them for that; you and I have the HE-560, which some complain about mostly for its lower treble peak. Headphones and recordings are known to vary more wildly in practice (audibly and measurably) than electronics. Can the electronics really be at fault if signs point towards the headphone or the recording? Can it realistically just be the amp and DAC's effect on the signal that's most audible after it's converted to sound by the headphone?
There are at least a few people in this world who are "pitch perfect"; these souls can tune a piano by ear, and would almost certainly notice if a CD (or a record) were playing 0.1% fast. However, I'm not one of them, and I'm pretty sure I wouldn't notice such a speed error at all. However, it's still true that a record or CD should play at the correct speed, and that failing to do so is an error. Therefore, I can't make claims like "it doesn't matter" or "it's inaudible". (I can reasonably say that I can't hear it, or that xx% of the population can't hear it, or, if I was in the marketing department of the product being discussed, I could even say "not enough of our customers can hear it that it's worth us fixing it", but none of that even suggests that it doesn't exist, or that it is "totally inaudible".)
And we can leave the argument about whether I'm lucky - because I could buy a cheap piano and never notice the difference; or whether I'm deprived - because I'll never be able to experience the true joy of a perfectly tuned piano; for philosophy debates.
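For anyone curious how big 0.1% actually is in musical terms: pitch intervals are measured in cents (100 cents to an equal-tempered semitone), and a 0.1% speed error works out to under 2 cents sharp. A quick Python check:

```python
import math

# Convert a frequency (or playback-speed) ratio into cents:
# 1200 cents per octave, 100 cents per equal-tempered semitone.

def cents(freq_ratio):
    """Size of a pitch interval in cents for a given frequency ratio."""
    return 1200 * math.log2(freq_ratio)

error = cents(1.001)  # everything plays 0.1% fast
print(f"0.1% fast = {error:.2f} cents sharp")  # ~1.73 cents
```

That's a small fraction of a semitone, which is consistent with the point above: some trained ears might catch it, most of us never will, but the error is still real and quantifiable.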
There would actually be two possible ways:
1) As you suggested, it's theoretically possible that the DAC could "know about" and "compensate for" specific errors that were caused by other components when the audio signal was encoded. While this sounds like a great idea, it usually falls down in practice - first, because some types of errors simply cannot be corrected perfectly, and second, because being able to do so relies on knowing a lot more about the original signal chain than we usually do.
2) It's perfectly reasonable to claim that simply avoiding causing any additional errors contributes to creating a more perfect reproduction of the original.
There's a sort of "option 1a" that entails making good guesses about problems, and then making alterations based on the assumption that they are present, and hoping that the end result is closer to the original than what you started with. A perfect example of this is the software used to "recover missing detail" from pictures. If you have a picture, taken with a telescope, which shows a bunch of blurry little white blobs, and some short parallel white lines, since you know that what you expected to see were a bunch of tiny white points, you can assume that the blurry dots were supposed to be stars but they are a bit out of focus, and that the short white lines were created when the telescope failed to remain still and so smeared similar dots in a single direction. You can then calculate a mathematical correction that will get you remarkably close to what was there to begin with. However, this all relies on the assumption that you're looking at a picture of stars.
You can use that same or similar software to "sharpen" a picture of something else, such as a human face, or a license plate number. You can even base some of your assumptions on the way in which pictures tend to get blurred when a camera isn't perfectly focused. This will give you a "pretty good guess" that sometimes produces remarkably good results. However, it also sometimes produces bad results, because your assumptions aren't always true. (Modern software can even be written such that, assuming you are hoping to make a license plate number readable, the software can "detect how well it worked", and even adjust its operating parameters accordingly. This would allow it to try different settings, and finally use the one that produced a result that was closer to what it expected or "hoped for". However, in reality, it's still a guess.) To take the extreme example, if I was the photographer, and I DELIBERATELY shifted the picture out of focus, then your assumption that it should be sharp is wrong, and, even if you could do so perfectly, making it sharp will "destroy" it.
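Here's a toy 1-D version of that "assume how it was blurred, then invert the blur" idea, sketched in Python with NumPy. The box-blur kernel and the regularization constant are arbitrary illustrative choices; the key assumption, exactly as described above, is that we know (or correctly guess) the blur model:

```python
import numpy as np

# Toy deconvolution: blur a sharp signal with a KNOWN kernel, then recover it
# by regularized FFT division (a bare-bones Wiener-style inverse filter).
# If the assumed kernel is wrong, the "recovered" result is wrong too --
# which is exactly the caveat in the discussion above.

n = 64
signal = np.zeros(n)
signal[[20, 40]] = 1.0          # two sharp "stars"

kernel = np.zeros(n)
kernel[:5] = 1.0 / 5.0          # 5-sample box blur (the assumed blur model)

# Circular convolution via FFT: this is our "out of focus" observation.
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))

# Inverse filter with a small regularization term so we never divide by ~zero.
K = np.fft.fft(kernel)
eps = 1e-6
recovered = np.real(np.fft.ifft(
    np.fft.fft(blurred) * np.conj(K) / (np.abs(K) ** 2 + eps)))

print("worst-case error after deconvolution:",
      np.max(np.abs(recovered - signal)))
```

With the correct kernel, the spikes come back almost perfectly; swap in a different kernel and the reconstruction confidently produces the wrong answer, which is the security-camera problem in miniature.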
Personally, I would leave anything that deliberately alters the signal in the "mastering process". (If I was remastering a CD, and I happened to know that it was converted with a specific brand of A/D converter, and also had a way to correct the specific errors introduced by that encoder, then I would do so... although, even then, if my correction process generates other new errors, I have to decide whether my new version is really "better" or not.)
(This question comes up frequently in legal cases. If I start with a fuzzy blob that's supposed to be the bad guy's face on a security video, with enough signal processing I can probably "sharpen" it to the point where it looks like a human face. However, can I trust it to look like the right face? Or did my software do such a great job that it essentially created a face from insufficient information, in which case who it happens to look like is almost purely random? Or, even worse, does it offer so many options that, if I keep trying different settings, I can produce a result that looks like whomever I want it to - at which point I cheerfully declare "that's the guy" and stop trying new options?)
I'm afraid I would have to take what you said even further. How we experience actual experiences is modified by our expectation bias. In other words, for most people, what they hear when they actually get those speakers home will depend, at least in part, on what they expect to hear - which will, in turn, depend partly on what they read, and how much this actually occurs will depend on their individual personality, and where they heard that information. (Just like we have a certain tendency to believe it when "the ordinary guy" on the TV commercial tells us that a certain headache tablet "really works", and are influenced to a different degree when "a real doctor" appears on the screen. In fact, as scary as it is, tests have shown conclusively that we're more likely to believe the guy who is dressed like a doctor - even if we know him as an actor who plays a doctor on a TV show. However, it goes even further, because not only are we more likely to believe what he says when he's pushing that product on TV, but we are more likely to imagine that it actually works better when we try it - because how we perceive it depends on our expectations.)
In audio terms this means that, if the manufacturer provides a "nice scientific sounding explanation" for why their product should sound better, we are more likely to buy it because we trust them and "their explanation makes sense", and we are more likely to actually find that we like it better after we buy it because we expect it to work well. (There's also another rationalization mechanism that is pretty well known in humans - that we hate to be wrong. This means that, statistically, you are much likelier to find that you like the way an expensive piece of gear sounds after paying for it, because the alternative is to admit to yourself and your friends that you made a mistake.) Now, none of this will convince most people to buy a really poor product, or to keep one once they listen to it and find that it's clearly worse, but it most definitely will bias you towards hearing a difference where none is present, or to exaggerate the importance of a real but insignificant difference.
Of course, expectation bias works both ways - and will bias you towards not hearing a difference if you start out being convinced that none exists. (However, with very few exceptions, most people aren't going to order a product which they actually expect to be no better than the one they currently have, which is why many manufacturers - even those who make silly snake oil products - are comfortable offering "money back guarantees" and "return periods". Once you are "convinced enough to try it", you already have a significant expectation that it will be better - otherwise you will have wasted the effort involved in ordering and testing it. Statistically, very few people will bother to order a new product just to confirm for themselves that it's no better than the one they have.)
As for that PS... a "perfectly linear-to-20kHz" component is going to sound "flat and linear" - terms like "harsh" are simply an interpretation of that result.
(I often hear the term "analytical" used to describe components in a negative way - when it really means "accurate" - which I seem to recall being the original definition of "high fidelity".)
If your recording sounds "harsh" when it is reproduced accurately, then perhaps the recording simply really does sound harsh.
(And, perhaps, some components that don't sound harsh with that recording are ALTERING IT by failing to reproduce whatever makes it sound harsh.)
I can think of a few reasons why a recording might sound harsh:
1) Some early A/D converters had poor quality band-limiting filters which produced non-flat frequency response, phase aberrations, and possibly even distortion - especially at high frequencies. So perhaps some early digital recordings simply sound bad for "technical reasons".
2) Most modern multi-track recordings don't replicate the experience of actually being there live at all anyway. Specifically, high frequencies are attenuated by travel through air, which means that what a cymbal sounds like, even in the front row, is a lot different than what it sounds like one foot directly over the top of the drum set - which is probably where the microphone used to record it was placed. It's only reasonable to expect what the microphone records to sound like what you heard when it was located at the same general position as your ears. (If you've ever made actual live recordings, then you also know that, even if you put the microphone six inches in front of your nose, and don't do any processing at all, it's still difficult to get your recording to sound even close to what you heard. If the microphone is two feet in front of you, it's even more difficult. And a lot of the effort expended by mastering engineers is in the direction of "getting it back to where it belongs". Therefore, it is foolish to assume that the recording sounds exactly like the performance itself - even not counting your playback equipment.)
3) Cymbals are really loud - especially close up. This means that they tend to overload microphones, which may cause them to sound odd. It also means that they often have to be compressed, limited, and otherwise processed during mastering - which are all good reasons why they might sound "odd". Ditto for drums, which produce VERY powerful transients, which are, again, likely to overload a microphone or preamp, and are also likely to be pretty aggressively compressed and limited in the mix.
4) Finally, a lot of how we think about and describe things in general is based on previous experience. Perhaps what some people are describing as "harsh" is simply "correct", but most of us are so used to "rolled off and cleaned up" that we simply aren't used to accurate - and so it seems "harsh". (Cymbals can sound mellow when someone is tapping them with a wire brush, but the last time I heard someone actually whack a cymbal, in a small club, without a rag or a damper of some sort on it, it was really loud, and pretty darned harsh.)
OK, understood; you are correct about the numerous variables present and the very subjective ways we all interpret them. I have never had a treble problem with the 560; others find it very annoying.
My problem with the O2/ODAC is also subjective. Tonality is critical to me, and they just didn't sound natural with multiple headphones. Others love the Audeze "House Sound"; I don't. While I love the 560 and HE6, many of those who like Audeze don't care for them.
What I try to do on Head Fi and in reading the Audio Literature is find those with similar tastes, biases, and objectives, then learn vicariously through their experiences.