Thoughts on a bunch of DACs (and why delta-sigma kinda sucks, just to get you to think about stuff)
Jun 23, 2015 at 11:07 PM Post #6,001 of 6,500
  Seriously, check out Legacy Audio. They have some serious $$$$ speakers, which I heard once in a music room at a high-end stereo shop. 
 
http://legacyaudio.com/products/dsp-solutions/


I've auditioned these. I loved them. Accurate and powerful, but they would have overpowered my room.
 
Jun 24, 2015 at 10:53 AM Post #6,002 of 6,500
Quote: artur9
 Accurate and powerful, but they would have overpowered my room.

I don't understand your problem... :D

 
Jun 24, 2015 at 12:16 PM Post #6,003 of 6,500
If the recording is at fault or the headphone is harsh (which is demonstrably more plausible), why should the DAC and amp roll off that harshness? I mean, they could, but does that make them superior because they roll off everything, and should every electronic component strive for this? Where the harshness is coming from should be evaluated before concluding that the DAC or amp is at fault. And then you're in the realm of perception with all of these terms, so it's all relative.


That wasn't my point. The point I was making to Prot was that un-amplified music doesn't have the added "glare" introduced by "hot"-sounding equipment, such as the ODAC or O2.
 
Jun 24, 2015 at 5:58 PM Post #6,005 of 6,500
   
The advantage of the Hugo over other sigma-delta DACs is that it has a dedicated FPGA with more processing horsepower, which means more complex digital filters can be implemented.

 
That's like saying dual overhead camshafts are better than pushrods and rocker arms. It really doesn't matter by the time of final implementation.
 
Jun 24, 2015 at 6:21 PM Post #6,006 of 6,500
 
   
The advantage of the Hugo over other sigma-delta DACs is that it has a dedicated FPGA with more processing horsepower, which means more complex digital filters can be implemented.

 
That's like saying dual overhead camshafts are better than pushrods and rocker arms. It really doesn't matter by the time of final implementation.

 
I mean, compare the silicon die space on the ES9018S chip with the die space on an FPGA. You cannot run very complex filters on the ES9018 (because of limited die area and power/heat limitations), while you can do a lot more complex processing on an FPGA. This is, of course, not taking into account how good the filter design is.
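For a rough sense of scale (this is not Chord's WTA algorithm or anyone's real tap counts, just arithmetic on assumed numbers): each FIR tap costs one multiply-accumulate per output sample, so a longer filter scales the processing load directly.

```python
# Back-of-the-envelope cost of an FIR reconstruction filter: one multiply-
# accumulate (MAC) per tap per output sample. The tap counts and the 16x
# oversampled output rate are illustrative assumptions, not any product's spec.
from scipy.signal import firwin

fs_out = 44100 * 16          # assumed oversampled output rate (Hz)
cutoff_hz = 20000            # passband edge (Hz)

for taps in (128, 26000):    # "typical DAC chip" scale vs. "big FPGA" scale (assumed)
    h = firwin(taps, cutoff_hz, fs=fs_out)   # windowed-sinc lowpass prototype
    gmacs = taps * fs_out / 1e9              # billions of MACs per second
    print(f"{taps:>6} taps -> {gmacs:5.1f} GMAC/s")
```

The arithmetic alone shows why a very long filter is impractical inside a small, heat-limited DAC chip but feasible on an FPGA; whether the longer filter is audibly better is a separate question.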
 
Jun 24, 2015 at 7:24 PM Post #6,007 of 6,500
Even my 16-bit/96 kHz CS4328-based DAC, which I built and developed back in 1991, can still outperform MANY of today's DACs.

Please don't forget to listen instead of just comparing numbers...

Cheers,

Alex
 
Jun 24, 2015 at 7:55 PM Post #6,009 of 6,500
After measuring our DAC (which I designed and built with a friend of mine), it had a resolution of 16.4 bits. So the chip can be 18 bits, but the final resolution is always lower.
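For anyone wondering where a fractional figure like 16.4 bits comes from: it is an effective number of bits (ENOB) derived from a measured signal-to-noise-and-distortion ratio via the standard relationship ENOB = (SINAD − 1.76) / 6.02. The SINAD value below is only an assumed example chosen to land near 16.4 bits; it is not the poster's actual measurement.

```python
# Minimal sketch: converting a measured SINAD (in dB) into effective bits.
def enob(sinad_db: float) -> float:
    """ENOB = (SINAD - 1.76 dB) / 6.02 dB per bit."""
    return (sinad_db - 1.76) / 6.02

# Assumed example value, illustrating how an "18-bit" chip can end up around
# 16.4 effective bits once the whole analogue path is measured.
print(f"{enob(100.5):.1f} effective bits")   # -> 16.4
```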



Listen to the music.
 
Jun 25, 2015 at 12:11 AM Post #6,010 of 6,500
That wasn't my point. The point I was making to Prot was that un-amplified music doesn't have the added "glare" introduced by "hot"-sounding equipment, such as the ODAC or O2.

 
I get that, but I've seen them described in many ways, so it appears to be all relative. Not everyone hears this glare. So my point is that to get to the bottom of this, the whole chain needs to be evaluated. 
 
I've seen folks listening to the HD800, DT880, and the like with that gear and then be surprised (or not) that it sounds "hot", when those headphones measure with treble peaks and do sound like it. And I'm not knocking them for that; you and I have the HE-560, which some complain about mostly for its lower-treble peak. Headphones and recordings are known to vary far more wildly in practice (audibly and measurably) than electronics. Can the electronics really be at fault if the signs point towards the headphone or the recording? Can the amp and DAC's effect on the signal realistically be the most audible thing once it's converted to sound by the headphone?
 
Jun 25, 2015 at 9:39 AM Post #6,011 of 6,500
If you have $10k invested and can't hear the difference between gear, that's a personal problem. I can't imagine what would satisfy you in this argument other than coming to our houses and watching us pass an ABX test. Why don't you stop worrying so much, sell your gear, and maybe take some cooking classes or something.

 
There are at least a few people in this world who have "perfect pitch"; these souls can tune a piano by ear, and would almost certainly notice if a CD (or a record) were playing 0.1% fast. I'm not one of them, and I'm pretty sure I wouldn't notice such a speed error at all. However, it's still true that a record or CD should play at the correct speed, and that failing to do so is an error. Therefore, I can't make claims like "it doesn't matter" or "it's inaudible". (I can reasonably say that I can't hear it, or that xx% of the population can't hear it, or, if I were in the marketing department of the product being discussed, I could even say "not enough of our customers can hear it for it to be worth fixing", but none of that even suggests that it doesn't exist, or that it is "totally inaudible".)
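To put a number on that 0.1% example (my arithmetic only, not a claim about anyone's hearing): pitch shift in cents is 1200 × log2(speed ratio), so a 0.1% speed error works out to just under two cents.

```python
# How large is a 0.1% playback-speed error in musical terms?
# Pitch shift in cents = 1200 * log2(speed ratio); 100 cents = one semitone.
import math

speed_ratio = 1.001                        # playing 0.1% fast
cents = 1200 * math.log2(speed_ratio)
print(f"{cents:.2f} cents")                # ~1.73 cents: real and measurable, but small
```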
 
And we can leave for philosophy debates the argument about whether I'm lucky (because I could buy a cheap piano and never notice the difference) or deprived (because I'll never be able to experience the true joy of a perfectly tuned piano).
 
Jun 25, 2015 at 10:04 AM Post #6,012 of 6,500
   
Looks like Rob Watts is claiming that his WTA filter is better than filters that preserve the original data (aka Schiit's closed-form filter)?
 
And this:
 
How does a DAC know what else to reproduce other than the data it is given (i.e. garbage in, garbage out)? Unless Rob Watts has some kind of method/maths that compensates for the analogue-to-digital converter's signal loss? This sounds like an MQA type of solution.
 
http://www.audiostream.com/content/mqa-ltd
 

 
There would actually be two possible ways:
 
1) As you suggested, it's theoretically possible that the DAC could "know about" and "compensate for" specific errors that were caused by other components when the audio signal was encoded. While this sounds like a great idea, it usually falls down in practice - first, because some types of errors simply cannot be corrected perfectly, and second, because being able to do so relies on knowing a lot more about the original signal chain than we usually do. 
 
2) It's perfectly reasonable to claim that simply not adding any further errors of its own contributes to creating a more faithful reproduction of the original.
 
There's a sort of "option 1a" that entails making good guesses about problems, and then making alterations based on the assumption that they are present, and hoping that the end result is closer to the original than what you started with. A perfect example of this is the software used to "recover missing detail" from pictures. If you have a picture, taken with a telescope, which shows a bunch of blurry little white blobs, and some short parallel white lines, since you know that what you expected to see were a bunch of tiny white points, you can assume that the blurry dots were supposed to be stars but they are a bit out of focus, and that the short white lines were created when the telescope failed to remain still and so smeared similar dots in a single direction. You can then calculate a mathematical correction that will get you remarkably close to what was there to begin with. However, this all relies on the assumption that you're looking at a picture of stars.
 
You can use that same or similar software to "sharpen" a picture of something else, such as a human face, or a license plate number. You can even base some of your assumptions on the way in which pictures tend to get blurred when a camera isn't perfectly focused. This will give you a "pretty good guess" that sometimes produces remarkably good results. However, it also sometimes produces bad results, because your assumptions aren't always true. (Modern software can even be written such that, assuming you are hoping to make a license plate number readable, the software can "detect how well it worked", and even adjust its operating parameters accordingly. This would allow it to try different settings, and finally use the one that produced a result that was closer to what it expected or "hoped for". However, in reality, it's still a guess.) To take the extreme example, if I was the photographer, and I DELIBERATELY shifted the picture out of focus, then your assumption that it should be sharp is wrong, and, even if you could do so perfectly, making it sharp will "destroy" it.
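As a concrete (and deliberately toy-sized) illustration of that "good guess" approach, here is a sketch of Wiener deconvolution: it assumes the blur is one specific, known point-spread function and inverts it in the frequency domain. The PSF, noise level, and star-field test image are all assumptions made for the example; if the real blur were different, the "restored" picture would just be a confident-looking guess, which is exactly the point made above.

```python
# Toy "option 1a": assume a specific blur (a small Gaussian point-spread
# function, PSF) and undo it with Wiener deconvolution. Every parameter here
# is an assumption; a wrong PSF gives a confident but wrong "restoration".
import numpy as np

def wiener_deconvolve(blurred, psf, noise_to_signal=1e-2):
    """Frequency-domain Wiener filter: X_hat = conj(H) / (|H|^2 + NSR) * Y."""
    H = np.fft.fft2(psf, s=blurred.shape)
    Y = np.fft.fft2(blurred)
    X_hat = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal) * Y
    return np.real(np.fft.ifft2(X_hat))

# Fake "star field": a few bright points, blurred by the assumed Gaussian PSF.
rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[rng.integers(0, 64, 20), rng.integers(0, 64, 20)] = 1.0
gx, gy = np.meshgrid(np.arange(7) - 3, np.arange(7) - 3)
psf = np.exp(-(gx ** 2 + gy ** 2) / 2.0)
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(psf, s=truth.shape)))

restored = wiener_deconvolve(blurred, psf)
print("max |blurred - truth| :", round(float(np.max(np.abs(blurred - truth))), 3))
print("max |restored - truth|:", round(float(np.max(np.abs(restored - truth))), 3))
```

The restoration only improves things here because the code was handed the exact PSF the image was blurred with; feed it a wrong assumption and it will still produce a sharp-looking answer.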
 
Personally, I would leave anything that deliberately alters the signal in the "mastering process". (If I were remastering a CD, and I happened to know that it was converted with a specific brand of A/D converter, and also had a way to correct the specific errors introduced by that encoder, then I would do so... although, even then, if my correction process generates other new errors, I have to decide whether my new version is really "better" or not.)
 
(This question comes up frequently in legal cases. If I start with a fuzzy blob that's supposed to be the bad guy's face on a security video, with enough signal processing I can probably "sharpen" it to the point where it looks like a human face. However, can I trust it to look like the right face? Or did my software do such a great job that it essentially created a face from insufficient information, in which case who it happens to look like is almost purely random? Or, even worse, does it offer so many options that, if I keep trying different settings, I can produce a result that looks like whomever I want it to - at which point I cheerfully declare "that's the guy" and stop trying new options?)
 
Jun 25, 2015 at 10:24 AM Post #6,013 of 6,500
   
Different situation there, I think. They have chosen to take a chance. What they think and feel about the product (or any other product, for that matter) isn't going to change because of that once the speakers arrive in their home. What I'm talking about, while not entirely unrelated, is the manipulation of beliefs through the presentation of information, which fools people into thinking they know everything there is to know about a subject when clearly they do not. For example, the idea that all "competently made" DACs sound the same because they all measure flat from 20 Hz to 20 kHz, and other gross over-generalisations, or that the THD+N @ 1 kHz figure written on the box has any meaning, which goes back to those computer sound cards whose factory measurements far exceed how they perform in most computers.
 
Edit: My wording is confusing. What I meant to say in the first sentence is that how a person feels about a product after they've actually listened to it won't change because of the marketing or hype. I.e., if they like it or hate it, the marketing blurb won't change that.

 
I'm afraid I would have to take what you said even further. How we perceive our actual experiences is modified by expectation bias. In other words, for most people, what they hear when they actually get those speakers home will depend, at least in part, on what they expect to hear, which will, in turn, depend partly on what they read, and how much this actually occurs will depend on their individual personality and where they heard that information. (Just as we have a certain tendency to believe it when "the ordinary guy" on the TV commercial tells us that a certain headache tablet "really works", and are influenced to a different degree when "a real doctor" appears on the screen. In fact, as scary as it is, tests have shown conclusively that we're more likely to believe the guy who is dressed like a doctor - even if we know him as an actor who plays a doctor on a TV show. However, it goes even further, because not only are we more likely to believe what he says when he's pushing that product on TV, but we are more likely to imagine that it actually works better when we try it - because how we perceive it depends on our expectations.)
 
In audio terms this means that, if the manufacturer provides a "nice scientific sounding explanation" for why their product should sound better, we are more likely to buy it because we trust them  and "their explanation makes sense", and we are more likely to actually find that we like it better after we buy it because we expect it to work well. (There's also another rationalization mechanism that is pretty well known in humans - that we hate to be wrong. This means that, statistically, you are much likelier to find that you like the way an expensive piece of gear sounds after paying for it, because the alternative is to admit to yourself and your friends that you made a mistake.) Now, none of this will convince most people to buy a really poor product, or to keep one once they listen to it and find that it's clearly worse, but it most definitely will bias you towards hearing a difference where none is present, or to exaggerate the importance of a real but insignificant difference. 
 
Of course, expectation bias works both ways - and will bias you towards not hearing a difference if you start out being convinced that none exists. (However, with very few exceptions, most people aren't going to order a product which they actually expect to be no better than the one they currently have, which is why many manufacturers - even those who make silly snake oil products - are comfortable offering "money back guarantees" and "return periods". Once you are "convinced enough to try it", you already have a significant expectation that it will be better - otherwise you will have wasted the effort involved in ordering and testing it. Statistically, very few people will bother to order a new product just to confirm for themselves that it's no better than the one they have.)
 
Jun 25, 2015 at 10:46 AM Post #6,014 of 6,500
Most likely, all PCs/laptops older than one or two years sound quite bad (and I guess that's what >90% of people use or have heard). Also agreed about the Creative cards.

But that "extremely clear and detailed" chip you are mentioning is pretty much what I expect from a DAC. I want a DAC to deliver 100% neutral sound and maybe also a clear, 3D soundstage (although the soundstage is not exactly/entirely a DAC's responsibility). I want the exact same (neutrality) from my source and amp.
If I want to add that musical/enjoyable 'thingie' I'll try with speakers/HPs or maybe a tube preamp. Preamps & transducers are the components who add the most 'color' to the sound anyway and have the worse THN/etc measurements in any stereo chain ... all other components could be almost 100% transparent nowadays, no need to have any coloration from them.



P.S.
I am wondering if that so-called "treble harshness" of the sigma-delta chips isn't just simple neutrality. I remember reading somewhere that a perfectly linear-to-20kHz component would sound quite harsh.
The very neutral/linear O2 fits that theory; its highs aren't the most 'musical'. Also, the most linear speaker I ever heard (and owned) did sometimes sound a bit too strong in the upper treble area... e.g. the noisy-clocks intro of Pink Floyd's Time was not particularly enjoyable.

@judmarc
Guess it's clearer now... but in audioland people don't even agree 100% with mathematically proven theorems like Shannon's, so I'll still be graying around those taps for a while. :)

 
As for that P.S.: a "perfectly linear-to-20kHz" component is going to sound "flat and linear"; terms like "harsh" are simply an interpretation of that result.
(I often hear the term "analytical" used to describe components in a negative way - when it really means "accurate" - which I seem to recall being the original definition of "high fidelity".)
 
If your recording sounds "harsh" when it is reproduced accurately, then perhaps the recording simply really does sound harsh.
(And, perhaps, some components that don't sound harsh with that recording are ALTERING IT by failing to reproduce whatever makes it sound harsh.)
 
I can think of a few reasons why a recording might sound harsh:
 
1) Some early A/D converters had poor-quality band-limiting filters which produced non-flat frequency response, phase aberrations, and possibly even distortion - especially at high frequencies. So perhaps some early digital recordings simply sound bad for "technical reasons".
 
2) Most modern multi-track recordings don't replicate the experience of actually being there live at all anyway. Specifically, high frequencies are attenuated by travel through air (there's a rough calculation after this list), which means that what a cymbal sounds like, even in the front row, is a lot different from what it sounds like one foot directly over the top of the drum set - which is probably where the microphone used to record it was placed. It's only reasonable to expect what the microphone records to sound like what you heard if the microphone was located at the same general position as your ears. (If you've ever made actual live recordings, then you also know that, even if you put the microphone six inches in front of your nose, and don't do any processing at all, it's still difficult to get your recording to sound even close to what you heard. If the microphone is two feet in front of you, it's even more difficult. And a lot of the effort expended by mastering engineers is in the direction of "getting it back to where it belongs". Therefore, it is foolish to assume that the recording sounds exactly like the performance itself - even not counting your playback equipment.)
 
3) Cymbals are really loud - especially close up. This means that they tend to overload microphones, which may cause them to sound odd. It also means that they often have to be compressed, limited, and otherwise processed during mastering - which are all good reasons why they might sound "odd". Ditto for drums, which produce VERY powerful transients, which are, again, likely to overload a microphone or preamp, and are also likely to be pretty aggressively compressed and limited in the mix.
 
4) Finally, a lot of how we think about and describe things in general is based on previous experience. Perhaps what some people are describing as "harsh" is simply "correct", but most of us are so used to "rolled off and cleaned up" that we simply aren't used to accurate - and so it seems "harsh". (Cymbals can sound mellow when someone is tapping them with a wire brush, but the last time I heard someone actually whack a cymbal, in a small club, without a rag or a damper of some sort on it, it was really loud, and pretty darned harsh.)
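Here is the rough calculation promised above, showing how air absorption alone changes the treble balance between a close-up microphone and a listener in the audience. The absorption coefficients are loose, assumed values for ordinary room conditions (the real numbers depend on temperature and humidity; ISO 9613-1 tabulates them), and geometric spreading is ignored because it attenuates all frequencies equally, so treat the output as an order-of-magnitude illustration only.

```python
# Extra high-frequency loss from air absorption between a close mic and a
# distant listener. Coefficients (dB per metre) are assumed round numbers for
# typical room conditions, not measured values.
approx_absorption_db_per_m = {2000: 0.01, 8000: 0.1, 16000: 0.25}  # assumed

mic_distance_m = 0.3        # overhead mic, roughly a foot above the cymbal
listener_distance_m = 15.0  # somewhere out in the audience

for freq_hz, alpha in approx_absorption_db_per_m.items():
    extra_loss_db = alpha * (listener_distance_m - mic_distance_m)
    print(f"{freq_hz/1000:>4.0f} kHz: ~{extra_loss_db:.1f} dB more air loss at the listener than at the mic")
```

Even a few dB of extra top-end loss at the listening position is enough that a close-miked cymbal, reproduced accurately, will sound brighter than the same cymbal heard from the audience.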
 
Jun 25, 2015 at 11:07 AM Post #6,015 of 6,500
   
I get that, but I've seen them described in many ways, so it appears to be all relative. Not everyone hears this glare. So my point is that to get to the bottom of this, the whole chain needs to be evaluated. 
 
I've seen folks listening to the HD800, DT880, and the like with that gear and then be surprised (or not) that it sounds "hot", when those headphones measure with treble peaks and do sound like it. And I'm not knocking them for that; you and I have the HE-560, which some complain about mostly for its lower-treble peak. Headphones and recordings are known to vary far more wildly in practice (audibly and measurably) than electronics. Can the electronics really be at fault if the signs point towards the headphone or the recording? Can the amp and DAC's effect on the signal realistically be the most audible thing once it's converted to sound by the headphone?


OK, understood; you are correct about the numerous variables present and the very subjective ways we all interpret them. I have never had a treble problem with the 560; others find it very annoying. 
 
My problem with the O2/ODAC is also subjective: tonality is critical to me, and they just didn't sound natural with multiple headphones. Others love the Audeze "house sound"; I don't. While I love the 560 and HE-6, many of those who like Audeze don't care for them. 
 
What I try to do on Head-Fi and in reading the audio literature is find those with similar tastes, biases, and objectives, and then learn vicariously through their experiences.
 
