Woah! Tyll will be the first professional reviewer to take on the Schiit Yggdrasil! http://www.innerfidelity.com/content/big-sound-2015-and-so-it-begins

Most definitely. OT but it will be interesting to hear how Eddie Current BW holds its own amongst such company too...

Agreed that high-end DACs are more about individual taste, but the review of the Hugo is surprising: in the last year this is the only negative review of the Hugo I have read, at least for sound quality, and especially for the treble, which is what it is most praised for among the high-end DACs. There aren't many Yggdrasil vs. Hugo comparisons on the net, but I have read somewhere that the Yggdrasil has bright treble that won't suit every recording. The Hugo, on the other hand, differentiates between poor and good recordings, but in a more transparent way - it adds nothing to either the treble or the bass, just a refined, clean flow of music! No wonder one reviewer on Head-Fi preferred the Hugo with the LCD-3 over the Yggdrasil with the LCD-3, using the same headphone amp. So my suggestion: tread with caution around over-enthusiastic reviews of the Yggdrasil, and audition both, especially with more dynamic classical recordings. Also, the 2Qute is the same as the Hugo and priced well below the Yggdrasil.

I'm sorry if I missed this, but which negative review of the Hugo did you read? I would very much like to see that. Thanks.

The Fourier Transform is just another tool for looking at the data - it shows our eyeballs, with their visual pattern recognition, some things that are harder to see in the time-domain data. But it seems popular for some to mischaracterize it. A common naive criticism is that the Fourier data is "averaged" - like the carpet smoothing out the "bumps" - implying something is "lost". The conventional understanding of averaging does involve information loss, and therefore isn't a useful way to think about the Fourier-transformed data. Because the full complex Fourier data is a mathematical dual of the time-domain data, you can convert the full complex FFT phase and magnitude data points back and forth to the time series with accuracy limited only by your computer's arithmetic word length.

A good illustration of the dual property is that a single-point Dirac impulse in the time domain gives a result in every FFT bin, with constant magnitude and a smooth phase variation that encodes when the impulse occurred in the time domain. The dual situation - an impulse/single point in the FFT plot - represents a smooth, continuous sine wave with differing values in every time-domain data point, the sine's phase relative to the start of the data record coming from the arg of the complex FFT data point. All of the information is preserved in either view - just what is localized differs - and that is the power of looking at the data in both domains.
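The duality described above is easy to check numerically. A minimal sketch with numpy (the record length N=64 and the impulse/bin positions are arbitrary illustration values):

```python
import numpy as np

N = 64                 # record length (arbitrary for illustration)
k = 5                  # impulse position (arbitrary)
n = np.arange(N)

# A single-point "Dirac" impulse in the time domain...
x = np.zeros(N)
x[k] = 1.0
X = np.fft.fft(x)

# ...has the same magnitude in every FFT bin,
assert np.allclose(np.abs(X), 1.0)
# with a smooth linear phase ramp encoding the impulse position:
assert np.allclose(np.unwrap(np.angle(X)), -2 * np.pi * n * k / N)

# The dual: a single nonzero FFT bin is a pure sinusoid in time.
Y = np.zeros(N, dtype=complex)
Y[3] = N
y = np.fft.ifft(Y)
assert np.allclose(y.real, np.cos(2 * np.pi * 3 * n / N))

# Round trip: nothing is lost - accuracy is limited only by the
# floating-point arithmetic word length.
assert np.max(np.abs(np.fft.ifft(np.fft.fft(x)).real - x)) < 1e-12
```

The constant-magnitude spectrum and the exact round trip are the two halves of the "nothing is averaged away" point.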

Nice explanation - the 'averaging' that occurs (without data loss) is because the FFT is a histogram, a point lost on not a few people who present data from it as if it's a graph. So merely being presented with an FFT plot hides some vital information - the bin bandwidth. The various mis-presenters then go on to point to a 'noise floor' (when the 'line' of grass is fairly straight and horizontal) and claim that's the noise floor of whatever they were measuring.

I don't find your distinction any more or less relevant than saying a single number in a digital audio time series is a "bin" of the continuous signal "averaged" over the inter-sample time. Interpreting FFT bins does require knowing the record length in addition to the sample rate, because frequency is inherently a distributed-in-time concept, but that really isn't "extra" knowledge - count the bins and look at the frequency-axis labels. It's no different from getting the time-domain record length from the sample rate and the number of points.

With effective band-limiting, complying with the Nyquist limit, the digital audio time series of numbers is a practical, useful, even sometimes musically satisfying representation of the continuous signal. With the Shannon-Hartley channel capacity theorem, the analog noise floor also puts a practical limit on the bit depth needed.

"So merely being presented an FFT plot hides some vital information" doesn't seem relevant to engineers applying the tool to find evidence for or against possible delta-sigma error vs. an R-2R DAC. It really doesn't seem useful to harp on the limitations of poor popularized presentations in a way that could easily discredit a valuable tool in the minds of people without the specialized education - which is what the carpet analogy seems to be trying to do.
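The "count the bins" point can be made concrete: the bin bandwidth falls straight out of the sample rate and the number of points, exactly as the record length does. A quick sketch (fs and N are assumed example values, not from any measurement in the thread):

```python
import numpy as np

fs = 48_000          # sample rate, Hz (example value)
N = 4096             # number of points in the FFT record (example value)

T = N / fs           # time-domain record length, s
df = fs / N          # frequency-domain bin bandwidth, Hz

# Record length and bin bandwidth are two views of the same number:
assert abs(df - 1 / T) < 1e-9

# numpy labels its bins the same way - bin m sits at m * fs / N:
freqs = np.fft.fftfreq(N, d=1 / fs)
assert np.isclose(freqs[1], df)
assert np.isclose(freqs[N // 2 - 1], fs / 2 - df)  # last positive bin, just under Nyquist
```

So given a labelled frequency axis and a bin count, nothing about the record is hidden.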

In the end, I agree with you, even if I have my own way of looking at essentially the same thing. I personally think headphones still have a long way to go, easily more than any other kind of gear. I think the booming VR development is going to advance research in HRTF/binaural technology, so I expect to see some amazing stuff soon on both the hardware and software sides.

Thanks for this explanation - very interesting! Notwithstanding that it sounds like Otala's theory of unavoidable TIM from feedback amps has been disproven, would you say there is general acceptance of Pass' comments in the article I previously linked - that negative feedback, while dramatically lowering overall distortion, noise, output impedance, etc., increases non-linear and higher-order distortions? (See, e.g., figures 10 and 11 - https://passlabs.com/articles/audio-distortion-and-feedback ) I have sort of taken this article as gospel as to what is measurable (audibility in controlled tests being a separate issue), but would be interested to know whether Pass' assertions are controversial among audio engineers. I know nwavguy emphatically argued that 1.5 V/us should be adequate for any headphone amp, and then basically doubled that in designing the O2. After reading bits and pieces about TIM and slew rate, though, I ended up building agdr's booster circuit and swapping out the input opamp for a pair of LME49990s, which bumped it up to around 20 V/us (see http://www.head-fi.org/t/616331/o2-amp-odac/2145#post_10350437 - which puts the modded O2's slew rate above your referenced baseline and just below that of the se-se wire). It seems to sound slightly better to me than stock, although that could well be confirmation bias, a result of lower DC offset, or whatever. At the end of the day, for those of us with more scientific interest than ability, it is really hard to know what to believe about this hobby, manufacturer descriptions and specs, and the aspects of design that matter most.
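For context on those slew-rate figures: the usual baseline comes from the peak derivative of a full-scale sine, SR = 2*pi*f*Vpeak. A quick sanity check (the 20 kHz / 10 V peak numbers are illustrative worst-case values I'm assuming, not taken from any of the linked posts):

```python
import math

def required_slew_rate(f_hz: float, v_peak: float) -> float:
    """Peak dV/dt of a sine v(t) = Vpeak * sin(2*pi*f*t), in V/s."""
    return 2 * math.pi * f_hz * v_peak

# Full-scale 20 kHz sine at 10 V peak - a harsh case for a headphone amp:
sr_v_per_us = required_slew_rate(20e3, 10.0) / 1e6
print(f"{sr_v_per_us:.2f} V/us")   # about 1.26 V/us
```

That worst case lands just under nwavguy's 1.5 V/us figure, which would make the modded O2's ~20 V/us a very large margin rather than a bare-minimum spec.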

That's my point - when presented with an FFT as a graph (rather than as a histogram), where is the information about the record length and the sample rate employed (not to mention the number of averages performed)? Some presentations give some of that information; few give all of it. Without that data it's not possible to 'count the bins'. In a time-domain graph (say, a scope plot) it's normal to see both axes labelled (V, t). I'm not out to discredit the FFT as a valuable tool here - I agree it's useful and powerful. My point is more that a large number of its users don't thoroughly understand it when employing it.
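The 'noise floor' grass issue can be demonstrated directly: the same signal with the same true noise power plots at a different 'floor' level depending on the FFT length, because narrower bins each catch less noise. A sketch with synthetic white noise (the RMS level, window, and FFT lengths are arbitrary assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1e-3, 2**16)   # white noise with a fixed true RMS

def grass_level_db(x, N):
    """Median FFT-bin magnitude in dB - the visual 'floor' on a plot."""
    X = np.fft.rfft(x[:N] * np.hanning(N))
    return 20 * np.log10(np.median(np.abs(X) / (N / 2)))

f_short = grass_level_db(noise, 1024)
f_long = grass_level_db(noise, 65536)

# Same signal, same true noise power - but the longer FFT's narrower bins
# plot a 'floor' roughly 10*log10(65536/1024) ~= 18 dB lower.
print(f_short - f_long)
```

Which is exactly why quoting that grass as "the noise floor of the device" is meaningless without the bin bandwidth.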