Thoughts on a bunch of DACs (and why delta-sigma kinda sucks, just to get you to think about stuff)
Jun 22, 2015 at 10:53 AM Post #5,941 of 6,500
The Realtek device had a 1.4 dB channel imbalance through the first few tests, and that made it identifiable. Not anymore after they corrected it. It was a clear mistake, but fixing it makes the whole thing even more credible, and harder to swallow.

And about thinning those $10K, I am not 100% sure what you meant, but I am open to suggestions ... e.g. for a start, please tell me how to "thin" this

P.S.
As for Lingling & co.'s public displays of love, I am a shy guy and such things make me blush. Sorry guys, I'm still not the subject of this thread. But if you really cannot stop those effusions, just open a new thread and/or feel free to PM ... BTW, I'm very partial to logic and beautiful audio gifts


Thought you had $10k in headphones. When it comes to speakers I'm not qualified, sorry :).
 
As to the test, this is the straw that breaks the camel's back:
 We also want to explore this codec's output impedance. At 77 Ω for the recommended implementation, it is by far the highest (almost by an order of magnitude over the second-highest) in our round-up. Is that a factor in the real world?

 
What? 77 Ω and they couldn't hear a difference on the HD800? These guys are either deaf or they don't know how to connect their setup properly.
 
Jun 22, 2015 at 11:49 AM Post #5,942 of 6,500
I looked at Tyll's InnerFidelity HD800 impedance (Z) plot - say 350 Ω through the midband, 650 Ω at the ~100 Hz bass bump
 
then do the divider math with 77 Ω and I get a 0.75 dB frequency-response difference between the bass bump and the midband - not immediately obvious that you could tell from Clark's ABX threshold plot for ~2 octaves centered at 100 Hz (you have to visually interpolate, which adds to the guesswork)
 

 
Clark, David L., "High-Resolution Subjective Testing Using a Double-Blind Comparator", Journal of the Audio Engineering Society, Vol. 30 No. 5, May 1982, pp. 330-338
ABX Amplitude vs. Frequency Matching Criteria: http://home.provide.net/~djcarlst/abx_crit.htm
 
 
 
certainly the bigger needed correction is for average level: 20*log10(350/(350+77)) = -1.7 dB
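
A minimal sketch of that divider math, assuming the rough HD800 impedance figures above (350 Ω midband, 650 Ω at the ~100 Hz resonance) against the codec's 77 Ω source:

```python
import math

def divider_loss_db(load_ohms, source_ohms):
    """Level drop across a resistive source/load voltage divider."""
    return 20 * math.log10(load_ohms / (load_ohms + source_ohms))

z_source = 77.0   # Realtek codec output impedance per the review
z_mid = 350.0     # rough HD800 midband impedance
z_bass = 650.0    # rough HD800 impedance at the ~100 Hz resonance

loss_mid = divider_loss_db(z_mid, z_source)    # about -1.7 dB
loss_bass = divider_loss_db(z_bass, z_source)  # about -1.0 dB
print(f"average level correction: {loss_mid:.2f} dB")
print(f"relative bass lift:       {loss_bass - loss_mid:.2f} dB")  # ~0.75 dB
```

The ~1.7 dB average drop is a volume-matching problem; the ~0.75 dB tilt at the bass bump is the actual frequency-response change.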
 
Jun 22, 2015 at 11:55 AM Post #5,943 of 6,500
  Excellent - and very informative. However, for all its detail and everything it covers, bear in mind that this is still a "basic and general document", and so covers a lot of areas in relatively shallow detail (and a lot of it isn't especially relevant to audio applications).
 

 
Here's another PDF (see page 44 onwards) that covers the audio-related implementation of zeroth-order hold, analog filters, and anti-alias filters:
 
http://www.analog.com/media/en/technical-documentation/dsp-book/dsp_book_Ch3.pdf
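
As a side note on the zeroth-order hold that chapter covers: the held output follows a sin(x)/x envelope. A minimal sketch of the droop this causes across the audio band, assuming a 44.1 kHz output with no sin(x)/x compensation:

```python
import math

def zoh_droop_db(f_hz, fs_hz):
    """Zeroth-order-hold magnitude response, 20*log10(sin(x)/x) with x = pi*f/fs."""
    x = math.pi * f_hz / fs_hz
    return 20 * math.log10(math.sin(x) / x)

fs = 44100.0
for f in (1000.0, 10000.0, 20000.0):
    print(f"{f/1000:>4.0f} kHz: {zoh_droop_db(f, fs):+.2f} dB")
# roughly -0.01 dB at 1 kHz, -0.75 dB at 10 kHz, -3.2 dB at 20 kHz,
# which is why DACs either oversample first or apply sin(x)/x compensation
```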
 
 
I think we also have to note why Analog Devices gives only very basic information (quote from the last page of the PDF):
 While these explanations and examples provide an introduction to single bit ADC and DAC, it must be emphasized that they are simplified descriptions of sophisticated DSP and integrated circuit technology. You wouldn't expect the manufacturer to tell their competitors all the internal workings of their chips, so don't expect them to tell you. 
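
Since the chapter (and this thread's title) is about single-bit conversion, here is a minimal toy sketch of a first-order delta-sigma modulator, the basic building block those simplified descriptions refer to - nothing like a production chip:

```python
import numpy as np

def first_order_dsm(x):
    """First-order delta-sigma modulator: turns a +/-1-range signal into a
    +/-1 bitstream whose local average tracks the input; the quantization
    error is pushed to high frequencies, to be removed by a lowpass filter."""
    v = 0.0                           # integrator state
    bits = np.empty_like(x)
    for i, s in enumerate(x):
        y = 1.0 if v >= 0 else -1.0   # 1-bit quantizer
        bits[i] = y
        v += s - y                    # integrate the quantization error
    return bits

# a slow sine, heavily oversampled
t = np.arange(20000)
x = 0.5 * np.sin(2 * np.pi * t / 2000.0)
bits = first_order_dsm(x)

# crude decimation filter: a moving average recovers the sine
recovered = np.convolve(bits, np.ones(64) / 64, mode="same")
print(np.max(np.abs(recovered - x)))  # residual ripple from the shaped noise
```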

 
Jun 22, 2015 at 12:02 PM Post #5,944 of 6,500
There's also this quite intriguing quote from the PDF I just posted (page 57):
 
It is important to understand that none of these options will allow the original signal to be reconstructed from the sampled data. This is because the original signal inherently contains frequency components greater than one-half of the sampling rate.
 
Since these frequencies cannot exist in the digitized signal, the reconstructed signal cannot contain them either. These high frequencies result from two sources: (1) noise and interference, which you would like to eliminate, and (2) sharp edges in the waveform, which probably contain information you want to retain.
 
The Chebyshev filter, shown in (b), attacks the problem by aggressively removing all high frequency components. This results in a filtered analog signal that can be sampled and later perfectly reconstructed. However, the reconstructed analog signal is identical to the filtered signal, not the original signal.
 
Although nothing is lost in sampling, the waveform has been severely distorted by the antialias filter. As shown in (b), the cure is worse than the disease!
 
Don't do it! The Bessel filter, (c), is designed for just this problem. Its output closely resembles the original waveform, with only a gentle rounding of the edges. By adjusting the filter's cutoff frequency, the smoothness of the edges can be traded for elimination of high frequency components in the signal. Using more poles in the filter allows a better tradeoff between these two parameters.
 
A common guideline is to set the cutoff frequency at about one-quarter of the sampling frequency. This results in about two samples along the rising portion of each edge.
 
Notice that both the Bessel and the Chebyshev filter have removed the burst of high frequency noise present in the original signal.  
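
A minimal sketch of that tradeoff (my own illustration, not from the book): filter a sharp edge through a Chebyshev and a Bessel lowpass at the same cutoff and compare the step responses.

```python
import numpy as np
from scipy import signal

fs = 1000.0          # sample rate, Hz (arbitrary for illustration)
fc = fs / 4          # cutoff at one-quarter of the sample rate, per the guideline
t = np.arange(0, 0.1, 1 / fs)
edge = (t > 0.05).astype(float)   # a sharp edge, i.e. a step

# 4-pole Chebyshev (0.5 dB ripple) vs 4-pole Bessel at the same cutoff
b_cheb, a_cheb = signal.cheby1(4, 0.5, fc, fs=fs)
b_bess, a_bess = signal.bessel(4, fc, fs=fs)

out_cheb = signal.lfilter(b_cheb, a_cheb, edge)  # rings and overshoots at the edge
out_bess = signal.lfilter(b_bess, a_bess, edge)  # gently rounds the edge instead

print("Chebyshev overshoot:", out_cheb.max() - 1)  # noticeably above zero
print("Bessel overshoot:   ", out_bess.max() - 1)  # close to zero
```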

 
How did Schiit manage to do it with their closed-form filter? It's kind of a mystery.
 
Jun 22, 2015 at 12:11 PM Post #5,945 of 6,500
the DAC's filter can't do anything about the mic-preamp-filter-ADC chain
 
studio practice is to capture at higher resolution, and the engineers have choices you have no control over in the downsampling decimation/filter and word-length reduction process - dither is also near-universally used today to produce the 16/44 release from the studio's internal format(s)
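 
A minimal sketch of that word-length reduction step, assuming the common TPDF dither ahead of rounding to 16 bits:

```python
import numpy as np

rng = np.random.default_rng(0)

def requantize_16bit(x, dither=True):
    """Reduce a float signal (scaled to +/-1.0) to 16-bit words.

    TPDF dither of +/-1 LSB decorrelates the quantization error from the
    signal, trading correlated truncation distortion for a benign noise floor.
    """
    lsb = 1.0 / 2**15
    if dither:
        x = x + (rng.uniform(-0.5, 0.5, x.shape) +
                 rng.uniform(-0.5, 0.5, x.shape)) * lsb  # triangular PDF
    return np.round(x / lsb) * lsb

t = np.arange(44100) / 44100.0
tone = 1e-4 * np.sin(2 * np.pi * 1000 * t)       # low-level tone, ~ -80 dBFS

plain = requantize_16bit(tone, dither=False)     # gritty, signal-correlated error
dithered = requantize_16bit(tone, dither=True)   # uncorrelated noise instead
```

Which is also why strict "bit perfect" talk only describes the playback path after mastering: the 16/44 release data already carries the dither noise baked in.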
 
Moffat went ballistic when several people commented that dither and noise make the "bit perfect, dammit" claims rather curious
 
Jun 22, 2015 at 12:26 PM Post #5,946 of 6,500
Thought you had $10k in headphones. When it comes to speakers I'm not qualified, sorry :).
As to the test, this is the straw that breaks the camel's back:
What? 77 Ω and they couldn't hear a difference on the HD800? These guys are either deaf or they don't know how to connect their setup properly.


Don't know about deaf, but I'm pretty sure they know how to connect & test a few HW components. I don't have access to such a Realtek chip, so I cannot say whether that dumb output impedance really affects the HD800. AFAIK, the HD800 works pretty well with OTL amps (e.g. Bottlehead), which may have an even bigger output impedance.


KeithEmo
100% with you about the computer soundchips of yesteryear ... they were all quite cr*ptastic; you could hear all sorts of PC noises, even mouse movement in some cases. However, the last-gen chips & implementations seem to be a different game ... more details in another Head-Fi thread and from a PC forum ... and a sample last-gen implementation. There are many comments on Head-Fi and elsewhere stating that those things sound quite a lot better than the usual motherboard chips of yore, and it could very well be that (at least) some implementations are already transparent/inaudible.
Times they are a-changing :)

P.S. People and their biases, not so much/not so fast, apparently
 
Jun 22, 2015 at 12:36 PM Post #5,947 of 6,500

 
Out of interest, what is your 10K speaker rig? Does it include a stand-alone DAC? Nothing wrong with questioning things, in general and even more so in audio. It's just unusual to see people with this level of investment having doubts about how computer chips vs. good stand-alone DACs comparatively perform.
 
Jun 22, 2015 at 1:20 PM Post #5,948 of 6,500
Out of interest, what is your 10K speaker rig? Does it include a stand-alone DAC? Nothing wrong with questioning things, in general and even more so in audio. It's just unusual to see people with this level of investment having doubts about how computer chips vs. good stand-alone DACs comparatively perform.


Maybe it is quite unusual, I do not know ... personally, I just do not take anything for granted ... and even less when it comes to hi-fi :)

Anyway, I'm not the one questioning DACs here; it's Tom's review. All I want to know is whether anyone can find some serious issues with that review ... because otherwise it is right, and I've got to start questioning some of my audio investments ... at least some future ones. I did post some details about my setup in various threads (including a link not far above), but I'll just PM you because there is no need to brag here.
 
Jun 22, 2015 at 1:44 PM Post #5,949 of 6,500
http://www.ti.com/lit/an/slaa523a/slaa523a.pdf
 
page 11:
PLL Clock Mode: The PLL clock mode allows the user to input a reference clock at the data rate. The internal VCO/PLL will then generate the higher-frequency DAC clock from the reference clock. This mode reduces system cost and complexity by allowing the designer to use the DAC without the need for a higher-speed clock. However, the PLL/VCO option often generates more phase noise than an external clock. This added phase noise will affect the DAC's SNR and SFDR performance.

 
It seems like PLL/VCO clocking adds phase noise.
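 
For a sense of scale, a minimal sketch using the standard jitter-limited-SNR relation, SNR = -20*log10(2*pi*f*t_j), for a full-scale sine (my numbers, not TI's):

```python
import math

def jitter_limited_snr_db(f_hz, jitter_s):
    """Best-case SNR for a full-scale sine at f_hz with rms clock jitter."""
    return -20 * math.log10(2 * math.pi * f_hz * jitter_s)

f = 20_000.0  # worst-case audio frequency
for tj in (1e-9, 100e-12, 10e-12):
    print(f"{tj*1e12:>6.0f} ps rms jitter -> "
          f"{jitter_limited_snr_db(f, tj):.0f} dB SNR ceiling")
# 1 ns -> ~78 dB, 100 ps -> ~98 dB, 10 ps -> ~118 dB:
# a noisy PLL/VCO clock directly caps the achievable SNR at high frequencies
```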
 
Jun 22, 2015 at 2:34 PM Post #5,950 of 6,500
 
Hugo and Dave don't use any kind of DAC chip; the analogue conversion is discrete, using pulse array. The key benefit of pulse array - something I have not seen any other DAC technology achieve at all - is an analogue-type distortion characteristic. By this I mean, as the signal gets smaller, the distortion gets smaller too. Indeed, I have posted before about Hugo's small-signal performance - once you get to below -20 dBFS, distortion disappears - no enharmonic distortion, no harmonic distortion, and no noise-floor modulation as the signal gets smaller. With Dave, the performance is even more remarkable - a noise floor that measures at -180 dB and is completely unchanged from 2.5 V RMS output to no signal at all. And the benefit of an analogue character? Much smoother and more natural sound quality, with much better instrument separation and focus. Of course, some people like the sound of digital hardness - the aggression gets superficially confused with detail resolution - but it quickly tires with listening fatigue, and poor timbre variation, as all instruments sound hard, etched and up front. But if you like that sound, then fine; it's not for me.
 
On the digital filter front - original samples getting modified - actually the vast majority of FIR digital filters retain the original samples untouched; these are known as half-band filters. In this case, the coefficients are arranged so that one set is zero with one coefficient being 1, so the original sample is returned unchanged, with the other set being used to create the new interpolated value. The key benefit of half-band filters is that the computation is much easier, as nearly half the coefficients are zero, plus the filter can be folded so that the number of multiplications is a quarter of that of a non-half-band filter. When designing an audio DAC ASIC, the key part in terms of gate count is the multiplier, so reducing this gives a substantial improvement in die size, and hence cost. So traditional digital filters use a cascade of half-band filters; each half-band filter doubles up the oversampling, so a cascade of 3 half-band filters will give you an 8-times-oversampled signal, with one sample in each group being the unmodified original data. You can tell if the filter is like this because at FS/2 (22.05 kHz for CD) the attenuation is -6 dB. The filters that are not like this are the so-called apodising filters, and my filter, the WTA filter.
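 
Not Rob's code, just a minimal scipy illustration of the half-band property he describes: a windowed-sinc lowpass with the cutoff at a quarter of the (oversampled) rate has alternating zero taps and sits at -6 dB at the half-band frequency.

```python
import numpy as np
from scipy import signal

# Windowed-sinc lowpass with cutoff at fs/4: this is a half-band filter.
taps = signal.firwin(31, 0.5)   # cutoff 0.5 = half of Nyquist = fs/4

# Half-band property: every other coefficient is zero (except the center),
# so the original samples pass through the 2x interpolator unchanged.
print(np.round(taps, 6))

# Attenuation at the half-band frequency is -6 dB, the giveaway Rob
# mentions at FS/2 of the input rate (22.05 kHz for CD).
w, h = signal.freqz(taps, worN=4096)
idx = np.argmin(np.abs(w - np.pi / 2))   # normalized fs/4
print(20 * np.log10(np.abs(h[idx])))     # ~ -6.0 dB
```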
 
Going back eighteen years to the late '90s, I was developing my own FIR filter using FPGAs. Initially, I was interested in increasing the FIR filter tap length, as I knew from the mathematics of sampling theory that timing errors were reduced with increasing tap length. So the first test was to use half-band Kaiser filters - going from 256 taps to 2048 taps gave an enormous sound quality improvement, so I had confirmed that tap length was indeed important subjectively. But at this point I was stuck; I knew that an infinite-tap-length filter with a sinc impulse response would return the original un-sampled signal perfectly - but the sinc function using only 16-bit-accurate coefficients needs a 1M-tap FIR filter, and that would never happen, certainly not with '90s technology. So was it possible to improve the timing accuracy without using impossible tap lengths? After a lot of thinking and research, I thought there was a way - but it meant using a non-half-band filter, which would mean that the original sampled data would be modified. This was a big intellectual stumbling block - how can changing the original data be a good thing? But the trouble with audio is that neat, simplistic ideas or preconceptions get in the way. Reality is always different, and reality can only be evaluated by a careful AB listening test. So I went ahead with this idea and listened to the first WTA filter algorithm - and indeed it made a massive improvement in SQ: a 256-tap WTA sounded much better than a 2048-tap half-band Kaiser, even though the data is being modified. Why is this? The job of a DAC is NOT to reproduce the data it is given, but to reproduce the analogue signal before it was sampled. The WTA filter reconstructs the timing of the original transients much more accurately than half-band filters or filters that preserve the original data, and it is the timing of transients that is the most important SQ aspect.
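 
The sampling-theory claim in there is the Whittaker-Shannon one: with an infinite sinc kernel, reconstruction between the samples is exact, and truncating the kernel is what introduces error. A minimal sketch on a toy bandlimited signal (my illustration, not Rob's method):

```python
import numpy as np

fs = 100.0
n = np.arange(-4096, 4096)
x = np.cos(2 * np.pi * 7.3 * n / fs)           # bandlimited toy signal

def sinc_reconstruct(t, num_taps):
    """Estimate x(t) between samples with a truncated sinc kernel."""
    k0 = int(np.floor(t * fs))
    ks = np.arange(k0 - num_taps // 2, k0 + num_taps // 2)
    return float(np.sum(x[ks + 4096] * np.sinc(t * fs - ks)))

t0 = 10.37 / fs                                 # an off-grid instant
true_value = np.cos(2 * np.pi * 7.3 * t0)
for taps in (16, 64, 256, 2048):
    err = abs(sinc_reconstruct(t0, taps) - true_value)
    print(f"{taps:>5} taps: error {err:.2e}")
# the error shrinks only slowly with tap count, which is why exact
# reconstruction needs impractically long filters
```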
 
So the moral of the tale? Don't let a simplistic technical story get in the way of enjoying music!                
  
 Rob
 


 
Looks like Rob Watts is claiming that his WTA filter is better than filters that preserve the original data (i.e. Schiit's closed-form filter)?
 
And this:
Quote:
 The job of a DAC is NOT to reproduce the data it is given, but to reproduce the analogue signal before it is sampled. 

 
How does a DAC know what to reproduce other than the data it is given (garbage in, garbage out)??? Unless Rob Watts has some kind of method/maths that compensates for the analogue-to-digital converter's signal loss? This sounds like an MQA type of solution.
 
http://www.audiostream.com/content/mqa-ltd
 
 So the ideal MQA story begins in the recording studio where the more information gathered about the specific equipment used, including and most importantly the A/D converter, the more the MQA technology can correct for the sonic anomalies found in these devices. Even ideally-er is for the A/D conversion to happen inside an MQA converter 

 
Jun 22, 2015 at 2:51 PM Post #5,951 of 6,500
Maybe it is quite unusual, I do not know ... personally, I just do not take anything for granted ... and even less when it comes to hi-fi :)

Anyway, I'm not the one questioning DACs here; it's Tom's review. All I want to know is whether anyone can find some serious issues with that review ... because otherwise it is right, and I've got to start questioning some of my audio investments ... at least some future ones. I did post some details about my setup in various threads (including a link not far above), but I'll just PM you because there is no need to brag here.

 
Again, I wouldn't disagree with the review. The amping issue is a legitimate one. I know they were using built-in amps on the Xonar and DAC2, but the fact remains: they were still using different amps. Having owned the Xonar a while back, I know the headamp on that card sucks. The Xonar's line-outs into a good headamp sound much better. I would assume the same for the DAC2: I'm sure the headamp was more of an afterthought than the DAC circuitry. The Objective2 amp: that amp is well known. So basically, when you break things down, the comparison is this:
 
1) Motherboard DAC/amp out
2) Xonar DAC | Xonar headout (with crappy headphone chip)
3) Benchmark DAC2 DAC | DAC2 headout (probably not as much attention paid to it compared to the DAC section, so likely a bottleneck)
4) ODAC | Objective 2 headamp
 
Of the tests, the only consistently identifiable setup was the motherboard out. I am not surprised. Basically you are comparing three mediocre/low-end setups and one really crappy one (which was easily identified). It was a good test, but the conclusions are flawed. Also, it's obvious there was a huge confirmation-bias thing going on (the photo of the McIntosh tube amp, which had nothing to do with the test, is a huge red flag of NwAvGuy syndrome).
 
Even they admitted they were amateurs. It would be like me comparing 5x7 prints from a $3000 Nikon and a $150 point-and-shoot. They'd probably all look the same to me.
 
Jun 22, 2015 at 3:11 PM Post #5,953 of 6,500
I looked at Tyll's InnerFidelity HD800 impedance (Z) plot - say 350 Ω through the midband, 650 Ω at the ~100 Hz bass bump
 
then do the divider math with 77 Ω and I get a 0.75 dB frequency-response difference between the bass bump and the midband - not immediately obvious that you could tell from Clark's ABX threshold plot for ~2 octaves centered at 100 Hz (you have to visually interpolate, which adds to the guesswork)
 

 
Clark, David L., "High-Resolution Subjective Testing Using a Double-Blind Comparator", Journal of the Audio Engineering Society, Vol. 30 No. 5, May 1982, pp. 330-338
ABX Amplitude vs. Frequency Matching Criteria: http://home.provide.net/~djcarlst/abx_crit.htm
 
 
 
certainly the bigger needed correction is for average level: 20*log10(350/(350+77)) = -1.7 dB

 
@JCX
Well, they actually did hear a difference, since they supposedly corrected the volume matching later. Additionally, they actually managed a 100% result on the Daft Punk track, but it didn't seem to satisfy them, so they redid the test and it failed; I'm still scratching my head over that. Besides that, well, what Purrin said.
 
The output impedance is one of the reasons people use OTL amps with the HD6X0 and HD800: to get the bass bump.
 
Jun 22, 2015 at 3:19 PM Post #5,954 of 6,500
Originally Posted by frenchbat
 
Well, they actually did hear a difference, since they supposedly corrected the volume matching later. Additionally, they actually managed a 100% result on the Daft Punk track, but it didn't seem to satisfy them, so they redid the test and it failed; I'm still scratching my head over that. Besides that, well, what Purrin said.

 
Indeed, another sign of confirmation bias. I use the Daft Punk record a lot for evaluating gear. The material on it is challenging for a lot of systems: lots of bass, start-stop, low-end extension, effects, difficult waveforms, and even surprisingly good plankton.
 
They get a 100% result the first time, so they need to call "BS", so they try again (after their ears are fatigued) so they can do worse. A proper test would have been to conduct each test (not just the Daft Punk) several more times, perhaps on separate days.
 
Confirmation bias goes both ways.
 
