Thoughts on a bunch of DACs (and why delta-sigma kinda sucks, just to get you to think about stuff)
Jun 4, 2015 at 3:05 PM Post #5,641 of 6,500
Well, I saw it, but it's kinda pricey... lol. I thought they could make something in the sub-$1000 range.


There are about 1000 sub-$1000 amps that sound better than Schiit :wink:. E.g. almost any modern receiver. And unless you wanna listen at kill-the-neighbors levels or have some funky super-demanding speakers, they all sound good and pretty much transparent.
It's not 1980 anymore and we are not in any schiit when it comes to amps :wink:
 
Jun 4, 2015 at 4:33 PM Post #5,642 of 6,500
It is far, far worse than this :frowning2: Yggy also depends on quantum mechanical theory that dates back to - like - the 1930s and it uses electricity :eek: This is tech that's more than a century old and electricity...are you kidding me...that's been around for like billions of years.

Yggy is so yesterday's tech and cannot be taken seriously.

Isn't it a fact? And this: Yggy uses the AD R2R DAC chip because they want to use that old digital filter, which was designed only for an R2R chip. Non-standard digital filters are proprietary = expensive.
 
Jun 4, 2015 at 4:55 PM Post #5,643 of 6,500
The reconstruction/anti-alias filter design is relatively independent of the DAC type for common audio DACs capable of the oversampling rates that allow the filter to be digital.
 
But many monolithic DACs do have the filters built in and save chip area with half-band, multi-rate filters, which constrains the filter designs a bit.
 
In principle you could reproduce Mike's magic filter's pass, stop, and transition bands, and its pre/post ringing, as exactly as you like even in a delta-sigma DAC.
 
of course the DS noise shaping makes any claim of "exactness" moot
 
 
But so does all of the audio signal chain's noise; the required added dither makes the Yggy filter's "preserves exact samples" claim sound like magical thinking when the whole system is looked at.
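A quick illustration of the dither point, using a toy mid-tread quantizer (my own sketch, nothing from any particular DAC's pipeline): once TPDF dither is added before requantization, every output sample differs from the input, so "exact samples" only holds up to the dither/noise floor.

```python
import random

def quantize(x, step):
    """Ideal mid-tread quantizer: round to the nearest step."""
    return step * round(x / step)

def quantize_tpdf(x, step, rng):
    """Quantize with TPDF dither: add +/-1 LSB triangular noise first."""
    d = rng.uniform(-step / 2, step / 2) + rng.uniform(-step / 2, step / 2)
    return step * round((x + d) / step)

rng = random.Random(0)
step = 2 / 2**16              # one 16-bit LSB on a +/-1 full-scale signal
x = 0.3333                    # arbitrary sample not on a quantizer step
plain = quantize(x, step)
dithered = [quantize_tpdf(x, step, rng) for _ in range(10_000)]
mean = sum(dithered) / len(dithered)

# Undithered: one fixed, signal-correlated error. Dithered: every output
# is "wrong" sample by sample, but the error is noise-like and averages out.
print(plain != x, abs(mean - x) < step / 4)
```

The trade is deliberate: dither randomizes the error so it decorrelates from the signal, at the cost of ever being bit-exact.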
 
Jun 4, 2015 at 6:28 PM Post #5,644 of 6,500
Did anyone notice my post about the Krell Studio DAC that uses 2x PCM63-P and 2x Motorola DSP56001?
 
I think Mike M. had some competition back in the early 90's
 
Jun 4, 2015 at 6:33 PM Post #5,645 of 6,500
I didn't notice the chips used, but was certainly interested. That DAC might be worthwhile if you can pick it up for $500-$600.
 
Jun 4, 2015 at 7:12 PM Post #5,646 of 6,500
Now that my play is over, it is with blinding speed that I comment on the ENOB exchange seen in this thread several pages back. Now, I may need to reread it, but the emphasis seemed to be on more bits equals more dynamic range. Fair enough, but there is much more involved.
 
Analog audio has increasing distortion with increasing level; digital audio has increasing quantization error (which translates as well to distortion) with decreasing level. The former, I argue, is intuitive – the latter counterintuitive.
 
Just for the sake of a starting point, let us posit an analog signal-to-noise ratio of 72 dB. It is a commonly accepted fact of analog radio voice communication that weak signals well down into the noise can be clearly understood. It is also clearly possible to hear subtleties and spatial cues into the noise on good analog recordings. In a 16-bit system, there remain 4 bits worth of quantization. At this level, one has 4 bits of resolution, which is a 1 part in 16 error, or 6.25%.
 
The way the Yggy works, we have 20-bit time and frequency domain samples inserted between the originals, which leaves 8 bits worth of quantization, with a 1 part in 256 error, or just under 0.4%. A lot better. This is exactly why Redbook 16/44.1 does not and will never scratch my itch.
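To put numbers on that, here is the same arithmetic as a sketch, assuming (per the above) that a 72 dB analog noise floor eats roughly 12 bits; the function name and `noise_floor_bits` parameter are just my labels:

```python
def residual_error_pct(word_bits, noise_floor_bits=12):
    """Worst-case quantization error, as a percentage, for a signal that
    sits noise_floor_bits below full scale (72 dB ~ 12 bits, per the post)."""
    bits_left = word_bits - noise_floor_bits
    return 100 / 2**bits_left

print(residual_error_pct(16))  # → 6.25      (4 bits left: 1 part in 16)
print(residual_error_pct(20))  # → 0.390625  (8 bits left: 1 part in 256)
```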
 
I have been referring to the DSP in the Yggy as the megaburrito filter. In a recent conversation, Jason pointed out to me that it is really a megacomboburrito filter, since it uniquely optimizes time and frequency domains. This is what causes Yggy users, on a variety of systems to report hearing subtleties previously not experienced.
 
One more comment – I have received many requests for certain analog topologies to be incorporated into the Yggy. I also get questions on how I voice the Yggy with its chosen analog.
 
Please hear this – the Yggy has been deliberately designed with a DAC output so high that only a buffer is required. This is significant because buffers tend to have far less perceptible sonic differences between them than gain stages. This means that the topology of the Yggy's analog stage is as close to sonically irrelevant as possible. What you hear (or not) is chiefly the result of the digital stuff within. I believe that it is misguided (and really expensive) to attempt to “voice” your system with a DAC. There are many, many amplifiers available to accomplish that.
 
The only reason to “voice” a DAC with analog is to cover up what your DAC does too much of or doesn't do at all. Kinda like makeup. A really beautiful girl does not need it.
 
Jun 4, 2015 at 7:14 PM Post #5,647 of 6,500
  Did anyone notice my post about the Krell Studio DAC that uses 2X PCM63-P and 2x Motorola DSP56001?
 
I think Mike M. had some competition back in the early 90's


The Theta Gen V balanced had 4 PCM63s, 3 DSP56001s, and the megacomboburrito filter at just over half the retail price, some two years before the Studio.
 
Jun 4, 2015 at 7:23 PM Post #5,648 of 6,500
The issue with sigma-delta DACs is that they take 16-bit/24-bit input and convert it to bitstream data a few bits wide (5 bits for Sabre/Hugo/DAVE, fewer for other designs) at very high rates (2.8 MHz for the ESS Sabre, 104 MHz for the DAVE). From what I understand, this conversion is quite destructive (time-domain or sample-wise) and is a lossy/decimation process. There is also a lot of complex feedback at work (in the case of HyperStream), and these feedback/noise-shaper algorithms are not fully understood even by the DAC designers (check out the video of Rob Watts explaining noise-shaping simulation below; it's perplexing even at his level). So much "black art" is involved in designing sigma-delta DACs, when a pragmatic DAC designer can just stick to high-precision R2R to get a really good signal out of the decoder and design whatever filter code/analog stage is necessary to get good sound. The money spent on DSP cores/FPGAs for noise shaping is getting higher than the cost of a high-precision R2R chip, not to mention that all these high-MHz cores may leak more EMI/RFI (electrical noise) into the surrounding audio components and require more filtering, PCB noise management, power conditioning, etc. (increased cost/design time).
 
Rant: effing money-grubbing Texas Instruments killing off the old Burr-Brown true R2R chips.
 
ESS Tech patent (relates to the Sabre HyperStream modulator):
http://www.google.com/patents/US8350734
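For what the bitstream conversion actually looks like, here is a toy 1-bit, first-order modulator (vastly simpler than HyperStream or any shipping sigma-delta DAC, but it shows the idea: the multi-bit word is traded for a fast low-bit stream whose average tracks the input while the quantization error is fed back):

```python
def first_order_delta_sigma(samples, osr=64):
    """Toy 1-bit, first-order delta-sigma modulator.

    Each input sample (in [-1, 1]) is held for `osr` modulator clocks.
    The integrator accumulates the error between the input and the
    fed-back 1-bit output, which pushes quantization noise upward in
    frequency (real modulators are higher order and multi-bit).
    """
    integrator = 0.0
    bitstream = []
    for x in samples:
        for _ in range(osr):
            out = 1.0 if integrator >= 0.0 else -1.0  # 1-bit quantizer
            integrator += x - out                     # error feedback
            bitstream.append(out)
    return bitstream

bits = first_order_delta_sigma([0.25], osr=1024)
avg = sum(bits) / len(bits)
print(avg)  # → 0.25: the 1-bit stream's average tracks the held input
```

The in-band signal survives in the stream's local average; the cost is a mountain of shaped noise above the audio band that the analog stage must filter out.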
 
Jun 4, 2015 at 9:14 PM Post #5,649 of 6,500
July 4th we are having a meet here in Ottawa, at which will be a Yggy and my NAD M51. We are going to do some well-designed, multiple-subject, multiple-trial blind listening tests. I'm getting a Yggy by year's end, and while I suspect it will be a significant step up in sound quality, I can't help but think some of this talk about DS being garbage is a little exaggerated. Seriously, to listen to some of the posts you would think that using a DS DAC is essentially the equivalent of listening to the cheapest Emerson receiver with nasty Radio Shack homemade speakers from the 70s. Seriously, it sounds like people are suggesting some post-apocalyptic wasteland of sound degradation. While I have a DS DAC, frankly I don't build my identity around it, and I'm quite willing to accept that there are inherent flaws worth overcoming, but some posts here really make DS seem like a technology invented by degenerate freaks who failed community-college electronics, out to push garbage on people for crazy money. Enough already!
 
And yes, I did read the title of the thread.
 
Jun 4, 2015 at 10:17 PM Post #5,650 of 6,500
Will be interesting no doubt. Look forward to description of method and of course findings :popcorn:
 
Jun 4, 2015 at 10:40 PM Post #5,651 of 6,500
Will be interesting no doubt. Look forward to description of method and of course findings


Pretty straightforward: before the meet we are going to select a song by committee that will be used for everybody. The headphone will of course remain constant; the only part of the audio chain that will vary will be the DAC. We will correct as much as possible for volume and will ensure that a single engaging but comfortable volume level is used for all subjects.
 
The duration of each trial will be the same. Subjects will only be asked if they have a preference after each test cycle. We will use at least 5 subjects and 10 trials per subject. More would be ideal, but this is a meet, so reasonable time must be allotted to avoid distracted/bored subjects.
 
We will make sure that the Yggy has been well warmed up. I know the owner will have had it for at least two weeks by then, and he will be leaving it on the entire time. Yes, it will be off briefly during transit to the meet, but we will ensure that it is on for at least two hours before any testing begins. I can't imagine that being off for, say, 45 minutes after being on for a few weeks will skew the results too much.
 
I am not sure if I will use dummy tests where the DAC isn't changed. If so, we will tell subjects that no switching may occur in some trials and that they don't have to identify whether any pairing sounded different. Subjects will be able to indicate no preference as a valid result.
 
I also think that if time permits we will use another group for sighted listening tests and then compare the results. I am quite confident that sighted listening will yield more preferences for one DAC over the other; whether the difference will be significant is the question. My wife has an advanced degree in experimental psychology and many years of clinical experience, and I will have her run the results through statistical analysis. The university she works at (and that I attend) is an experimental one, so I'm quite sure that with our various contacts on campus we can get advice about how to analyze the results.
 
I will make sure that there is zero contact of any sort between the test subjects and myself, as I will be responsible for switching the DACs as needed. The order of DACs presented will be selected randomly, e.g. which DAC goes first in a pairing and whether the DACs are switched at all. Test subjects will be unable to see the rig at all; we will use a combination of a physical screen and a blindfold.
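On the "is it significant" question, here is a sketch of how 50 forced-choice trials (5 subjects x 10 trials) could be scored with an exact two-sided binomial test; the 32-of-50 figure is purely illustrative, and real analysis would also need to handle the "no preference" responses:

```python
from math import comb

def binomial_p_two_sided(successes, trials, p=0.5):
    """Exact two-sided binomial test: total probability, under the
    chance-only null, of any outcome at least as unlikely as the one seen."""
    def pmf(k):
        return comb(trials, k) * p**k * (1 - p)**(trials - k)
    observed = pmf(successes)
    return sum(pmf(k) for k in range(trials + 1)
               if pmf(k) <= observed * (1 + 1e-9))

# 5 subjects x 10 trials = 50 forced choices; suppose 32 preferred one DAC.
p_value = binomial_p_two_sided(32, 50)
print(round(p_value, 3))
```

With only 50 trials, even a fairly lopsided 32/50 split is near the edge of significance, which is why more subjects and trials would be ideal.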
 
Jun 4, 2015 at 11:01 PM Post #5,652 of 6,500
If you intend to do a blind test, it's best to do it with more than two DACs. With only two, it might just boil down to the listener's subjective preference for warm vs. analytical voicing. I would suggest two different well-implemented R2R DACs vs. two different well-implemented sigma-delta DACs; then you can compare the results to see whether the two R2R DACs get overall higher or lower scores/preferences than the sigma-delta DACs.
 
Jun 4, 2015 at 11:33 PM Post #5,653 of 6,500
http://articles.chicagotribune.com/1990-09-28/entertainment/9003210974_1_chips-converter-disc

A throwback to a 1990s article claiming that single-bit sigma-delta DACs are superior to traditional multi-bit R2R. Yet it seems even all modern sigma-delta DACs have become multi-bit. Why is that? PSRR/electronic random noise.


This thesis is probably meant for gurus like Mike Moffat or armchair DAC designers:
https://krex.k-state.edu/dspace/bitstream/handle/2097/13537/ThomasWestonBurress%202011.pdf?sequence=1
 
Jun 5, 2015 at 12:44 AM Post #5,654 of 6,500
  Now that my play is over, it is with blinding speed that I comment on the ENOB exchange seen in this thread several pages back. Now, I may need to reread it, but the emphasis seemed to be on more bits equals more dynamic range. Fair enough, but there is much more involved.
 
Analog audio has increasing distortion with increasing level; digital audio has increasing quantization error (which translates as well to distortion) with decreasing level. The former, I argue, is intuitive – the latter counterintuitive.
 
Just for the sake of a starting point, let us posit an analog signal-to-noise ratio of 72 dB. It is a commonly accepted fact of analog radio voice communication that weak signals well down into the noise can be clearly understood. It is also clearly possible to hear subtleties and spatial cues into the noise on good analog recordings. In a 16-bit system, there remain 4 bits worth of quantization. At this level, one has 4 bits of resolution, which is a 1 part in 16 error, or 6.25%.
 
The way the Yggy works, we have 20-bit time and frequency domain samples inserted between the originals, which leaves 8 bits worth of quantization, with a 1 part in 256 error, or just under 0.4%. A lot better. This is exactly why Redbook 16/44.1 does not and will never scratch my itch.
 
I have been referring to the DSP in the Yggy as the megaburrito filter. In a recent conversation, Jason pointed out to me that it is really a megacomboburrito filter, since it uniquely optimizes time and frequency domains. This is what causes Yggy users, on a variety of systems to report hearing subtleties previously not experienced.
...

 
Yeah, dynamic range was the main point, as I understood it.
 
I'm struggling to get my head around your explanation. I kind of get that you're saying that digital requires 'spare' bits for quantisation. Quantisation requires the same number of bits whether it's 16-, 20-, or 24-bit audio, so the fewer bits you have to begin with, the lower the percentage of 'usable' bits available. Am I correct?
 
So, what will your Yggy do with my 16/44.1 music files? Will it stuff them with 'false' data, aka upsampling?
 
Please keep your explanation simple and, if possible, use some analogies; I don't mind if you pretend I'm like...5 years old!
 
Thanks.
 
Jun 5, 2015 at 12:57 AM Post #5,655 of 6,500
I think the problem with all the theorists, and those who strictly follow the Nyquist theorem (44.1 kHz is enough to capture/reproduce everything in audio) or thereabouts (think of those who subscribe to Hydrogen Audio, etc.), is that they forget there's a little thing in life/engineering known as "headroom".
 
In theory, you can design a 1.8 m tall door for a 1.7999 m tall human being to use, but do you think that human can go through the door fast and smoothly? And that's not even taking into account thermal expansion of the door/wall/floor.
 
Likewise, 44.1 kHz makes things difficult for the digital/analog filters, and no filter in the world (not sure if that includes the Yggdrasil's) can filter perfectly without causing all sorts of unwanted distortions: the Gibbs effect, post-ringing, pre-ringing, phase distortion, etc.
 
There's a 1990s AES article in which they tried to push 48 kHz as the replacement (DVD-Audio) standard for CD audio, because it relaxes the requirements on the D/A filtering components.
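The pre/post-ringing can be seen directly in a textbook linear-phase brickwall design: a windowed-sinc lowpass (a generic design, not any particular DAC's filter) has symmetric energy before and after its main tap, and the closer the cutoff sits to Nyquist, the longer that ringing lasts.

```python
from math import cos, pi, sin

def windowed_sinc_lowpass(num_taps, cutoff):
    """Linear-phase FIR lowpass: an ideal sinc truncated by a Hann window.
    `cutoff` is normalized to the sample rate (0.5 = Nyquist)."""
    mid = (num_taps - 1) / 2
    taps = []
    for n in range(num_taps):
        t = n - mid
        ideal = 2 * cutoff if t == 0 else sin(2 * pi * cutoff * t) / (pi * t)
        window = 0.5 - 0.5 * cos(2 * pi * n / (num_taps - 1))
        taps.append(ideal * window)
    return taps

# A tight cutoff, roughly like passing 20 kHz while stopping 22.05 kHz:
taps = windowed_sinc_lowpass(101, cutoff=0.45)
center = len(taps) // 2
pre_ring = sum(abs(v) for v in taps[:center])
# Linear phase means the ringing is symmetric: half of it arrives
# BEFORE the impulse's main peak, i.e. the "pre-ringing".
print(pre_ring > 0.1 * abs(taps[center]))  # → True
```

Moving the same passband edge up against a 48 kHz Nyquist widens the transition band, allowing a shorter filter with less ringing; that is the "headroom" argument in filter terms.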
 
As for the bits part... think of it as making bread. By rights, if you use 16 kilograms of yeast, you should make 16 kilograms of bread. However, during the process you lose some of the yeast somewhere/somehow, and the end result is 14 kilograms (or thereabouts) of bread: not bit-perfect. You would want more yeast (bits) to begin with so as to end up with the full 16 kilograms of bread.
 
There's this famous baker known as Sabre: he takes in 24 kilograms of yeast but only produces about 15 kilograms worth of bread.
 
