Head-Fi.org › Forums › Head-Fi Special Forums › Head-Fi Bloggers › Jason Stoddard › Schiit Happened: The Story of the World's Most Improbable Start-Up

Schiit Happened: The Story of the World's Most Improbable Start-Up - Page 91

post #1351 of 14430
Thread Starter 
Quote:
Originally Posted by bowerymarc View Post
 

wow, that's an incredibly creative rewrite of digital audio history... but not particularly accurate.  Here are some readily verifiable facts:

 

1. The 44.1 kHz sample rate was chosen well before CDs were invented, during the first era of consumer digital recorders, which recorded on videocassettes. That rate turns out to let you hold 3 stereo samples per video line and still gives you a little more than a 2 kHz transition band for the antialiasing filters. Red Book adopted 44.1 kHz/16 bits because Sony won the first format war (Philips wanted 44.056 kHz), and they wanted to leverage all the recordings already made in that format.
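As a quick sanity check on that video-line arithmetic: the field rates and active-line counts below are the standard NTSC/PAL figures for the early PCM adaptors, and the 3-samples-per-line packing is the claim above.

```python
# Arithmetic check of the video-line derivation of 44.1 kHz.
ntsc_rate = 60 * 245 * 3   # 60 fields/s x 245 active lines/field x 3 samples/line
pal_rate = 50 * 294 * 3    # 50 fields/s x 294 active lines/field x 3 samples/line
print(ntsc_rate, pal_rate)  # the same rate, 44100, falls out of both standards
```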

 

2. Those devices had a choice of 14 or 16 bits, as there were two competing formats, from Sony and Philips. The Philips format was 14 bits, which, by the way, was still significantly better (in theory) than any analog tape recorder of the time.

 

3. Before these recorders, digital audio recorders were already being used professionally (a popular sample rate back then was 50 kHz). Apogee Electronics got their start back then making high-quality antialiasing filters for some of these machines.

 

Most of these used SAR DACs and ADCs, which was really the only way to get >14 bits. There is no way to match resistors closely enough to do R-2R DACs at that resolution.

Regardless, both types suffer from large amounts of differential nonlinearity, well over 1 LSB, which is much more audible and objectionable than large amounts of integral nonlinearity.

 

Digital oversampling filters were introduced not to cut cost (those chips were quite expensive at the time) but to improve the whole process of antialiasing.  It allowed the freedom for the analog filters to have much more desirable characteristics, like lower Q (less ringing), better headroom, less critical component matching requirements, etc.

 

Sigma-delta converter ICs were introduced in the '80s not to cut cost (the first ones, manufactured by dbx, were very expensive) but to improve audio quality. They had superior specs and sound, due mostly to the lack of any measurable differential nonlinearity and a very smoothly shaped integral nonlinearity.

 

It turned out that, once the theory of sigma delta converters was better understood by the engineering community, the process to manufacture them could be made far cheaper than high-resolution SAR converters.  But part of that is just the march of technology, as it's possible to get very good, inexpensive SAR converters these days, based on some of the same technological advances that were driven in part by development of S-D converters.

 

So, the cost reduction followed the innovation, but wasn't the impetus for it. These advances were done by engineers dedicated to improving the audio experience (even if they didn't always succeed).

 

The main downside with most current SD converters is that they do not have very good accuracy at DC (though there are some around that do), and they have a few ms of latency due to digital filtering, which can interfere with industrial uses such as motor-control applications where they sit in the feedback loop.

 

Bottom line is, for conversion to/from analog, one needs antialiasing and anti-imaging filters and a sampler/quantizer. Almost from the beginning, the filtering process has been a combination of analog and digital filtering. S-D converters are a result of looking at the theory and coming up with a much more elegant solution, though one that, because of the heavy-duty math involved, isn't exactly intuitive.

 

I'm still scratching my head at what bitperfect is supposed to mean in this context.  I can't come up with any logical explanation.  Analog signals do not have any 'real bits' or 'intrinsic bits' hidden inside them...

 

Perhaps I should have titled it "an irreverent history of consumer digital audio."

 

Aannnd--I went through all of this with Mike in the late 80s and early 90s, and I stand by the cost-cutting rationale. All you had to do was compare the prices of, say, a PCM67 and a PCM58, and there you go. Or listen to a sales pitch from a field engineer in those days--"cheaper, easier" was what they said. Or look at the complexity of an early brickwall filter (not to mention the matching/tolerances/etc needed to do one well)--manufacturers absolutely wanted to get rid of those things.

 

To be clear, I'm not lobbying for the return to some imaginary perfect past. There has been great progress in digital sound, on all fronts, and without digital filters and sigma-delta, we wouldn't have $0.80 chips to put into phones.

post #1352 of 14430
Quote:

Originally Posted by Jason Stoddard View Post

 

Funny convo at TheShow

 

Lol, I remember that.  :D  We got a pretty good laugh from that one.


Home of the Liquid Carbon, Liquid Crimson, Liquid Glass, Liquid Gold and
Liquid Lightning headphone amplifiers... and the upcoming Liquid Spark!

post #1353 of 14430

I agree that the term "bitperfect" is being used really imprecisely here. It makes sense if you're talking about ASRCs on the inputs of DACs. You could claim that ASRCs are not bitperfect because they usually do not present the bits as they were given at the input to the DAC chip, and that is exactly how they are designed to work. But the term gets fuzzy when you talk about what is presented to the DAC.

 

With a ladder DAC running at the base sample rate, I guess you can say the samples are bitperfect as they are presented to the ladder array, but then you have the high-order analog brickwall filter problem. The analog brickwall problem is far worse than anything bitperfectness might solve.

 

You can say that a DAC with an oversampling chip in front of it (as virtually every high-quality DAC in certain decades worked) may not have received the samples at the ladder array as they were at the input of the box, because the digital oversampling filter may output different samples at the time points occupied by the original samples. But then again, maybe not. There is nothing in the math of oversampling that prevents the original samples from going through unscathed, accompanied by 7 of their closest friends for an 8x oversampler. In fact, most oversamplers zero-fill the interpolated positions so that the FIR filter that does the upsampling afterwards can be simplified.
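The "goes through unscathed" property is easy to demonstrate: zero-stuff by L, then filter with an interpolation FIR whose taps are exactly zero at nonzero multiples of L (an "L-th band" or Nyquist filter; a truncated sinc qualifies), and the original samples reappear untouched at every L-th output. A minimal numpy sketch, not any particular DAC chip's filter:

```python
import numpy as np

L = 8                              # oversampling ratio
x = np.cos(0.1 * np.arange(16))    # some original samples

# Zero-fill: drop each original sample into every L-th slot,
# with L-1 zeros in between.
up = np.zeros(len(x) * L)
up[::L] = x

# Interpolation FIR with exact zeros at nonzero multiples of L
# (an L-th-band / Nyquist filter): a truncated sinc does this.
n = np.arange(-4 * L, 4 * L + 1)
h = np.sinc(n / L)

y = np.convolve(up, h)
delay = 4 * L                      # group delay of the symmetric filter

# Every L-th output sample is an original sample, untouched.
recovered = y[delay:delay + len(x) * L:L]
print(np.allclose(recovered, x))   # True
```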

 

From a linear systems POV, there has to be a brickwall filter at half the sampling rate. What the various different schemes do is to split the workload of the brickwall filter across analog and digital domains so that it is a more tractable problem. How well that split works depends on how closely a system hews to its linearity. And yes, sometimes that saves money. Cheaper is not the same as bad, and expensive is not necessarily good.

 

And for the delta-sigma DAC architectures, one could argue they don't preserve bitperfectness, but since the operation of the DAC is intrinsically tied to how it works with the samples, I'm not sure this is such a meaningful thing to say. The chip is presented with the original samples at its input. How far into the chip do you need to carry bitperfectness for bitperfectness to matter? How much wood could a woodchuck chuck?

 

Finally, the real question is whether there is any sound quality reason for having the same samples at the input as at some arbitrary point within the DAC chip. I think that if there is, then there needs to be a quantitative metric for what this means beyond a WIBNI (Wouldn't It Be Nice If ...) philosophy. When can the DAC start to change the samples?

 

The math of reconstruction does not preclude the original sample values from the final waveform, but that is tautological. If the DAC were at all accurate, then the samples at the same relative time points of the output analog waveform would have to be identical to the input (modulo a scaling gain factor).

post #1354 of 14430
Quote:
Originally Posted by azteca x View Post
 

Ashton Kutcher as Jason Stoddard.

 

So now I'm trying to picture the Schiit team as the cast of That '70s Show... Jason as Kelso (maybe more Eric?), Eddie as Hyde, Rina as Jackie? Mike would be... uh, Red?

 

 

Quote:
Originally Posted by Jason Stoddard View Post
 

Don't get excited--if we do it, it will be STUPID expensive. I doubt if we'd replace many studio ADCs with it. It would really be for nutcases.

 

I hear those Daft Punk guys are kinda crazy for gear...

 

 

 

 

post #1355 of 14430
Thread Starter 
Quote:
Originally Posted by AndreYew View Post
 

There is nothing in the math of oversampling that prevents the original samples from going through unscathed, accompanied by 7 of their closest friends for an 8x oversampler. In fact, most oversamplers zero-fill the interpolated positions so that the FIR filter that does the upsampling afterwards can be simplified.

 

...

 

The math of reconstruction does not preclude the original sample values from the final waveform, but that is tautological. If the DAC were at all accurate, then the samples at the same relative time points of the output analog waveform would have to be identical to the input (modulo a scaling gain factor).

 

I'd love to see some examples of other audio digital filters with closed-form solutions (that is, that preserve the original samples, and act as a true interpolator.) As far as I know, and as far as Mike knows, there aren't any, except our algorithm. 

post #1356 of 14430
Quote:
Originally Posted by Jason Stoddard View Post
 

 

Perhaps I should have titled it "an irreverent history of consumer digital audio."

 

Aannnd--I went through all of this with Mike in the late 80s and early 90s, and I stand by the cost-cutting rationale. All you had to do was compare the prices of, say, a PCM67 and a PCM58, and there you go. Or listen to a sales pitch from a field engineer in those days--"cheaper, easier" was what they said. Or look at the complexity of an early brickwall filter (not to mention the matching/tolerances/etc needed to do one well)--manufacturers absolutely wanted to get rid of those things.

 

To be clear, I'm not lobbying for the return to some imaginary perfect past. There has been great progress in digital sound, on all fronts, and without digital filters and sigma-delta, we wouldn't have $0.80 chips to put into phones.

Well, if you qualify it by 'consumer', you're probably not talking about the audiophile companies then, who were some of the first adopters of that technology, and did it solely because they were audibly superior. Consumer audio is mass market, so of course they want a cheap price. One of the first consumer SD DACs was made by Philips. Of course, they were their own best customer, so they could crank out 1M of those chips and make them cheap (per piece--not factoring in the huge front-end investment). But the reason they invested megabucks in that technology was because the result was superior in every way to the preceding technology. I guess I'm saying, I still disagree that cost cutting was the primary motivation.

 

Btw, I'm really enjoying the series.  Especially the tales of inevitable bugs and glitches along the way... been there!

post #1357 of 14430
Quote:
Originally Posted by Jason Stoddard View Post

 

I'd love to see some examples of other audio digital filters with closed-form solutions (that is, that preserve the original samples, and act as a true interpolator.) As far as I know, and as far as Mike knows, there aren't any, except our algorithm. 

 



I'm under the impression that the Arc Prediction method employed by XXHighEnd is similar in this regard, and this is one more reason why I think the Yggy will be truly special: similar oversampling, but no beast of a computer required, all done on a DSP in the DAC.
post #1358 of 14430
Quote:
Originally Posted by bowerymarc View Post
 

Well, if you qualify it by 'consumer', you're probably not talking about the audiophile companies then, who were some of the first adopters of that technology, and did it solely because they were audibly superior.  

 

On the other hand, a very respectable number of hi-end companies, especially Stateside but not exclusively, have never adopted 1-bit topologies. Some adopted them briefly, only to return to Good Old Fashioned Multi-bit. A notable example is Naim, which remained steadfastly on the multibit side and has only very recently (reluctantly, I am sure) employed delta-sigma chips in its latest digital gear. There must have been very good reasons for this, and certainly not those of economics...


Edited by rocksteady65 - 6/12/14 at 8:13pm
post #1359 of 14430
Quote:
Originally Posted by Jason Stoddard View Post
 

 

I'd love to see some examples of other audio digital filters with closed-form solutions (that is, that preserve the original samples, and act as a true interpolator.) As far as I know, and as far as Mike knows, there aren't any, except our algorithm. 

 

I'm not sure I understand what you mean by closed-form? The theoretically perfect sinc interpolator (i.e., the perfect brickwall filter) is a mathematically closed-form solution: you can write it out exactly without approximation. It will also fulfill your criteria for preserving the original samples, since it perfectly interpolates the original waveform. I'm probably missing something here ...
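For reference, that closed-form interpolator is the Whittaker-Shannon formula, and it can be evaluated directly once you truncate to a finite run of samples. The tone frequency and truncation length below are arbitrary demo choices; note that at the sample instants every sinc term but one vanishes, which is the "preserves the original samples" property.

```python
import numpy as np

fs = 8000.0                        # sample rate for the demo
f0 = 1000.0                        # a tone safely below fs/2
n = np.arange(-200, 201)           # a (necessarily finite) run of samples
x = np.sin(2 * np.pi * f0 * n / fs)

def ws_interp(t):
    """Whittaker-Shannon reconstruction: x(t) = sum_n x[n] sinc(fs*t - n)."""
    return float(np.sum(x * np.sinc(fs * t - n)))

# At a sample instant the sum collapses to the stored sample: the
# interpolator passes the original samples through essentially exactly.
print(ws_interp(5 / fs) - x[n == 5][0])   # ~0

# Between samples, the truncated sum already tracks the ideal waveform.
t = 1.3 / fs
print(ws_interp(t), np.sin(2 * np.pi * f0 * t))
```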

post #1360 of 14430
Thread Starter 
Quote:
Originally Posted by AndreYew View Post
 

 

I'm not sure I understand what you mean by closed-form? The theoretically perfect sinc interpolator (i.e., the perfect brickwall filter) is a mathematically closed-form solution: you can write it out exactly without approximation. It will also fulfill your criteria for preserving the original samples, since it perfectly interpolates the original waveform. I'm probably missing something here ...

 

You're missing the implementation. How many theoretically perfect sinc interpolators have been implemented? None. 

 

Quote:
Real-time filters can only approximate this ideal, since an ideal sinc filter (aka rectangular filter) is non-causal and has an infinite delay, but it is commonly found in conceptual demonstrations or proofs, such as the sampling theorem and the Whittaker–Shannon interpolation formula.
 

From Wikipedia--yeah, I know, not the most reliable source--but you can confirm it with other sources like Multirate Digital Signal Processing by Crochiere and Rabiner; a decent overview is also available online at http://www.labbookpages.co.uk/audio/firWindowing.html
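For anyone following along, the windowed-sinc approach those references cover fits in a few lines of numpy: truncate the ideal (infinite, non-causal) sinc, delay it to make it causal, and taper it with a window. The length, cutoff, and window choice here are illustrative, not anyone's production filter.

```python
import numpy as np

N = 127                          # filter length; odd gives integer delay
fc = 1 / 16                      # cutoff as a fraction of fs
m = np.arange(N) - (N - 1) / 2   # tap indices centered on the group delay
h = 2 * fc * np.sinc(2 * fc * m)     # ideal lowpass, truncated to N taps
h *= np.blackman(N)                  # window tames the truncation ripple
h /= h.sum()                         # normalize to unity DC gain

H = np.abs(np.fft.rfft(h, 4096))     # magnitude response on a fine grid
f = np.fft.rfftfreq(4096)            # frequency axis, cycles/sample
print(H[0], H[f > 0.12].max())       # passband gain ~1, stopband tiny
```

The Blackman window bounds the stopband at roughly -74 dB; a longer N narrows the transition band but does not lower that floor, which is why the window choice and the tap count are separate knobs.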

 

I'll defer to Mike (baldr) if he wants to get into more detail, as I am not the z-domain expert at Schiit.

post #1361 of 14430
Quote:
Originally Posted by Jason Stoddard View Post
 

Yep, it is. 2 chassis, about 100 lbs or so of gear, per stereo ADC.

 

Maybe I can talk him into doing a modern version.

 

Don't get excited--if we do it, it will be STUPID expensive. I doubt if we'd replace many studio ADCs with it. It would really be for nutcases.

 

Ahhhhh yeeeeeeah.  I'd be so stoked.  I've been waiting to see as much progress on ADCs as on DACs.  You got the Benchmark ADC1 (which measures great and is probably fine but is fairly old), Lynx Hilo, Antelope etc.  I have an old Digidesign interface that ain't bad but I'd love to have something squeaky clean for vinyl transfers and the like.  Mytek is working on a new ADC according to the interview from this week over on Audiostream.

post #1362 of 14430
Take a look at the Ayre QA-9; it's supposed to be a great performer for vinyl transfers.
post #1363 of 14430
I am interested to know how the Yggy would fare against an oversampling DAC like the Auralic Vega. We're talking about a huge price difference, and implementation plays a big role, even while you say that R2R is better than sigma-delta... I don't consider my Vega an inferior product to anything I have heard to date. I have tried the Antelope Zodiac Platinum, which is an even more expensive DAC than the Vega, and couldn't exactly understand what I was hearing, so in that respect my Vega is a superior product to the Platinum, which is made by a company that makes studio gear. You've got me curious about that Yggy DAC, and I wouldn't mind doing a DBT on both DACs via the Ragnarok.
post #1364 of 14430
Quote:
Originally Posted by XVampireX View Post

I am interested to know how the Yggy would fare against an oversampling DAC like the Auralic Vega. We're talking about a huge price difference, and implementation plays a big role, even while you say that R2R is better than sigma-delta... I don't consider my Vega an inferior product to anything I have heard to date. I have tried the Antelope Zodiac Platinum, which is an even more expensive DAC than the Vega, and couldn't exactly understand what I was hearing, so in that respect my Vega is a superior product to the Platinum, which is made by a company that makes studio gear. You've got me curious about that Yggy DAC, and I wouldn't mind doing a DBT on both DACs via the Ragnarok.

I am pretty sure you speak for us all with that statement. :D

post #1365 of 14430
Quote:
Originally Posted by Jason Stoddard View Post
 

 

You're missing the implementation. How many theoretically perfect sinc interpolators have been implemented? None. 

 

From Wikipedia--yeah, I know, not the most reliable source--but you can confirm it with other sources like Multirate Digital Signal Processing by Crochiere and Rabiner; a decent overview is also available online at http://www.labbookpages.co.uk/audio/firWindowing.html

 

I'll defer to Mike (baldr) if he wants to get into more detail, as I am not the z-domain expert at Schiit.

 

Right, so we come back to the question of what bitperfectness gets you. A sinc interpolator's time-domain window doesn't have to be infinite, because as soon as the artifacts due to the approximation fall sufficiently below the noise floor, it's about as good as perfect in the real world. Obviously, you get to determine how far below that noise floor is good enough for you. And the window can't be infinitely long anyway, since all playback media are finite in length.
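Putting rough numbers on "sufficiently below the noise floor": Kaiser's empirical windowed-FIR design formulas turn a target stopband attenuation and transition width into a window shape and a tap count. The 100 dB and 0.05 targets below are purely illustrative (100 dB sits below a 16-bit quantization noise floor of roughly 96-98 dB).

```python
import numpy as np

# Kaiser's empirical formulas: pick how far down the truncation
# artifacts must sit, read off the window shape (beta) and the
# number of taps needed.
A = 100.0                          # target stopband attenuation, dB
delta_f = 0.05                     # transition width, fraction of fs
beta = 0.1102 * (A - 8.7)          # Kaiser beta (valid for A > 50 dB)
taps = int(np.ceil((A - 7.95) / (2.285 * 2 * np.pi * delta_f))) + 1
print(beta, taps)                  # ~10.06, 130
```

Tightening delta_f or raising A grows the tap count, which is exactly the "you get to determine how far below the noise floor" trade-off in concrete form.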

 

Now you may object to using an approximation, but if you look at the math of point-sampled systems, the only correct reconstruction filter is a sinc filter. Everything else is an approximation, so I'm curious what approximations you have chosen to live with, and why.
