Schiit Yggdrasil V2 upgrade Technical Measurements
Jun 30, 2018 at 2:59 AM Post #61 of 203
Just to make sure anyone who's trying to follow this didn't miss it, I actually did detail the linearity test settings from my second set of linearity measurements near the bottom of this post (before the very last graph). To save you the time of looking for it, here's what I said about those settings:

By the way, @amirm, here's the bandwidth-limited setting I'm using in Fig.4 and Fig.6:
  • Sequence Mode-->Bandpass Level Sweep
  • Change the default Selectivity from 1/24 octave to Window width.
  • Go into Advanced Settings-->Signal Acquisition and Analysis-->Settling. Here, change Algorithm from Flat to Average, and change the Averaging Time from the default 200.0 ms to 1s.
That's it. I just timed myself with a stopwatch. It took just over six seconds to make the changes.
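
For anyone who wants the intuition behind those settings without an APx on the bench, here's a rough numpy sketch of what a narrowband level readout buys you. This is purely illustrative -- the sample rate, tone frequency, and noise level are invented, and I'm not claiming this is what the APx does internally:

import numpy as np

fs = 48000                        # sample rate in Hz (assumed for the example)
f0 = 1000                         # generator tone frequency in Hz
t = np.arange(fs) / fs            # 1 s of samples (cf. the 1 s averaging time)

# A -120 dBFS tone buried in broadband noise whose total RMS is -110 dBFS
x = 10 ** (-120 / 20) * np.sqrt(2) * np.sin(2 * np.pi * f0 * t)
x += 10 ** (-110 / 20) * np.random.randn(fs)

# Wideband RMS detector: dominated by the noise, reads roughly -110 dBFS
wideband = 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# Narrowband detector: correlate against the known generator frequency, which
# rejects everything outside a ~1 Hz band around the tone
i = 2 * np.mean(x * np.sin(2 * np.pi * f0 * t))
q = 2 * np.mean(x * np.cos(2 * np.pi * f0 * t))
narrowband = 20 * np.log10(np.hypot(i, q) / np.sqrt(2))

print(f"wideband:   {wideband:7.1f} dBFS")    # ~ -110 (noise-limited)
print(f"narrowband: {narrowband:7.1f} dBFS")  # ~ -120 (the actual tone level)

The longer averaging time matters for the same reason: integrating over 1 s instead of the 200 ms default narrows the effective noise bandwidth by roughly a factor of five, which is presumably why the readings steady up at the lowest levels.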

And here's a link to the project file which contains a linearity test of the Yggdrasil 2 from its balanced outputs (and balanced digital input) that I just ran:

https://www.dropbox.com/s/leqqie8v8zl4mhd/Project.approjx?dl=0

If you have the APx software to load this project file, you'll see my notation for the measurement reads something like:

"Ygg2_digi bal out_ana bal in_-140-0dBFS_stps100"
When describing outputs and inputs, I usually notate from the standpoint of the analyzer. So in the above I'm using digital balanced out and analog balanced in (on the analyzer) -- which, on the Yggdrasil 2, would be digital balanced input and analog balanced output.
 

Attachments

  • APX5555 Linearity Test-2.png (47.1 KB)
Jun 30, 2018 at 3:06 AM Post #62 of 203
didn't mean to start a discussion about how to cheaply host larger files, though. That's off-topic. I only wanted to relay the proclaimed reason.

Well, yes and no. The reason I went there was this quote:

and one of his reasons not to publish it was that it's apparently quite a big file (50 MB or so?) and that the hosting costs are therefore noteworthy.

If the offered reason that he cannot share or disclose his config file is that it would be prohibitively expensive to do so, that's completely false IMO, and I think I've shown that. I'm not an EE, so I won't argue the AP unit's measurement numbers and methodology, but I do have a bit of IT experience.

As others have mentioned, and I agree completely: if there is any interest in reaching the truth here, then you have to describe your methodology in a way that others can reproduce the test conditions. This guy is evasive. On the other thread I learned the term Sea-Lioning, and it seems appropriate to me. In the pursuit of truth, and for the mutual benefit of the community members who are interested, there should be openness and transparency.

Remember math class in school? You had to show your work to get any credit. Even if you had the correct answer, you had to demonstrate an understanding of the principles by working out the problem methodically, to show that you understood how to use the tools and arrive at the conclusion. In science, that opens you up to scrutiny. IMO there's no shame in being wrong. We'll all benefit and learn something from the discussion.
 
Jun 30, 2018 at 4:45 AM Post #63 of 203
jude said:
Just to make sure anyone who's trying to follow this didn't miss it, I actually did detail the linearity test settings from my second set of linearity measurements near the bottom of this post (before the very last graph)...
Great work @jude

I sent @amirm a follow up PM pointing him to your post. It will be interesting to see how he takes up the challenge.
 
Jun 30, 2018 at 5:14 AM Post #64 of 203
...You assume that he does apply bandwidth limiting to achieve the optimized results. To me it looks like he's applying the inverse of a smoothed version of the deviation of the analyzer's DAC. Does the APx555 have such a feature, where you can define a mathematical formula to apply to the measurement result? The deviations from perfect linearity in his graph seem to correspond inversely to deviations from an idealized curve (a regression, I suppose) approximating the sample points below -100 dB in his unoptimized result.

Maybe I'm completely wrong about this, but if I'm right, he would be applying the inverse of the analyzer's DAC imperfection to every measurement of other DACs (because he attributed those deviations to the analyzer's measuring components, not its generating components).
[Edit: I don't know how likely that is, given that he is clearly aware of that possibility when he says "What this means is that the AP has a positive error in "linearity" (It is more than that but let's go with it) at levels less than 100 dB. As such, you can not, let me repeat, NOT measure any DACs with it as you will be showing the sum total error of both the DAC and ADC measurement errors in the Audio Precision." That would be wonderfully ironic.]

But, again, just spitting out hunches.

@Alcophone, regarding the assumption that he's bandwidth-limiting, here's something @amirm posted on 2018-06-28, (a) confirming that his linearity measuring method does indeed involve the removal of noise, and (b) suggesting that doing so is what he's been saying to do all along (when anyone following this knows that is most certainly not the case):

amirm at audiosciencereview said:
I want to emphasize again that substantial amount of noise reduction exists in my measurements. Levels are attenuated by a whopping 50 dB on either side of the frequency of the generator. Without it, you just measure the noise, not the signal...

Again, in response to his 2018-06-08 criticism of our method, which measured everything (noise and THD included) in the audioband, that's what I did in my second set of linearity measurements on 2018-06-15 (after having discussed the topic at length with the team at Audio Precision). Of course, @amirm criticized even those measurements roundly. And after someone pointed out to him that I had contacted AP to discuss this, he said:

amirm at audiosciencereview said:
Not quite. I explained how my measurements were made to AP folks after pointing out to Jude that he was doing it wrong. They then worked with Jude on an implementation even though I had told Jude that the right solution on APx555 requires much more research. I have explained all of this to AP folks this morning again and issues in what Jude has published.

In a nutshell, they have set up a test where all the distortions and aberrations of the DAC are wiped clean. And they then declare: "oh look, it is linear down to -130 db" or whatever. Well duh. Of course if you remove all the noise and distortion from a DAC, it then looks accurate. Why bother running such a test when in real usage of the DAC no such filtering exists. Talk about running off with measurements with no thought of what they really mean and what benefit and correlation there is with audibility.

On top of that Jude continues to block all of my posts on head-fi. So no way for me to convey this information there. This is not the way we converge to a consensus.

So my advice remains: please wait to draw any conclusions until I remeasure the device with my APx555. Until then, my data remains 100% valid in pointing out serious issues in the performance of Yggdrasil.

And let's not forget this gem:

amirm at audiosciencereview said:
...I can pull rank on Jude and even AP folks on my understanding of such topic. But I am not. So let's move on such tactics.

He then blamed me for asking AP the wrong questions:

amirm at audiosciencereview said:
No, they [Audio Precision] do know how to "set up the machine." Question is, what are they being asked to set up?

What they were asked to help setup was detection of level while eliminating all distortion and noise. This is NOT what we want. If the device creates X amount of noise and distortion on top of Y signal, we want to measure both.

What they needed to ask instead was how to replicate this measurement I made on AP2522 but on APx555

There he goes with his insistence that the 25-year-old SYS2522 is better suited to measuring this than the current flagship APx555, and that the only right way is to replicate that particular analyzer.

amirm at audiosciencereview said:
...The test I ran [on the Yggdrasil 2] takes into account distortion and noise and hence is able to differentiate between DACs easily. Theirs does not. I know because I replicated their method and it would no longer do anything useful.

Remember that statement, because it won't be long before I get back to that.

When asked if he's suggesting he's the only one who knows how to do this properly, he responded (again, in that same post):

amirm at audiosciencereview said:
I am currently the only one who:

1. Owns both the 2522 and APx555 analyzers
2. Have a large body of results and many DACs on my bench to evaluate using both analyzers

It has taken me good bit of effort to replicate the way the 2522 measured linearity on APx555 with the above tools.

The 2522 has a cascade of analog analyzer and digital analyzer. The APx555 is only digital. So there is no 1:1 relationship between how the two run.

Are you having as hard a time keeping up with his statements as I am? I think that's his intent, because he knows most who read it there will take him at his word and not check it out for themselves. For those who might check it out for themselves, I think he intends to keep moving the shells around so that it's harder to follow. Anyway, here's a bit of summarizing:
  • He said the linearity test he ran and posted from his SYS2522 (the older analyzer) "takes into account distortion and noise and hence is able to differentiate between DACs easily. Theirs does not." This would suggest (along with his many other statements on the matter) that his SYS2522 was not bandpass filtering like my second set of linearity measurements.
  • He said that my second set of linearity measurements (and the subsequent supporting nested FFT's) -- regardless of who helped me, none of whom he'd have you believe are as qualified as him (which I think is a reasonable interpretation of his "pull rank" comment) -- were useless because of that bandpass filtering.
  • On 2018-06-08, he posted a loopback measurement from the APx555, saying it had "taken me good bit of effort to replicate the way the 2522 measured linearity on APx555 with the above tools." Again, that loopback graph indicated likely bandpass filtering (which we now know to be true).
More quotes from @amirm:

When one of the forum members at his site asked him to explain what we were doing to "wipe out distortion and aberrations of the DAC" (in response to @amirm's comments), he said:

amirm at audiosciencereview said:
Sure.

The problem is challenging. They are attempting to measure linearity down to -140 dB. As you and I both know, there is no DAC in the world that produces meaningful signal at anything close to those levels. But importantly, there is no ADC in any analyzer that can do the same. Yet they tried anyway based ironically on advice I gave to AP. That if you use aggressive filtering of noise and distortion, you can indeed eliminate a lot of variability.

So they did that and took that to the N'th degree...

This suggests we're doing something wrong (of course) or being deceptive (of course). He continued:

...We see that the filter has completely removed all traces of distortion and noise. So of course if you then measure this, it shows that the DAC is doing well.

But that is NOT what we hear out of the DAC. Nor what it electrically produced. We are cleaning up the output of the DAC and then measure it, then declare it a winner.

The "trick" here is to use only the filtering necessarily for the ADC to not have its noise and distortion profile be below that of the DAC under test. This can only be done through a bunch of trial and error which I went through on APx555 analyzer. My older 2522 "happened" to do this well out of box. I tried many things including changing the excitation signal, settling parameters for measurements, custom filtering, etc. I finally found something that while may not be identical to 2522, is very comparable.

Summary
Any filtering in the analyzer cleans both the DAC and ADC output. It is tempting to select an exceptionally narrow filter to get rid of all noise and distortion as to even show accurate values to -140 dB. But we know such data is fictitious as we don't know how to build such DACs. By carefully selecting the filtering and analyzer setting however, we can get reasonable results to about -120 dB. Any attempt to go beyond that in my testing will lead one into a ditch.

P.S. The FFT method is even a more extreme case of such filtering as there, you get to look at one individual spike and ignore all other noise and distortion characteristics.

It seems to me he's strongly suggesting we're attempting to deceive. He calls what we're doing a "trick." He calls it "fictitious." He says by posting my second set of linearity measurements and my later FFT plots (also made with the help of another measurement engineer) we "lead one into a ditch."

Note what he said about the FFT's in criticizing our methods as filtering out noise and distortion:

amirm at audiosciencereview said:
The FFT method is even a more extreme case of such filtering as there, you get to look at one individual spike and ignore all other noise and distortion characteristics.

Yes, @amirm, that was exactly the point. A point you later use yourself in this graph, in discussing the linearity of the Benchmark DAC3:

[Image: DAC3 FFT 200 Hz unfiltered.png]

So, let me get this straight: when I used the FFT spectrum view to drill down on the test signal with the Yggdrasil 2 as another measure of linearity, I'm being deceptive. But when you do the exact same thing (only three days later) to illustrate another measure of linearity with the Benchmark DAC3, it's perfectly legit. I guess this is you pulling rank?

By the way, of that technique, @amirm said:

amirm at audiosciencereview with emphasis by me said:
We want to use time domain analysis here because AP runs in automated mode there. So we can make the 60+ measurements in linearity mode or whatever we like. Trying to do that in FFT manually gets very tedious. Hence the reason we are interested in proper filtering for measurements in time domain.

It's not that tedious, @amirm, and the switching of levels in FFT view can be automated. Try APx's Nesting feature.
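
To show what I mean (purely a simulation, not the actual APx Nesting API -- the noise floor and level steps here are invented), sweeping the generator level in a loop and reading back just the tone's FFT bin at each step is conceptually all the automation amounts to:

import numpy as np

fs = n = 48000                  # 1 s capture -> 1 Hz FFT bins
f0 = 997                        # tone frequency chosen to land exactly on a bin
t = np.arange(n) / fs
noise_dbfs = -110               # assumed broadband DUT + analyzer noise

for level in range(0, -141, -20):           # generator steps, 0 to -140 dBFS
    x = 10 ** (level / 20) * np.sqrt(2) * np.sin(2 * np.pi * f0 * t)
    x += 10 ** (noise_dbfs / 20) * np.random.randn(n)
    # Read only the tone's bin; every other bin (noise, harmonics) is ignored
    bin_rms = np.abs(np.fft.rfft(x)[f0]) * 2 / n / np.sqrt(2)
    measured = 20 * np.log10(bin_rms)
    print(f"set {level:5d} dBFS -> read {measured:8.2f} dBFS "
          f"(error {measured - level:+.2f} dB)")

Note how the readback error grows once the tone approaches the per-bin noise -- which is roughly the behavior both sides have been arguing about at the bottom end of these plots.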

Continuing...

When asked on his forums why his claims may stretch credibility, he responded:

amirm at audiosciencereview with emphasis by me said:
Why are you not skeptical of the motivation on behalf of Jude? You think he is going into this with just a search for the truth? If so, why ban me from responding to him? Or refusing to send me his AP project files?

There is only one truth here: he wants to help Schiit by invaliding my test results. Which is fine. But he needs to replicate my tests, not invent his own and say it still represents the same thing.

As others have pointed out, he was asked on his forum if he would post his project file (which he has repeatedly all but demanded I share). Among the key reasons he gave for not producing project files was file size; he stated they can exceed 50MB. Even if it were 50MB in size...really? The project file I uploaded, which includes the settings and one linearity measurement of the Yggdrasil 2, is 79.9 KB. If you do have a project file that contains large measurements (like 1.2M-point FFT's), then, yes, the project file can be quite large. But if sharing your settings is the main point, then delete the measurements and keep just the settings intact. The file should be quite small then.

But, again, even if it is 50MB...seriously? If you don't want to share it, that's fine. That's completely up to you. But don't throw up a silly excuse like file size as one of the main reasons.

And if you're going to insist I need to replicate your tests, wouldn't it help to have some idea how, especially with all of your suggestions that there's a lot of special sauce needed to do a linearity measurement with the APx555? Yes, it was straight-away assumed you were bandwidth-limiting when you posted your optimized loopback. You're only now finally admitting (a) that your method does bandwidth-limit, despite your vehement protestations when I did that, and (b) that the results, as you've stated yourself, are the same (in your Benchmark DAC3 linearity example, which I also posted).

He continued:

amirm at audiosciencereview with emphasis by me said:
AP has done nothing wrong to apologize for. They have spent time and effort helping Jude do something he should have known at the start, and had the benefit of my explanation here. If Jude had asked AP to help him replicate my 2522, they would have given him different advice (although not clear they would have been able to give him ultimately what he wants).
Yes, Amir, we all have by now experienced the benefit of your explanation.

He's correct that I did not ask AP to replicate his 2522. I have an APx555 here, so I didn't see any reason to replicate the 2522. Even though I did not replicate the 2522 with the APx555 (and he apparently did), even he admitted we arrived at the same result. If you're having a hard time following all of this, again, I think that's exactly his objective -- because I think by now he realizes he's talked himself into a pickle. Bandwidth-limiting is deceptive when Jude does it. It's okay when Amir does it using a secret method imbued with decades of signal analysis experience. Oh, by the way -- same dang result.

When a few of his forum mates suggested that bandpass limiting does perhaps make sense for linearity measurements, he stuck to his guns (despite the fact that his loopback showing how it should be done on an APx555 does show bandpass limiting):

amirm at audiosciencereview said:
Two issues:

1. You are measuring what we don't hear. We hear the total signal.

2. There are devices that nail the response to the level we are measuring.

Linearity is the ultimate test of a DAC: that it has a straight line transfer function between input digital samples and output analog. That output analog must by definition include all contributions including noise and distortion.

Checking just the level after removing all noise and distortion is an academic exercise devoid of real world value.

Looking at that bolded part in the quote: In case you haven't figured it out after all this, that's exactly what he's doing, too.

When we show other measurements (be they FFT spectrum or THD, THD+N, and/or noise figures), we have an idea where the DUT's noise floor and distortion are. We can show linearity plots that take that into account (which I did in the first ones), and we can also try to get to the very limit of the DAC's ability to linearly decode the signal at the lowest levels (which requires bandpass filtering if the DUT's ability to linearly decode reaches signal levels below the across-the-audioband noise level of the DUT itself and/or the analyzer).
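
To make that distinction concrete, here's a hedged little simulation (assumed noise floor, invented levels -- not anyone's actual test) contrasting the two approaches. The wideband detector reports signal-plus-noise and flattens out at the noise floor; the bandpass (single-bin) detector keeps tracking the tone below it:

import numpy as np

fs = n = 48000                  # 1 s capture -> 1 Hz FFT bins
f0 = 997                        # tone frequency, landing exactly on a bin
t = np.arange(n) / fs

def measure(level_dbfs, noise_dbfs=-110):
    """Return (wideband, bandpass) level readings in dBFS."""
    x = 10 ** (level_dbfs / 20) * np.sqrt(2) * np.sin(2 * np.pi * f0 * t)
    x += 10 ** (noise_dbfs / 20) * np.random.randn(n)
    wide = 20 * np.log10(np.sqrt(np.mean(x ** 2)))              # signal + noise
    narrow = 20 * np.log10(np.abs(np.fft.rfft(x)[f0]) * 2 / n / np.sqrt(2))
    return wide, narrow

for level in (-60, -90, -110, -130):
    wide, narrow = measure(level)
    print(f"{level:5d} dBFS: wideband {wide:7.1f}  bandpass {narrow:7.1f}")

Both readings are "true" -- they just answer different questions, which is exactly the distinction being argued over.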

What I'm getting at here is that I think @amirm is quite urgently trying to convince folks that only he is qualified to make these measurements, and that anyone else's are merely a product of devious motives and/or should be dismissed immediately as unqualified, useless plebeian scribbles. So he'll discredit my measurements for taking noise and distortion through the audioband into account. And then when I post measurements that do not do that, he'll discredit those, too.

To me, he seems so set on establishing himself as the only guy qualified to make these measurements that he'll "pull rank" on not just novices like me, but even very qualified measurement engineers (some of whom developed and built the insanely precise tools he and I both use). And he'll also create criteria as arbitrary as his +/- 0.1 dB error that must be met to pass muster -- like having to come to the table with no fewer than his years of (no doubt impressive) experience, or even a proviso as outlandish and madcap as stating that one must have experience with -- or be in possession of -- his particular model of 25-year-old Audio Precision analyzer (which is admittedly a fine tool to this day) in addition to the one we've got (which is perhaps the finest audio analyzer created to date).

Never mind the fact that after all this hullabaloo those fictitious, useless plebeian scribbles of mine -- even according to him now -- mirror his own results with the same analyzer.
 
Jun 30, 2018 at 5:23 PM Post #67 of 203
So after all this...why is anyone wasting time pm'ing this guy for the facts???

Alex

:deadhorse::deadhorse::deadhorse:

Simply to make sure that later claims of ignorance can be reviewed against salient facts.

I've watched this unfold from the sidelines for some time and tried to reconcile the information overload from a generic science perspective. As I've alluded to before, the characteristics of good science that I look for are:
- clarity of method
- objective reporting free of editorial using hyperbole and sensationalism
- consistent reporting, which helps avoid issues such as graph scaling that concentrates or dilutes results
- attention to detail with empirical technique, so that there's assurance that the stated method is applied
- reference to standards, preferably as published by a standards organisation
- review of experimental error, which is where instrument accuracy and what it has been calibrated against become relevant
- comparison with other workers in the field

I acknowledge that the reports in a web forum are probably never going to have enough supporting detail to satisfy the criteria for publication in an academic journal. Even so, the points above are what I refer to when answering the question, "Are the results I'm reading about credible?" How this sits with an individual is a matter of personal judgment.

Boiling all that down, it's not unreasonable to do an "odd man out" elimination. And the "odd man's" results here, to my reading, are from @amirm.

Regarding the line about his old analyser, which @amirm admits is out of test: its being a de facto benchmark test bed for a given test does not appear to stack up as far as the instrument vendor is concerned. AP have published technical note TN-110, Legacy Instrument Migration to APx, to address the translation of test protocols from legacy analysers to current equipment.

If @amirm has devised some ground-breaking testing technique, it's useless closeted away from other eyes that could give it peer review, whether from a basis of theory or practice. To highlight my point, one example in the audio measurement field is the jitter test by Julian Dunn. He published in peer-reviewed journals, wrote a book, and, I suspect, influenced the content of the relevant testing standards.

I would point out that I was pleasantly surprised that @Jason Stoddard published a single test report. From his previous statements, I got the impression Schiit were never going there. I'd bet that without the contrary results published by @amirm we'd never have seen that report (so kudos to you, @amirm). Possibly, it also factored into Schiit's decision to drop the coin required to get themselves a current-generation, top-shelf analyser. Dunno, just musing.

In any case, the ball's with @amirm and time will reveal what he has to offer.
 
Jun 30, 2018 at 5:31 PM Post #68 of 203
Amir has published new measurements using my unit. Later in the thread he also used Jude's project file. He's also offering to send his now to anyone who asks. I'm assuming a PM here would do the trick.
 
Jun 30, 2018 at 5:41 PM Post #69 of 203
Amir has published new measurements using my unit. Later in the thread he also used Jude's project file. He's also offering to send his now to anyone who asks. I'm assuming a PM here would do the trick.
Great!

I'm asking. Post it here, @amirm, along with the project files that have been requested ad nauseam.
 
Jun 30, 2018 at 6:06 PM Post #70 of 203
Sorry guys...with all the data that Jude has openly supplied, and this other guy's defensive stance from the get-go...we have all the data we need on this product IMO.
What on God's Green Earth do we need to validate anything from this other source?

Jude, working with AP and the real experts on this measuring device, has spoken -- not pulled rank and demanded that all bow before him...all the data is here, and from atomicbob...

It's over for me.

Going back into the fire just doesn't make any sense here.

If you want to play audio ambassador, fine, but it's NOT needed in my opinion.

Again...for me it's over. Back to listening with my Schiit….

Alex
 
Jun 30, 2018 at 9:21 PM Post #71 of 203
Amir has published new measurements using my unit. Later in the thread he also used Jude's project file. He's also offering to send his now to anyone who asks. I'm assuming a PM here would do the trick.

Alcophone, we'd love to have a look at it to see what's going on. When it's back, please contact me via PM or at jason@schiit.com and we'll get you a call tag to bring the Yggdrasil back to have a look at it and report what we find. I'll send you a loaner for the time we have it as well, so it won't be an inconvenience.
 
Jun 30, 2018 at 9:36 PM Post #72 of 203
I would point out that I was pleasantly surprised that @Jason Stoddard published a single test report. From his previous statements, I got the impression Schiit were never going there. I'd bet that without the contrary results published by @amirm we'd never have seen that report (so kudos to you @amirm ). Possibly, it also factored in to Schiit's decision to drop the coin required to get themselves a current generation, top shelf analyser. Dunno, just musing.

Actually, the decision to significantly step up our ATE (automated test equipment) roster and our capability to provide reports has been in play for some time--and has been documented in the chapters of Schiit Happened that I've published this year. To date, we have purchased six Avermetrics AverLabs for in-line test and one APx555 for development test. The reasons we've done so are covered in the Schiit Happened chapters, but, in summary:
  • More automated test was needed, especially for the 16-bit DAC line, since they are parallel input DACs and can "lose" a bit. This is similar to discrete R2R DACs that require 100% production testing, due to the complexity of the circuit (the loss of one resistor or driver could significantly affect results). In addition, automated testing significantly streamlines complex products that change frequency response, like Loki Mini and Mani.
  • Reporting from the Stanfords was, to be frank, not great. Getting something like a standard AP test PDF out of them was painful and extremely labor-intensive. AP has been courting us for years, and the maturity of their software and their reporting capabilities finally swayed us to get an APx555.
We are currently evaluating what standard test results to provide with new product introductions, but when we do provide such results, they will be in the form of an APx555 PDF document, not single screen captures. This is important, because it will document all settings for the test and make the results easily replicable. In addition, each report will clearly state that the results are from a representative production sample of the product, and that if you get different test results (and verify they are not spurious), we'd be happy to bring the product back to re-evaluate and share our results.

All of this is simply an extension of the internal changes we're making to improve quality and improve support. For example, if we can't replicate a customer's problem with a unit back for service, we now contact that customer, explain we cannot find a problem, and ask them for more details about their system to see if we can replicate the problem. We also provide Avermetrics and/or APx555 reports to the customer to verify operation is within specifications.

And again, as I said in my previous post, if you have any weird test results, subjective problems, or operational glitches, please contact info@schiit.com and we will get it taken care of.
 
Jun 30, 2018 at 9:56 PM Post #73 of 203
Alcophone, we'd love to have a look at it to see what's going on. When it's back, please contact me via PM or at jason@schiit.com and we'll get you a call tag to bring the Yggdrasil back to have a look at it and report what we find. I'll send you a loaner for the time we have it as well, so it won't be an inconvenience.
Hey Jason, happy to do so if it's of value to you!
@jude, do you want to take a look as well?

FWIW, I love my Yggy and have never had a better sounding DAC.
 
Jun 30, 2018 at 11:21 PM Post #74 of 203
Jason Stoddard said:
Actually, the decision to significantly step up our ATE (automated test equipment) roster and our capability to provide reports has been in play for some time--and has been documented in the chapters of Schiit Happened that I've published this year...
@Jason Stoddard thanks for the clarification. I was aware from Schiit Happened that you'd increased your instrumentation stable, but I honestly thought you were planning to keep the data in-house. Apologies for the misunderstanding.

I think that it's great that you're setting up a standard test report format. It speaks to repeatability, objective reporting and transparency. So, kudos to you.

Alcophone, we'd love to have a look at it to see what's going on. When it's back, please contact me via PM or at jason@schiit.com and we'll get you a call tag to bring the Yggdrasil back to have a look at it and report what we find. I'll send you a loaner for the time we have it as well, so it won't be an inconvenience.
All the best with what you find. Unless you're able to source this "custom filter" that @amirm refers to, you won't be able to see what he sees. And that's where there appears to be a fork in the road of transparency and objectivity.

amirm at ASR (with minor edit for readability) said:
... the linearity results (..) are so easy to read and heavily disputed by others at this point. This is the only test that is kind of "custom." In order to get rid of noise and distortion contributions in both the analyzer and DAC, a heavy handed filter is used to narrowly filter the source frequency out of the digital generator (over USB).

It would be helpful for all if the technical details of this "custom" filter were posted publicly. Notably, @jude responded quickly to the challenge I offered around test files (so kudos there), which shows a willingness to be part of open communication. It would be ideal if this willingness were shared more widely.

However, you can only work to the level that people are prepared to share with you. @Alcophone deserves high praise for his efforts, because by offering a piece of equipment that can potentially be tested by @jude as well, there's one significant variable that can be ruled out as a source of contention. I appreciate the effort and the desire you have to better understand the capability of the equipment you supply. I'll be interested to read whether it's a problem that doesn't exist, or something else.
 
Jul 1, 2018 at 1:02 AM Post #75 of 203
Unless you're able to source this "custom filter" that @amirm refers to, you won't be able to see what he sees. And that's where there appears to be a fork in the road of transparency and objectivity

It's not a fork in the road. It's one insane loudmouth driving his truck through a corn field, thinking he's the second coming of Dale Earnhardt.
 
