Time-domain Transient Response Testing of RME ADI-2 DAC FS & Schiit Bifrost Multibit (scope captures)
Jul 28, 2021 at 5:32 PM Post #16 of 34
@_js_

Thank you for your thoughtful and thorough response. I don't think either the number of trials or the required percentage of correct identifications is undue. In psychology, a confidence threshold below 90% would not be acceptable, and given the arguably similar nature of the inquiry here, that threshold is quite reasonable. You need enough trials to reduce the likelihood that the result is due to chance, but I am confident you understand that well.

I would never advocate for short trials. I once conducted a blind, multiple-trial, multiple-subject listening test at a Head-Fi meet that I hosted. FYI, none of the subjects could distinguish a 320 kbps MP3 file from the lossless master it was made from. The same song was used, the same gear, the same volume, and the sample played in each of the two plays per trial was the same 2-minute section every time. That is what I would suggest. Use a quality track that you know very well. You would of course need an assistant who would vary the order in which the conditions are presented across trials, e.g. trial one DAC 1 then DAC 2, trial two DAC 1 then DAC 2, trial three DAC 2 then DAC 1, and so on, mixing it up to avoid any obvious pattern; this is easy to do using Excel, for instance.
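As an aside, here is a minimal sketch of how an assistant might generate that randomized presentation order, assuming Python rather than Excel; the trial count, seed, and DAC labels are placeholders, not details from the test described above:

[code]
import random

def make_schedule(n_trials=20, seed=None):
    """Generate a randomized A/B presentation order for each trial.

    Each trial presents both DACs once; which one is heard first is
    chosen at random so no obvious pattern emerges.
    """
    rng = random.Random(seed)
    schedule = []
    for trial in range(1, n_trials + 1):
        order = ["DAC 1", "DAC 2"]
        rng.shuffle(order)  # randomize which DAC plays first this trial
        schedule.append((trial, order[0], order[1]))
    return schedule

# The assistant keeps this sheet; the listener never sees it.
for trial, first, second in make_schedule(seed=42):
    print(f"Trial {trial:2d}: play {first}, then {second}")
[/code]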

You could break the 20 trials up over a few sessions if need be (although that is less than ideal), as long as you did so at the same time of day and in the same location. It would be really worthwhile. I will be shocked if there is an audible difference over the 20 trials. There is a reason none of the "golden eared" pro reviewers ever engage in such a publicly scrutinized experiment: they fear they will fail and no longer be able to make money or maintain the social prestige of their celebrity.

You seem like a very open-minded person. I do hope that in time you have the opportunity to do this. It might be very eye-opening. Cheers.
 
Last edited:
Jul 28, 2021 at 6:13 PM Post #17 of 34
@Sonic Defender

I thought you meant multiple trials of 20 rounds each, e.g. like 60 or 80 or 100 rounds total. If just 20, then yes, that doesn't seem unreasonable.

But one single 2-minute section of a song isn't enough time. And yes, I have done this. Logic Pro actually has an ABX function built into it where you can try to tell the difference between the 256kbps AAC file and the 24/96 (or whatever) master file from which it was created. In this very forum (if memory serves), and in others, I've mentioned how difficult it is to tell the difference between them going back and forth within a single song, even over and over again. For years and years I listened to 256 AAC because I wanted most of my library on my iPod / iPhone, and that was the only way to get it all to fit. And before I ripped my entire library of CDs I wanted to make sure that 256 AAC was good enough. And I determined that it was. At least for my PX-100's, anyway!

So I know what you are talking about.

But, over an hour-plus listening session (or over an evening), 256 AAC is definitely more fatiguing than lossless. It is definitely subtle! No doubt! But there is a difference and it is significant. The lossy codecs are pretty brutal on the fidelity of the sound. The kind of small differences that are often pointed to among DACs and amps and so on are absolutely dwarfed by how badly a lossy codec screws things up. If you haven't read it, this is a great discussion of it:

MP3 vs AAC vs FLAC vs CD

But even so, 2 minutes isn't enough time to hear it reliably.

I've participated in more than one very scientific and controlled blind listening test here at Sonos. We have special equipment for doing just this, in more than one listening room. I know what's involved. And I know the benefits of doing this sort of study.

However . . . just as big a component of "tuning" and dialing in the sound during product development here at Sonos comes from feedback from our Sound Board, which includes people like Rick Rubin and Giles Martin--recording engineers, and other industry professionals, as well as artists, etc. And we give them one of our products to take home and listen to over an extended period of time. You might say that's not "scientific", and maybe it's not. It's uncontrolled. Maybe they just don't like the form factor and their prejudice makes them hear it as sounding worse? Maybe they have a cold coming on and it's affecting their hearing, etc. Yeah, maybe, but we're not trying to convince a scientific audience or get published in a scientific journal, or meet some rigorous definition of what is evidence and what is not. We want to make great sounding products, and we look to the people who really know what great sound actually is, to help us meet that bar.

We take the input from these people and we are grateful for it. It's not the be-all and end-all, to be sure. I mean, we have multi-million dollar world-class anechoic chambers (both a 4-pi and a 2-pi) to accurately measure our products and our transducers, so that should tell you something.

All that said, I definitely feel like some of the "golden ears" people out there have gone way too far to the other side and aren't being very scientific or careful or, well, measured, in their listening, and those people could do with some (probably painful and embarrassing) experience with double blind listening!

(Reminds me of a TV show I saw once where two people were having a wine tasting contest, and they asked a third person to pour them both two glasses, labeled "A" and "B", and have them tell what wine it was, its faults, its merits, etc. They both went on and on and were each convinced the other was totally wrong, and to settle the matter they asked the third person to reveal the identities of the wines, and he said "the bottle is on the counter in the pantry". Too funny!)

So . . . I don't know . . . we're probably more or less on the same page, just coming from opposite directions and contexts. I very much appreciate your thoughts and posts either way, though.

Cheers!
 
Jul 28, 2021 at 9:50 PM Post #18 of 34
@_js_ Absolutely,

It sounds like we are of a very similar mind on much of this. I would need some pretty solid first-hand evidence for the theory that lossy formats cause fatigue, but I certainly can't discount it, as I would be lying if I said I had evidence to the contrary. And thank you as well; it is always pleasant to have an interesting and civil exchange of ideas.

I am looking forward to investing in a lifestyle speaker system for my upstairs in the near future, and Sonos would most certainly be a candidate. At one time I had a little Kanto system with the desktop mains and a sub. It was actually quite pleasant and generally well designed and built. I have also had the pleasure, albeit sadly only for very short listens, of hearing some Sonos systems, and I know that they are next level. I appreciate design as a craft in anything, and I think lifestyle speaker systems can be a wonderful example of this.

Hopefully we get a chance to chat more in other threads as well, although over the last year I have been spending less time here and more over at ASR. That isn't because I like this amazing community less; I have simply become re-involved with speaker-based listening again, and it sounds like you spend time at ASR as well, so you would know why that is a better fit for speaker-based audio discussions. Keep well.
 
Jul 29, 2021 at 9:49 AM Post #19 of 34
A comment on the DAC comparison. It seems you like the ladder type of sound. Follow your personal preference; you don't need to perform a battery of scientifically rigorous tests to determine what you prefer. In this case both the technology and the sound are different. The Bifrost is on the more natural-sounding end of the scale, while the ADI-2 represents delta-sigma technology (the most common), which doesn't exist without digital filters. The ADI-2 uses noise shaping and the Bifrost does not, which is why the Bifrost's measured noise is much higher; you shouldn't worry about it. Besides, it is normal ladder switching noise. In a ladder design like the Bifrost 2, digital filters are optional, but the designers didn't give you a way to disable them.

A tip: look at the 1kHz square wave plot of the Bifrost at 192ks/s. It seems that at this sample rate you have bypassed some of the digital filter processing. The glitch seems to be caused by the analog section (not good). For the best sound quality I suggest using a software resampler on the PC and matching the ladder's internal frequency (which seems to be 192kHz). For testing I use Foobar2000 with the SoX add-on; it gives better quality than any onboard oversampling engine. Use an integer multiplier, which means 192kHz for 48k sources and 176.4kHz for CD sources.
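To make the integer-multiplier idea concrete, here is a rough sketch in Python with SciPy; it is only an illustration of the rate arithmetic, not the Foobar2000/SoX chain itself, and the 192kHz ceiling is just the assumption from above:

[code]
import numpy as np
from scipy.signal import resample_poly

def integer_multiple_rate(source_rate, dac_max_rate=192_000):
    """Largest integer multiple of the source rate that the DAC accepts."""
    factor = max(dac_max_rate // source_rate, 1)
    return source_rate * factor, factor

def upsample_integer(x, source_rate, dac_max_rate=192_000):
    """Upsample by an integer factor, e.g. 44.1k -> 176.4k or 48k -> 192k."""
    target_rate, factor = integer_multiple_rate(source_rate, dac_max_rate)
    return resample_poly(x, up=factor, down=1), target_rate

# Example: one second of a 1 kHz tone from a CD-rate source.
fs = 44_100
tone = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)
upsampled, new_fs = upsample_integer(tone, fs)
print(new_fs, len(upsampled))  # 176400 samples/s, 4x as many samples
[/code]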

At the moment there are two ways to reconstruct the analogue waveform. One is to use an upsampler to shift the Nyquist images away from the audio band. During upsampling you are actually filling in the missing samples with guessed values, so if you do it, do it right! Unfortunately all commercial implementations are limited by processing power. The alternative is to leave reconstruction to our brain. This is called NOS (non-oversampling), with no digital filtering. On the scope screen a 1kHz NOS CD waveform will look stepped, and 7kHz will look completely messed up, but our brain is able to reconstruct everything from the raw (filterless) DAC output. Our brain also seems unaffected by the presence of mirror images. One thing I must say straight out: Nyquist images do not fold back into the audio band, unlike aliasing during the ADC process. However, there can be intermodulation products from non-linearity in our analog chain, which is why the analogue section should be of good quality (class A amplifiers are best, plus good transducers).

A NOS DAC should be the next step in your journey. :)
 
Last edited:
Jul 29, 2021 at 5:11 PM Post #20 of 34
A comment on the DAC comparison. It seems you like the ladder type of sound. Follow your personal preference; you don't need to perform a battery of scientifically rigorous tests to determine what you prefer. In this case both the technology and the sound are different. The Bifrost is on the more natural-sounding end of the scale, while the ADI-2 represents delta-sigma technology (the most common), which doesn't exist without digital filters. The ADI-2 uses noise shaping and the Bifrost does not, which is why the Bifrost's measured noise is much higher; you shouldn't worry about it. Besides, it is normal ladder switching noise. In a ladder design like the Bifrost 2, digital filters are optional, but the designers didn't give you a way to disable them.

A tip: look at the 1kHz square wave plot of the Bifrost at 192ks/s. It seems that at this sample rate you have bypassed some of the digital filter processing. The glitch seems to be caused by the analog section (not good). For the best sound quality I suggest using a software resampler on the PC and matching the ladder's internal frequency (which seems to be 192kHz). For testing I use Foobar2000 with the SoX add-on; it gives better quality than any onboard oversampling engine. Use an integer multiplier, which means 192kHz for 48k sources and 176.4kHz for CD sources.

At the moment there are two ways to reconstruct the analogue waveform. One is to use an upsampler to shift the Nyquist images away from the audio band. During upsampling you are actually filling in the missing samples with guessed values, so if you do it, do it right! Unfortunately all commercial implementations are limited by processing power. The alternative is to leave reconstruction to our brain. This is called NOS (non-oversampling), with no digital filtering. On the scope screen a 1kHz NOS CD waveform will look stepped, and 7kHz will look completely messed up, but our brain is able to reconstruct everything from the raw (filterless) DAC output. Our brain also seems unaffected by the presence of mirror images. One thing I must say straight out: Nyquist images do not fold back into the audio band, unlike aliasing during the ADC process. However, there can be intermodulation products from non-linearity in our analog chain, which is why the analogue section should be of good quality (class A amplifiers are best, plus good transducers).

A NOS DAC should be the next step in your journey. :)

Hi sajunky! So, regarding the 192ks/s plot, I have not bypassed any DSP. There is no means to do that on the Bifrost, and on the ADI-2, the filter was still set to "SD Sharp" (in the original post). More importantly, I was using a program called "AudioTest" to generate bit-perfect 1kHz square waves at 192ks/s. That's what I wanted. If I had started with the square wave at a 48kHz sample rate and upsampled it using some kind of DSP, I would not have gotten a perfect square wave going into the DACs.

And great observation re: the noise issue! Thanks for that! I also had a thought, that maybe that small amount of noise actually helps me perceive a smoother and more natural sound!?! I remember reading that some early CD players actually injected a small amount of noise for this very reason (specifically, I know the Carver CD player did).

All,

I edited my first post to correct some of my ignorant and incorrect statements and to add a disclaimer at the very beginning, so I don't mislead or misinform anyone else! Hope it's enough.

OK, so I've been learning about minimum-phase and linear-phase DSP filters, and I can now define them in practical, consequential terms. A linear-phase filter has a phase-shift (delay) vs. frequency function that is a straight line (though it will, in general, have a slope, i.e. a constant group delay). The consequence is that the shape of a waveform that goes in is preserved better than with a minimum-phase filter.

A minimum-phase filter will distort the wave shape more, as its phase shift varies with frequency in a non-linear way. But its advantage is that you can design it so that there is no pre-ringing. Pre-ringing, as it turns out, is more audible than post-ringing: if there is a bit of ringing ahead of a step, for example, listeners won't hear the step as an ideal "thunk" but will hear a little "chirp" along with it. So even though a minimum-phase filter distorts the wave, the transient edges have a crisper onset, which is desirable to some degree.
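For anyone who wants to poke at this themselves, here is a minimal sketch comparing a generic linear-phase low-pass FIR with a minimum-phase version of it using SciPy; these are illustrative filters, not the actual ones inside either DAC:

[code]
import numpy as np
from scipy.signal import firwin, minimum_phase

fs = 96_000  # sample rate for this illustration only

# Linear-phase low-pass FIR: symmetric taps, constant group delay,
# impulse response rings both before and after its center (pre- and post-ringing).
h_linear = firwin(numtaps=129, cutoff=20_000, fs=fs)

# Minimum-phase counterpart: energy is pushed toward the start, so there is no
# pre-ringing, at the cost of phase shift that varies non-linearly with frequency.
# (SciPy's homomorphic method returns roughly the square root of the original
# magnitude response, which is fine for illustrating the phase behavior.)
h_minimum = minimum_phase(h_linear, method="homomorphic")

print("linear-phase peak tap index: ", int(np.argmax(np.abs(h_linear))))   # mid-filter -> pre-ringing
print("minimum-phase peak tap index:", int(np.argmax(np.abs(h_minimum))))  # near tap 0 -> no pre-ringing
[/code]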

I decided (unwisely) to try listening to square waves on the ADI-2 with the various filters. At 1kHz, I couldn't tell a difference (at least in the limited time I played with them) so I went down to 100Hz. At 100Hz, I'm pretty darn sure I can hear the difference between NOS and Slow filters. The NOS has more high-frequency harmonics, and I can hear them. I have a trained ear in that regard, from many many hours voicing and tuning pianos. It's definitely a small and subtle difference. (Also, dang, that did a number on my ears! I felt a little loopy and weird after just a couple minutes listening to square waves. Definitely not going to do that again!)

So, you may be wondering, as I have been, why have a DSP filter on the DAC output at all? Well, the answer has to do with frequency response. If you take the stepped output of the DAC (mentioned above by sajunky) and apply just enough high-frequency filtering to smooth out those steps, it turns out that the frequency response up near half the sampling rate (fs/2) takes quite a hit! And if you don't filter at all, you have a lot of high-frequency content added by the unnatural steps that come from holding the output at each sample value. Hence all these DSP filters that have evolved, and hence oversampling, etc. If we sampled all our music at 192kHz to begin with, things would be a lot simpler, as fs/2 would be 96kHz, well above the audio band.
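A quick numeric check of that frequency-response hit, assuming the raw held ("stair-step") output follows the textbook zero-order-hold sinc roll-off; this is a generic formula, not a measurement of either DAC:

[code]
import numpy as np

def zoh_droop_db(f_hz, fs_hz):
    """Roll-off of a raw zero-order-hold output: 20*log10|sin(pi f/fs)/(pi f/fs)|."""
    return 20 * np.log10(np.abs(np.sinc(f_hz / fs_hz)))  # np.sinc(x) = sin(pi x)/(pi x)

for fs in (44_100, 96_000, 192_000):
    print(f"fs = {fs:6d} Hz: droop at 20 kHz = {zoh_droop_db(20_000, fs):6.2f} dB")
# Roughly -3.2 dB at 20 kHz for 44.1 kHz, shrinking to about -0.2 dB at 192 kHz,
# which is why a DAC either oversamples first or compensates for the droop digitally.
[/code]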

But it may well be that 44.1 or 48 is good enough, and that with all this advanced DSP and oversampling there is no audible benefit to 96 or 192.

I'm continuing to learn and will post more at a later time.

Cheers!
 
Last edited:
Jul 29, 2021 at 5:50 PM Post #21 of 34
I did say that on some DACs, choosing a specific source sample rate (usually the highest supported) forces the DAC not to oversample. As a consequence, some filters may not work as specified, which is a good thing. I know you didn't do anything specifically. :)

You can test with a square wave: some specific sample rates do not produce ringing on the output, and that gives a clue. BTW, digital filtering is not placed on the output but before the D/A conversion. It kills natural reverberations on a decay, most noticeable on piano notes or the frequency transitions of a gong. You know how a piano sounds, so it will be evident to you what is wrong. The ADI-2 loses these natural properties more than the Bifrost. It is a big advantage of a ladder DAC.
 
Last edited:
Jul 29, 2021 at 6:14 PM Post #22 of 34
I did say that on some DACs, choosing a specific source sample rate (usually the highest supported) forces the DAC not to oversample. As a consequence, some filters may not work as specified, which is a good thing. I know you didn't do anything specifically. :)

You can test with a square wave: some specific sample rates do not produce ringing on the output, and that gives a clue. BTW, digital filtering is not placed on the output but before the D/A conversion. It kills natural reverberations on a decay, most noticeable on piano notes or the frequency transitions of a gong. You know how a piano sounds, so it will be evident to you what is wrong. The ADI-2 loses these natural properties more than the Bifrost. It is a big advantage of a ladder DAC.

There is no way to select the sample rate on Bifrost. It automatically determines the sample rate of the incoming signal, and configures itself for the sample rate.

And yes, of course, digital filtering must be done in the digital domain! LOL! Um, yeah, how else would it work? I shouldn't have said "DAC output", maybe. What I meant is on the output side, generally. Like A/D is the analog input side, and D/A is analog back out (output). You need filtering on the input signal to avoid aliasing, as is well established and generally understood. It can be (and usually is) a combination of digital and analog filters. But you don't need a filter on the sample-and-hold section of the input. Sampling the amplitude of the analog input signal at discrete times with a sample-and-hold stage isn't problematic, because when the sample-and-hold block releases the current value and acquires the new value of the analog amplitude, that step doesn't contribute any noise or spectral problems.

On the output, however, those steps do cause problems, so designers generally feel they must smooth them out. That smoothing causes a significant frequency-response drop up near 20kHz if the sample rate is 44.1kHz. You can deal with that by using digital signal processing in addition to any analog smoothing components on the output. Together, these two can make for a ruler-flat frequency response and a smooth output voltage over time. Conceptually, they are designed and thought of together; hence my statement. And this is all typically part of the digital-to-analog chip.

On the ADI-2, with the Short-delay sharp filter, all sample rates would show that ringing after the rising edge. It's a consequence of the impulse response filter on the digital information coming in.

As for listening to piano sounds, THAT IS BRILLIANT! Thank you! Of course! I haven't listened to any piano-only pieces on the ADI-2 yet! Great idea. Love it. Because, yes, I very much do know what a piano sounds like, and what the decay of piano strings sounds like. I will definitely do this. Awesome suggestion!
 
Last edited:
Jan 13, 2022 at 9:19 PM Post #23 of 34
TIME-DOMAIN TRANSIENT RESPONSE TESTS OF ADI-2 DAC FS & BIFROST MB DAC’s

[Edit]
So I've learned a ton since I first posted this, less than a week ago, so I am adding edits to this original post (but preserving all the text, not removing anything) so that my errors don't mislead anyone or cause people to draw false conclusions. The main thing this first post does is capture the DSP filter response of the RME ADI-2 DAC FS, with the filter set to "SD Sharp", and the Schiit Bifrost Multibit. Later in this thread I capture the other filters.[/Edit]


Recently, I decided to take my work headphone listening setup to the next level, and get better headphones and a DAC and an amp (or a DAC / Amp combo unit) to go with them, so I started digging into head-fi and online reviews and websites again, wondering excitedly what new stuff from my favorite (or exciting new) companies had come along while I had been listening contentedly to my Sonos PORT +TIDAL lossless --> Bifrost Multibit (Gen 1) --> Asgard 2 --> Audeze LCD-2 set up.

Of course, one of the first things I did was go to Schiit’s website to see what was on offer, and was thrilled to see a Bifrost 2 with an even better multibit implementation!

Because, I have to say, of all my hi-fi audio purchases, the Bifrost Multibit was one of the best and most important. Finally, finally, I was experiencing the kind of audio bliss I had been seeking, and even from just my Senn 595’s. To my ears, there was such a lovely detail and air and cleanness to the music, without any of the edge and digital glare I had been experiencing, to one extent or another, for years. I had sometimes heard that the DAC really didn’t make much difference and that if you thought you were hearing significant differences that you were fooling yourself (and the same for amps, provided they had enough power to avoid clipping), but after years of improving my headphones, and my sound-file source quality, and trying higher powered amplifiers, and not really feeling like there was much of a difference, I was ready to take advantage of Schiit’s 15 day return policy and give a high(er) end DAC a try to see if that was the missing piece of the puzzle in my quest.

And, well, it was. Was it ever! My Bifrost 1 Multibit was a revelation to me. I was so happy with it!

So, I was pretty excited about Bifrost 2! And I figured I’d go balanced and get the Jotunheim 2 (all to go with the Audeze LCD-XC 2021 carbon’s I had on order). Woo hoo! Yea!

While I was waiting for my Schiit order to get fulfilled, I got the LCD-XC’s, and while I did love their sound on the whole, I personally found it a bit too hot in parts of the treble to be as non-fatiguing as I would like. So then I was also thinking about how to get some parametric EQ’ing into my setup as well, as I do love the LCD-XC’s and felt they just needed a couple tweaks.

And in the process of researching all this, I ran across the Audio Science Review measurements of the Schiit multibit DACs, and they gave me more than a little bit of pause! How could the Bifrost Multibit measure so badly? What was going on? Had I just never heard a truly great DAC? Or, I don't know, something?

So I dug deeper, and ran across the RME ADI-2 DAC FS, which did measure very, very well, and, it had not only a very highly regarded DAC section, but also, a very powerful, low distortion headphone amplifier, and a built in 5 band parametric EQ to boot! All for the same price as Bifrost 2 + Jotunheim 2. Plus, I could get it in just a couple days, instead of 6-8 weeks. So I cancelled my Schiit order, and pulled the trigger on the RME ADI-2 DAC FS.

I excitedly read the quick start section of the truly awesome manual, hooked it up, and started listening and playing with the EQ. Immediately I felt it was pretty good, at the very least, and I actually didn’t mind the interface, and I loved all the customizability. I spent a number of hours each day listening to music through it, into the LCD-XC’s, as well as some of my other headphones (but mostly the LCD-XC’s).

And . . .

For me--to my ears--it just wasn’t as good as the Bifrost. It was good, no doubt. Just not as good. For me.

Even with EQ helping things out I still preferred the listening experience through my Schiit stack. (And no, it wasn’t due to volume differences, I don't think.)

To my ears, the ADI-2 just didn’t sound as good. It was more fatiguing, and less enjoyable, less realistic on acoustic music, flatter, and astonishingly, somehow also less detailed?!? How could that be?!? It measured so much better than the Bifrost! What was going on?

I started with the assumption that the ASR measurements just weren’t telling the whole story and that my ears were picking up on the rest of the story, so to speak. And quickly from there, I realized, that, indeed it was possible that there was more to the story. Those reviews were frequency domain measurements almost entirely. I mean, yes, OK, jitter, but what about transients? What about, say, a square wave? I felt that there was maybe something to this—or rather that it was a good place to start to investigate things at least. It was significant to me that I found the differences between these two DAC’s most apparent on acoustic music.

So, start simple. Use a tone generator program to make digitally perfect square, saw, pulse, and triangle waves and feed them into both DAC’s to see if there would be differences, and to look not on a spectrum analyzer, where you see frequency vs dB, but on an oscilloscope where you see voltage vs time. Maybe then I could start to see the differences that I thought I was hearing? I felt it was probably a long shot, but I still wanted to do the work to find out.

But honestly, I really didn’t expect to see a lot of difference. Not anything close to the differences I did find. And I didn’t expect the ADI-2 to deviate as much as it does from the ideal of the input wave. [Edit] The ADI-2's Short-Delay Sharp filter is what is called a minimum-phase filter, which can be thought of as a minimum-delay filter. It does distort wave-shapes that go into it, but its main advantages are two-fold: it has no pre-ringing ahead of an impulse or step, and it decays faster than a linear-phase filter (which does a better job of preserving wave-shape, but does have pre-ringing). The ADI-2 has many other filters, however, and if you want, you can change them--see follow-up posts for more information.[/Edit] I couldn’t believe it when I first saw the ADI-2’s output, of a square wave input, on my oscilloscope! I thought for sure there was something wrong! That I was over-driving it or that the 1 MegaOhm impedance of the scope was the problem. I had fed it a -6dBFS (6 decibels down from digital full-scale) square wave, at 96kHz sample rate, which I had thought was enough headroom, but I immediately changed that to -12dBFS. No change. Then I consulted with one of the other engineers here at Sonos, who is one of the people who designs the amplifier sections of our players, and had been involved with the line-out section of PORT. He suggested a 10kOhm load instead of going directly into the scope.

So, I changed my setup and soldered 10kOhm 1/4 watt resistors across the ends of the left and right RCA cables, and used a 1GHz, 1MegaOhm, 1pF N2795A Keysight active differential probe (so as to avoid any potential for ground loops), to measure the voltage across the resistor into the Keysight DSO404A 4GHz 20GSa/s oscilloscope.

But the results didn't change.

Apparently what I was seeing from the ADI-2 wasn’t due to overdriving or impedance mismatching or clipping. I now suspect that it isn't a bug. I suspect it is part of a feature. I suspect that the AKM DAC chip is limiting the impulse response due to some kind of finite impulse response (FIR) filter (a Parks-McClellan design, perhaps). But I’m getting ahead of myself. Backing up . . .

Here is what I saw, here is what I was getting from it:

adi-2-dac-96-1khz-square-fit.jpg

To me, this looks like fairly bad amplifier ringing! (Here’s an example of an amplifier ringing like that, for reference: PS Audio Transients 2.)

I increased the sample rate to 192kHz, and things got a little bit better, but not very much better:

adi-2-dac-192-1khz-square-fit.jpg

Now I was really curious to see how the Bifrost Multibit would fare! Would it be just as bad? Marginally better? Worse?

Turns out, it was definitely better! [Edit] Not actually better or worse, necessarily, just different from the ADI-2's SD Sharp. But if you change the ADI-2's filter to Sharp, or SD LD, you get something pretty close to the Bifrost.[/Edit]

bifrost-96-1khz-square.jpeg

And let’s just compare that against the ADI-2 with the same voltage / divisions scale on the scope to make things fair:

adi-2-dac-96-1khz-square.jpg

The ringing on the ADI is so large it goes off scale. The ringing is about 40 percent of the entire amplitude of the square wave. And note how it is not symmetrical: it is not reversible in time, the way the original signal that went in was.

Not so with the Bifrost. It maintains very good time-symmetry. [Edit]This is just a consequence of the linear-phase nature of the filter used in the Bifrost, and the ADI-2 can use a similar filter.[/Edit] The signal could be reversed in time and be pretty much the same. There are slight differences, but this is to be expected because we are asking the output stage to sustain such a large slew rate (voltage / time) and then stop on a dime.

The Bifrost also maintains better frequency domain fidelity to the input signal than the ADI-2. How do I know that without showing a spectrum analyzer output (or FFT analysis—which I plan on doing, btw)?

Well, what we are seeing with the ADI-2 DAC is definitely not the sum of the Fourier components that fall within the half-sample-rate bandwidth (or even just the audio bandwidth), leaving out all of the higher-order ones. A square wave has only odd harmonics of the sine-wave function. So we would have 1kHz, 3kHz, 5kHz, 7kHz, 9kHz, 11kHz, 13kHz, 15kHz, 17kHz, and 19kHz, at least. That is 10 partials, which should make a wave that looks very much like the Bifrost’s output, and not very much like the ADI-2’s. If you check out this Wolfram MathWorld link, you can see a square wave get approximated by more and more partials, shown in different colors on the graph. There are only five partials shown, so 10 would get you a lot closer to an ideal square wave than what is shown in the last sum of partials there, but you can already see where it is going, and it’s not going towards the ADI-2’s output. It’s going towards the Bifrost’s.
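Here is a small sketch of that partial-sum idea, summing only the odd harmonics of a 1kHz square wave that fall below half the sample rate; the sample rate and duration are arbitrary illustration values:

[code]
import numpy as np

def bandlimited_square(f0=1_000, fs=96_000, duration=2e-3):
    """Sum only the odd harmonics of a square wave that fall below fs/2."""
    t = np.arange(0, duration, 1 / fs)
    y = np.zeros_like(t)
    k = 1
    while k * f0 < fs / 2:                      # keep only the "legal" partials
        y += (4 / np.pi) * np.sin(2 * np.pi * k * f0 * t) / k
        k += 2                                  # a square wave has odd harmonics only
    return t, y, k - 2                          # k - 2 is the highest harmonic order kept

t, y, highest = bandlimited_square()
print(f"highest partial kept: {highest} kHz; peak value: {y.max():.3f}")
# The flat top sits near 1.0 but ripples, and the edges overshoot (Gibbs ringing) --
# that ripple is what an ideally band-limited square wave really looks like.
[/code]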

So, if the Bifrost Multibit is this good at 96kHz, what about at 192kHz? See for yourself:

bifrost-192-1khz-square.jpg

Look at that thing. Other than a little bit of overshoot at the leading edge and a finite slope on the rising and falling traces, it’s nearly a perfect square wave!

What about going down in sample rate? Well, here are images of ADI-2 and Bifrost at 48kHz and 44.1kHz:

adi-2-dac-48-1khz-square.jpg

bifrost-48-1khz-square.jpeg

adi-2-dac-44p1-1khz-square.jpg

bifrost-44p1-1khz-square.jpeg

Again we see the same differences. With the Bifrost, there is a lot less flatness on top and bottom, but it’s still doing a pretty good job of approximating a square wave. The ADI-2, on the other hand, still looks like a badly ringing amplifier stage that was fed a square wave, and the ringing lasts longer, and is at a lower frequency.

So, let’s up the ante here and push things even more. Let’s do a 12kHz square wave at 96kHz sample rate into both and see what happens. That’s only 8 samples per entire waveform. Not much to go on! How well will the reconstruction filters work on both DAC’s? Let’s see:

adi-2-dac-96-12khz-square-no-offset.jpg

bifrost-96-12khz-square.jpg

What is going on with the ADI-2 now? That’s not even symmetrical about ground! It’s not even symmetrical flipped around its midline! It’s off-scale on the bottom and not on top, and looks very different on the bottom versus the top.

The Bifrost, on the other hand, seems to me to be doing a very good job of intelligently filling in the gaps between the 8 points it was given. It comes up quickly, goes through the first one at the top (and keeps going a bit beyond), then turns around and heads down through the 2nd one at the top, up through the 3rd, then down through the fourth, all the way to the first point (fifth total) on the bottom, and similarly up and down through the others, finishing off by going up through the final fourth bottom (eighth total) point, and starting the waveform again.

This is what Schiit means, I think, when they say that the original samples are “retained”. The reconstruction filter works around and through those points to do the best interpolation it can do, optimizing not just for the frequency domain but also for the time domain.

And certainly the ADI-2 is not preserving the original samples here. If it were, it wouldn’t be so asymmetrical. Here it is again with the whole waveform on the screen:

adi-2-dac-96-12khz-square.jpg

I honestly have no idea what it’s doing here or why. Perhaps someone who has studied all the various filters out there can tell us? But whatever it’s doing, it’s not being faithful to the original 8 samples per waveform it was fed.

Let’s relax things just a little bit and see what happens with a 1kHz pulse waveform at 10 percent duty cycle. So, basically a pulse up as wide as one cycle of 10kHz (100 microseconds), then baseline, then an equally wide pulse down, then back to baseline, repeating 1,000 times per second:

adi-2-dac-96-1khz-pulse.jpg

bifrost-96-1khz-pulse.jpg

So, thankfully, the ADI-2 is doing something sane at least, even if there is the same ringing we saw before, but again, the Bifrost is being more faithful to the signal that went in. There's not much ringing, there is more evenness on top of the pulses, and tighter control.

OK, enough of things that are square. Let’s move on to something a lot easier, we hope, for a DAC to handle. A triangle wave:

adi-2-dac-96-1khz-triangle.jpg

bifrost-96-1khz-triangle.jpg

Both DAC’s are doing a nice job here, but you can see that the ADI-2 looks cleaner. And well, it is. This DAC has less noise than the instrumentation setup I was using! It’s a stellar piece of engineering, and a lot of thought went into its design and build. Here is the inherent noise of the ADI-2 as seen when connected directly to the oscilloscope:

adi-2-dac-noisefloor.jpeg

It’s like a mV or so, and that is probably just due to noise pickup on the RCA cables. Awesome performance! How does the Bifrost fare on this? Well, not great. You can already see from the width of the line on the triangle that it has more noise, but this really shows you how much:

bifrost-noise-floor.jpeg

About 5-10 times more noise. And this is into the 1 Meg load of the oscilloscope. Things are worse into 10kOhm, and when trying to reproduce an actual signal. Here are both DACs fed a -48dBFS 1kHz sine wave:

adi-2-dac-96-1khz-sine-minus48dbfs-noisefloor.jpg

bifrost-96-1khz-sine-minus48dbfs-noisefloor.jpg

Clean and clear showing by the ADI-2, but look at all that noise in the Bifrost! You can still clearly see (and hear) a 1kHz sine wave, but there’s a lot of noise around it. Not great. But clearly not the whole story!

So, again, this is another reason why I suspect that the performance of the ADI-2 against square waves and pulses is due to the design of the whole digital-to-analog conversion system and not due to some failing of the components or circuits. This is a fabulous piece of kit! So I suspect, honestly, that it is baked into the sigma-delta AKM chip, and couldn’t be taken out even if RME tried. [Edit] THIS IS TOTALLY WRONG! Not only could RME take this out, they did--you can take it out with the turn of a knob, even to the point of almost entirely removing the output reconstruction filter if you want (called "NOS"). Apologies for my ignorance of basic DSP![/Edit] I doubt that Schiit would go to all the trouble of taking a DAC chip that is not meant for audio at all and designing all the support circuits and stages around it to make it fulfil that role if they could have just used an off-the-shelf AKM chip!

In any case, let’s get back to transient response testing. How about a sawtooth wave? Let’s see how the two DAC’s will fare!

adi-2-dac-96-1khz-saw.jpg

bifrost-96-1khz-saw.jpg

Ouch! Again, I have to ask, what is going on with the ADI-2?

The midpoint is offset by almost four tenths of a volt, and it’s definitely not symmetrical and has got significant ringing going on. Bifrost, by contrast, has significantly less ringing, is symmetrical, and is centered on 0 volts across the RCA outputs. Pretty darn good showing! Even better than with the square wave, I would say.

Finally, let’s make sure that both DAC’s give us a sine wave output when fed a square wave at 1/4 the sample rate frequency. So both here are at 48kHz sample rate, and are being fed a 12kHz “square” wave.

adi-2-dac-48-12kHz-minus6dbfs-square-wave.jpg

bifrost-48-12kHz-minus12dbfs-square.jpg

Ignore the voltage scale; I took one capture at -6dBFS and the other at -12dBFS, and I had the volume turned down on the ADI-2—this was early on, and was a sanity check—but what you can see is that both are giving us sine waves despite being fed a square wave.

One way to explain that is to go back to the Fourier analysis and look at the partials: 12kHz is the fundamental, so the first overtone is 36kHz, which is greater than half the sample rate (it would violate the Nyquist criterion and be folded back into the spectrum at a lower frequency than the fundamental), and so it is removed, as are all higher-order harmonics.

But, this is kind of a misguided way to look at this, honestly. The DAC does not know what kind of wave was behind the samples that it is getting! It only sees the samples.

And when you have only four samples per waveform, what kind of information do you have, really? With that little information, you can’t differentiate between a square, a sine, or a triangle wave! So the best thing for the DAC to do is to reconstruct a sine wave, so as not to introduce any nasty higher order harmonics that weren’t there in the original music. Better to leave out something, than to create something that was never there.

This is the whole reason for the reconstruction / interpolation filter of a DAC (well, that and quantization noise). It’s because, honestly, our sample rates are probably too low! It’s easy to claim that our hearing kind of sucks up that high and there’s no loss of information, but I do not believe that that is a proven fact, for a number of reasons.

But either way you fall on that topic, right now, for better or worse, we’re stuck with most of our music being at 44.1 or 48 or, if we’re lucky, 96kHz, so we need our DAC’s to be absolutely as faithful to the samples that we do have as possible, while at the same time intelligently interpolating between them so as not to introduce terrible sounding and unmusical harmonics.

And from what I've seen so far, it is my opinion that the Schiit combined time-and-frequency domain filter seems to be doing a really great job at this. And that, I suspect, is the reason why the Bifrost Multibit sounds so good to my ears. Yes, the THD is worse than the ADI-2. Yes, the jitter is worse. Yes, the noise floor is worse. But all that is fairly picayune compared to what is better, to what it is getting right, which is huge.

[Edit]The Schiit proprietary filter seems to be doing pretty much what a linear-phase filter does, and what the ADI-2 "Sharp" filter does, so at this point I can't say that it is special, nor that it is doing a better job than the ADI-2 with the Sharp filter in place. I'm in the process of listening to the ADI-2 with the other filters, and will update this first post, but at the moment I think I do like it better with "Sharp" instead of "SD Sharp". Not sure how it will stack up, for me personally, against the Bifrost Multibit, but I will continue to listen.[/Edit]

OK, well I have a lot more to do around this whole subject, and I plan on measuring some more DAC’s, and on learning more about DAC’s and giving all this a lot more thought, but I was excited to share what I have so far.

And, excited to learn. If this has already been done; if anyone can shed more light on this; if anyone can suggest next steps to take; please do chime in! Constructive feedback is welcome! And I haven’t spent much time (or really any time) here over the past five years or so, and I didn’t even do a lot of searching (sorry) before posting this thread. So if I’m missing stuff, please pardon my ignorance.

OK, I will leave it here for now. I hope some readers find this interesting! Cheers!
Delta-sigmas get very good measurements, but when listening to them I hear digital glare and don't enjoy the listening. I agree with you: the Schiit multibit DACs measure poorly yet are very enjoyable to listen to. Their secret sauce is the optimization of the frequency domain and, most importantly, the time domain, which gives music a width and depth that is truly incredible.
 
Aug 6, 2023 at 9:19 AM Post #24 of 34
I wanted to post here because this is quite an interesting topic, and also one that confuses a lot of people. There are a fair number of misconceptions surrounding reconstruction, time domain performance, etc.

Just to be clear though, I'm not at all posting in an argumentative fashion! Getting into testing/measuring stuff and understanding more of how things work is something we should be encouraging, not discouraging like certain other forums!
More of this sort of stuff is good; I'm just joining in because it's an interesting discussion and there are a few points people may not know.
For me--to my ears--it just wasn’t as good as the Bifrost. It was good, no doubt. Just not as good. For me.

Even with EQ helping things out I still preferred the listening experience through my Schiit stack. (And no, it wasn’t due to volume differences, I don't think.)

To my ears, the ADI-2 just didn’t sound as good. It was more fatiguing, and less enjoyable, less realistic on acoustic music, flatter, and astonishingly, somehow also less detailed?!? How could that be?!? It measured so much better than the Bifrost! What was going on?
FWIW I had subjectively exactly the same experience.
I mean, yes, OK, jitter,
So this is the first thing to check on a DAC when it comes to time domain performance. Feeding it data that is accurate in the time domain and assuming it can reproduce it without affecting time domain performance is all well and good, but pointless if the DAC itself has an inaccurate timing reference.

To check this we effectively turn the DAC into a clock divider by playing a signal at exactly 1/4 the sample rate: 11.025kHz for 44.1kHz, or 12kHz for 48kHz. (Note: the proper J-Test signal is more complex, but that's for reasons relating to how AES/SPDIF works and is outside the scope of this post.)
By asking a DAC running on a 12.288MHz clock (256 x 48kHz) to play a 12kHz signal, we are effectively turning it into a 1024x clock divider, and we can see how clean the output is.
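As a rough illustration, here is a sketch that generates the simplified version of that test signal (a plain 1/4-fs sine rather than the full J-Test) and looks at its spectrum; the FFT length and level are arbitrary choices:

[code]
import numpy as np

fs = 48_000
f0 = fs // 4                    # 12 kHz: exactly four samples per cycle
n = 1 << 16                     # power-of-two length so 12 kHz lands on an exact FFT bin

t = np.arange(n) / fs
tone = 0.5 * np.sin(2 * np.pi * f0 * t)   # simplified tone (the real J-Test adds a low-level LSB square wave)

spectrum = np.abs(np.fft.rfft(tone * np.hanning(n)))
spectrum_db = 20 * np.log10(spectrum / spectrum.max() + 1e-12)
freqs = np.fft.rfftfreq(n, 1 / fs)

# Played through a DAC and re-captured, an ideal clock leaves only the 12 kHz peak;
# correlated jitter shows up as discrete sideband spikes around it.
print(f"peak bin: {freqs[np.argmax(spectrum_db)]:.0f} Hz")
[/code]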

1691327385714.png


Any 'deterministic' (i.e. repeatable/correlated) jitter components show up as small spikes on either side of the main signal, and we also want to make sure the 'stem' is thin and precise. Some devices will have more of a 'spreading out' near the bottom, which indicates higher levels of random phase noise.

1691327507545.png



So, start simple. Use a tone generator program to make digitally perfect square, saw, pulse, and triangle waves and feed them into both DAC’s to see if there would be differences, and to look not on a spectrum analyzer, where you see frequency vs dB, but on an oscilloscope where you see voltage vs time. Maybe then I could start to see the differences that I thought I was hearing? I felt it was probably a long shot, but I still wanted to do the work to find out.
This is where most people have a bit of a misunderstanding. DACs work on the basis of Nyquist reconstruction, and unless you have an infinite sample rate it's not possible to have a 'perfect' square wave. In fact, if you are outputting a 'perfect' square wave from a sample-rate-limited DAC, something is wrong!

The Nyquist theorem states we can perfectly reconstruct the original analog waveform, with a bandwidth of up to half the sampling rate, IF we perfectly band-limit, i.e. use a filter to remove any erroneous content above half the sampling rate.

A 'true' square wave requires infinite bandwidth, and therefore generating one from 44.1kHz or any other lower-sample-rate PCM format requires an improper filter, which will then cause other issues.

The simplest and most important aspect to remember is that ringing is the result of a filter removing 'illegal' content, and it SHOULD be there. Absence of ringing means the filter is ineffective and will cause other issues. Additionally, ringing only occurs in the presence of an illegal signal and thus with most real music it simply doesn't happen. The only way to have no ringing is to allow all the erroneous content to remain, which is bad, and negatively impacts time domain performance.
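As a rough illustration of that point, here is a sketch that takes a near-ideal square wave, brick-wall band-limits it to the CD Nyquist frequency, and shows ringing appearing purely from the removal of the 'illegal' content; the rates are arbitrary illustration values:

[code]
import numpy as np

fs_hi, fs_cd, f0 = 2_000_000, 44_100, 1_000    # high "analog-like" rate, CD rate, 1 kHz square

t = np.arange(4 * fs_hi // f0) / fs_hi          # exactly four cycles at the high rate
square = np.sign(np.sin(2 * np.pi * f0 * t))    # near-ideal square wave (stand-in for the analog signal)

# Brick-wall band-limit to the CD Nyquist frequency by zeroing FFT bins above 22.05 kHz.
spectrum = np.fft.rfft(square)
freqs = np.fft.rfftfreq(len(square), 1 / fs_hi)
spectrum[freqs > fs_cd / 2] = 0
bandlimited = np.fft.irfft(spectrum, n=len(square))

print(f"original peak: {square.max():.2f}   band-limited peak: {bandlimited.max():.2f}")
# The band-limited copy overshoots and ripples near every edge (Gibbs ringing), even
# though nothing was added: the ringing appears purely because the "illegal" ultrasonic
# content that made the edges instantaneous has been removed.
[/code]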
A square wave as described using individual sines can be shown most elegantly in the gif below:
Fourier_series_square_wave_circles_animation.gif


We can actually see this in practice by doing things backward, recording a real analog square wave with an ADC and looking at how it changes depending on the sample rate we set the ADC to.

For example, here's what it looks like if we output an analog-generated 1kHz square wave and observe it with something like a high-bandwidth oscilloscope or an ADC with very high bandwidth; in this case the ADC has a sample rate of 2.5MHz for an effective bandwidth of about 1.25MHz:

1691324545680.png



So what happens if we keep the same analog signal, but just reduce the bandwidth of the ADC so that it's only capturing the first, say, 96kHz (192kHz sample rate)?

1691324831406.png


Now we are starting to see some 'ringing' on the square wave, even though it doesn't exist on the actual signal. This is simply because the ADC filter is, as the name implies, 'filtering out' content above 96kHz, and therefore we have a reduced bandwidth with which to describe the square wave. We can describe pure DC/flat signals at any sample rate, but we cannot describe that instantaneous rise/fall at the start/end of a square wave, because that exceeds the bandwidth allowed by our sample rate.

If we go down to 44.1kHz it becomes more drastic, and the sloping of the square wave edges becomes clearer:

1691324957832.png


And in fact if we go down even further, to a sample rate where not even the first odd harmonic is captured (a 4kHz sample rate), then it just shows as a sine wave:
1691325072025.png


The truth of the matter isn't that the square wave is changing; it's exactly the same. It's just that when you capture it with limited bandwidth you cannot reproduce all of the components required to have a 'non-ringing' square wave.
With 44.1kHz info, the image below IS a perfect square wave. The ringing is there simply because neither the DAC nor the ADC is 'making up' stuff above 22.05kHz.

1691325453345.png


So what about DACs then? Surely being able to produce a square wave is good, right? Well, no. Because, as mentioned, if you're playing from 44.1kHz info, it means your DAC is basically making up stuff above 22.05kHz.
Let's take the Schiit Bifrost 2/64 for example, as this has a NOS or 'non-oversampling' mode, which means there is no digital filter applied to the input data.

If we play a 44.1kHz signal that describes a square wave, this is the result in NOS:
AudioPrecision.APx500_jWHAYeXxHi.png


It LOOKS like a very close representation of a square wave, right? Well, yeah, but remember, none of those components above 22.05kHz should actually be there. They are there simply because of the sample-and-hold/unfiltered nature of NOS playback.
And so we can see how ineffective this filtering is by playing something like white noise at 44.1kHz and seeing how much 'extra' stuff shows up above 22.05kHz.

1691326108390.png


Quite a lot! Remember, a perfectly reconstructed 44.1kHz white noise output should look something like this, with instant attenuation of everything above 22.05kHz:

1691326148265.png



And we can also see how unfiltered playback causes problems when we look at other signal types. Take a regular old 15kHz sine wave for example:
1691326210592.png


The 15kHz sine is there, but so is a huge amount of unwanted imaged content caused by the lack of an effective filter.
If we put the DAC back into OS mode, so that the filter properly removes these images, then we get this:

1691326279570.png


Some normal distortion products, as would exist on any DAC, but the signal itself is just the intended 15kHz sine and nothing else, resulting in the proper sine output.

Time domain performance has not been worsened; it has been enhanced!
So in terms of how to get the best possible time domain performance, we need to remember that the time domain and frequency domain are inherently linked. We want to have as much of the 'intended' signal as possible (i.e. as close to 22.05kHz as we can possibly get), but absolutely nothing else.

If we have erroneous content remaining above 22.05kHz, that actually interferes with and degrades the time domain performance. In fact we can even see this on the jitter test. If we run it NOS, the extra content above 22.05kHz that should not be there interferes, and so we see a poorer result:
1691327835440.png

Whereas if we have a proper filter, it's fine:
1691327870432.png


But if we attenuate/remove stuff UNDER 22.05kHz, we are also losing timing info that SHOULD be included. So DACs with filters that roll off before 22.05kHz are also throwing away timing info.

The solution is to implement a filter that, where possible, has no attenuation of anything under 22.05kHz but attenuates fully, and as steeply as possible, at 22.05kHz. This, however, requires a fair bit of computing power, and so only a few DACs have implemented 'high performance' reconstruction filters, Chord and Ferrum being examples. But you can take things even further with software like HQPlayer and PGGB.
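To illustrate the taps-versus-steepness trade-off, here is a sketch using a generic windowed-sinc design in SciPy; it is not the filter used in any of the products named above, and the tap counts and cutoff are arbitrary:

[code]
import numpy as np
from scipy.signal import firwin, freqz

fs = 44_100

def lowpass(numtaps, cutoff_hz=21_800):
    """Windowed-sinc low-pass; more taps -> a narrower transition band near Nyquist."""
    return firwin(numtaps, cutoff_hz, window=("kaiser", 12.0), fs=fs)

for numtaps in (63, 511, 4095):
    h = lowpass(numtaps)
    w, H = freqz(h, worN=1 << 15, fs=fs)
    mag_db = 20 * np.log10(np.abs(H) + 1e-12)
    edge = w[np.argmax(mag_db < -1.0)]   # first frequency where the response drops below -1 dB
    print(f"{numtaps:5d} taps: flat (within 1 dB) to ~{edge:7.0f} Hz, "
          f"level near 22.05 kHz ~{mag_db[-1]:6.1f} dB")
# With few taps the filter either rolls off well below 20 kHz or is still barely
# attenuating at 22.05 kHz (so images leak through); only very long filters can stay
# flat close to Nyquist and still be strongly attenuated by 22.05 kHz.
[/code]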
 
Aug 6, 2023 at 9:39 AM Post #25 of 34
This should not manifest itself audibly in any meaningful way, correct? I understand that there are phenomena that can be measured, conceptualized and theorized about, but in a practical sense, if the effects are really not audible, these are in essence just academic discussions.

And do not take that to mean that I see no value in such discussion, I absolutely do, but again, in terms of actual effect while listening, is anyone suggesting such considerations are audibly obvious?
 
Aug 6, 2023 at 10:07 AM Post #26 of 34
This should not manifest itself audibly in any meaningful way, correct? I understand that there are phenomena that can be measured, conceptualized and theorized about, but in a practical sense, if the effects are really not audible, these are in essence just academic discussions.

And do not take that to mean that I see no value in such discussion, I absolutely do, but again, in terms of actual effect while listening, is anyone suggesting such considerations are audibly obvious?
Depends on what specifically you're referring to.

Right now, research on the audibility of jitter and other time domain aspects is slightly inconclusive, with different studies coming to different conclusions; in fact the main jitter study that is usually pointed to used only 9 subjects, and some areas of its methodology could definitely be improved upon.
(There are also considerations such as more modern DACs and their improved noise shapers potentially allowing for a lower audibility threshold for jitter.)

But also, research seems to indicate that whilst we indeed, as commonly understood, cannot hear frequency domain content above 20kHz, our auditory system does not abide by the Fourier uncertainty principle (meaning our ability to perceive time domain differences is not directly linked to the frequency domain bandwidth required to describe them), and we may be able to hear time domain differences that would require a bandwidth of around 80kHz to describe. (Some have even reported differences as small as 6.9µs, which would require a 144kHz bandwidth to describe accurately.)

Some of these studies (including the 1998 jitter study) also state that trained/practiced listeners achieved better results than untrained listeners, so it's not out of the question that the ability to discern smaller time domain differences can be learned.

As a result, whilst we cannot hear the frequency domain content of stuff above 20kHz, there is evidence to suggest that the time domain impact of the presence of incorrect content above 20kHz (or indeed the lack of any content there, given that we primarily use 44.1kHz for music) is indeed audible.
A meta-analysis of studies conducted on this topic can be found here and concludes:

"Results showed a small but statistically significant ability of test subjects to discriminate high resolution content, and this effect increased dramatically when test subjects received extensive training. This result was verified by a sensitivity analysis exploring different choices for the chosen studies and different analysis approaches. Potential biases in studies, effect of test methodology, experimental design, and choice of stimuli were also investigated. The overall conclusion is that the perceived fidelity of an audio recording and playback chain can be affected by operating beyond conventional resolution."
 
Last edited:
Aug 6, 2023 at 10:48 AM Post #27 of 34
But if we attenuate/remove stuff UNDER 22.05kHz, we are also losing timing info that SHOULD be included. So DACs with filters that roll off before 22.05kHz are also throwing away timing info.

The solution is to implement a filter that, where possible, has no attenuation of anything under 22.05kHz but attenuates fully, and as steeply as possible, at 22.05kHz. This, however, requires a fair bit of computing power, and so only a few DACs have implemented 'high performance' reconstruction filters, Chord and Ferrum being examples. But you can take things even further with software like HQPlayer and PGGB.
Skipping the tutorial and going to the bottom line: assuming all recorded content is 'legal' according to sampling theory, we don't need any additional filtering during decoding. Filtering is only needed during the oversampling process. There shouldn't be any talk about bandwidth limitation and other such things when we simply decode what comes in and are happy with what we hear.

It doesn't matter how much garbage we see on the FFT plot when it sounds good.
:)

A note on the quoted part. We have had this already; it is our past. A filter with a 22.05kHz cut-off frequency is called in the scientific literature a "half-band" filter. It was implemented in the Philips SAA7220 digital filter chip (and its derivatives) and was responsible for the 'vintage' sound for many years. The only reason for choosing half the sampling frequency as the cut-off is that the same stop-band attenuation can be achieved using half the number of taps.

Actually, it is better to start brick-wall filtering around 20kHz, so that maximum attenuation is already reached at the half-band frequency. It means that designers do not really worry about timing accuracy so much as other things, or else the highlighted statement is not true.
 
Aug 6, 2023 at 10:59 AM Post #28 of 34
Skipping the tutorial and going to the bottom line: assuming all recorded content is 'legal' according to sampling theory, we don't need any additional filtering during decoding
Yes we do; the presence of ringing and the presence of imaged/aliased content are separate issues.
See the 15kHz sine above for an example: perfectly legal content, but it still needs a proper reconstruction filter or the output won't be correct.

It doesn't matter how much garbage we see on the FFT plot when it sounds good.
True. Lots of people like NOS or slower filters; that's totally fine.
What one subjectively prefers is up to the listener. But we were discussing time domain accuracy in this post.

Actually, it is better to start brick-wall filtering around 20kHz, so that maximum attenuation is already reached at the half-band frequency
This is true in most practical situations, where compute power does not allow for extremely steep filters. But the steeper your filter can be (requiring more compute power), the closer to the Nyquist frequency you can attenuate. And the closer to the Nyquist frequency you attenuate, the more intact high-frequency info (and therefore timing info) you retain.
 
Aug 6, 2023 at 1:17 PM Post #29 of 34
Hi GoldenOne!

Thanks for your posts! I don’t have the time to respond in full at the moment, and I’m replying here on the virtual keyboard on my iphone so I’m keeping this brief.

But for now, there’s one thing I need clarification on from your post.

You say

“ And we can also see how unfiltered playback causes problems when we look at other signal types. Take a regular old 15kHz sine wave for example: “

And there’s an image of a very messed up waveform that should be a 15kHz sine wave.

This confuses me! There are absolutely no illegal frequencies in a pure 15kHz sine wave, and I'd expect a DAC fed a digitally created 15kHz sine wave sampled at 44.1kHz to output a voltage that broadly followed a 15kHz sinusoidal pattern.

If you looked closely at the voltage transitions on a fine timescale, you'd see the discrete steps in output voltage, but the picture of voltage vs. time you're showing is badly non-sinusoidal overall.

Can you please elaborate and explain?

What am I missing? Because I'm pretty sure my Denafrips Ares DAC in NOS mode outputs a very nice-looking 15kHz sine wave on my high-bandwidth oscilloscope.
 
Aug 6, 2023 at 1:35 PM Post #30 of 34
And there’s an image of a very messed up waveform that should be a 15kHz sine wave.

This confuses me! There are absolutely no illegal frequencies in a pure 15kHz sine wave, and I'd expect a DAC fed a digitally created 15kHz sine wave sampled at 44.1kHz to output a voltage that broadly followed a 15kHz sinusoidal pattern.

If you looked closely at the voltage transitions on a fine timescale, you'd see the discrete steps in output voltage, but the picture of voltage vs. time you're showing is badly non-sinusoidal overall.

Can you please elaborate and explain?
NOS operation (non-oversampling) means the DAC moves to the value described by the sample, then simply holds at that value until the next sample arrives, at which point it moves to the value of the new sample.

The image below shows a 15kHz sine wave as described by 44.1kHz samples. The samples are the green squares, with a proper sinc oversampling/interpolation represented by the green line, and a NOS (zero-order-hold) method shown by the red path.

1691342797305.png


You can see that the green line produces the smooth 15kHz sine wave, but that's because it has been reconstructed by filtering out the unwanted high-frequency content, which is why it does not just move directly from sample to sample and in fact goes above/below the max/min sample values too.

The NOS sine looks unusual because, whilst the 15kHz fundamental is there (as you can see on the FFT in the original post), so is all of the content above 15kHz, which then makes it look like this.

You cannot create a 'correct', smooth sine from so few samples by just doing sample-and-hold (you'll end up with what the red line shows above), or even via linear interpolation. You need proper sinc filtering/reconstruction, which is what most DACs do.
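Here is a small numerical sketch of that comparison, using SciPy's polyphase resampler as a stand-in for a proper sinc reconstruction filter; the rates and lengths are arbitrary:

[code]
import numpy as np
from scipy.signal import resample_poly

fs, f0, up = 44_100, 15_000, 16
n = 441                                            # 10 ms of signal
x = np.sin(2 * np.pi * f0 * np.arange(n) / fs)     # the 44.1 kHz samples

zoh = np.repeat(x, up)                             # "NOS"/zero-order hold: the red staircase path
sinc_like = resample_poly(x, up, 1)                # filtered interpolation: the smooth green line

def image_energy_fraction(y):
    """Fraction of (windowed) spectral magnitude sitting above 22.05 kHz."""
    spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    freqs = np.fft.rfftfreq(len(y), 1 / (fs * up))  # working rate is 705.6 kHz
    return spec[freqs > fs / 2].sum() / spec.sum()

print(f"zero-order hold, energy above 22.05 kHz:  {image_energy_fraction(zoh):.3f}")       # substantial images
print(f"sinc-like filter, energy above 22.05 kHz: {image_energy_fraction(sinc_like):.3f}")  # close to zero
[/code]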

What am I missing? Because I'm pretty sure my Denafrips Ares DAC in NOS mode outputs a very nice-looking 15kHz sine wave on my high-bandwidth oscilloscope.
So this will depend on specifically which version of the Ares you have. Denafrips has claimed 'NOS' capability for its DACs multiple times in the past, but thus far has not actually released a DAC with TRUE non-oversampling capability.

The original Ares 2, Terminator Plus, and presumably other models used linear interpolation in their 'NOS' mode, meaning they were oversampling, but literally just drawing a straight line from one sample to the next. As such, a 15kHz sine wave output looked like this:

1691343143355.png


Later on, such as with the Pontus 2 12th Anniversary, they updated things to use a sample-and-hold oversampling method, but it was still oversampling, and you could demonstrate this by the fact that there was still imaged content, just only above 768kHz.

I've got some more info on this topic at the end of this post if interested: https://goldensound.audio/2021/10/07/denafrips-terminator-plus-with-gaia-measurements/

It could also just be that you're not actually in NOS mode. If you output a 1kHz sine, for example, and it looks completely smooth (not stair-stepped), then you're not in any sort of sample-and-hold operation and therefore definitely not in NOS.

Basically, with any NOS DAC, you get that 'stair-stepped' output. To avoid it, you need to filter out the ultrasonic components that make up those steps, i.e. oversample with a proper filter.
 
