NOS DACs and upsampling

Apr 3, 2025 at 5:55 AM Post #61 of 74
Yes, personal preference plays a significant role here - and so does the playback chain. Objectively less accurate options can still be preferred, sometimes because they mask flaws, and other times because they emphasize certain characteristics. It’s the same reason why some people prefer tube gear, even though it introduces harmonic distortion (I use them too).

However, these preferences aren’t universal, and it’s difficult to generalize across all listeners. Ideally, I would hope that most of us lean toward a preference for accurate playback. A good example is the development of the Harman target curve - while the majority of listeners may prefer it over other alternatives, there’s still room to personalize it based on individual taste or the headphone.
Are you familiar with the ideas behind the design of the Riviera AIC-10?
https://www.rivieralabs.com/en/project/technology/
The idea of tuning a distortion profile to match the ear's self-distortion so that the brain can filter it out is quite fascinating to me. If it is a valid theory, it would mean the brain still prefers the lower-distortion sound, just after the brain's "preprocessing" rather than before.
 
Apr 4, 2025 at 9:48 AM Post #62 of 74
Are you familiar with the ideas behind the design of the Riviera AIC-10?
https://www.rivieralabs.com/en/project/technology/
The idea of tuning a distortion profile to match the ear's self-distortion so that the brain can filter it out is quite fascinating to me. If it is a valid theory, it would mean the brain still prefers the lower-distortion sound, just after the brain's "preprocessing" rather than before.
Interesting read, thanks.
 
Apr 5, 2025 at 4:23 AM Post #63 of 74
https://www.rivieralabs.com/en/project/technology/
The idea of tuning a distortion profile to match the ear's self-distortion so that the brain can filter it out is quite fascinating to me. If it is a valid theory …
It’s not a valid theory, but it is interesting marketing. It’s actually based on a common audiophile marketing technique dating back half a century or so, but it’s interesting to see the specific cherry-picking, misrepresentation and “lies of omission” employed in this particular example.

To appreciate that it’s just marketing, you only have to consider how recordings are created. If you find it “quite fascinating” and want to discuss the details of the science/pseudoscience in the article you posted, we’re not allowed to do that here; science/facts are only allowed in the Sound Science forum, so you’d need to post it there.

G
 
Apr 5, 2025 at 7:54 AM Post #64 of 74
It’s not a valid theory, but it is interesting marketing. It’s actually based on a common audiophile marketing technique dating back half a century or so, but it’s interesting to see the specific cherry-picking, misrepresentation and “lies of omission” employed in this particular example.

To appreciate that it’s just marketing, you only have to consider how recordings are created. If you find it “quite fascinating” and want to discuss the details of the science/pseudoscience in the article you posted, we’re not allowed to do that here; science/facts are only allowed in the Sound Science forum, so you’d need to post it there.

G
I think this subject is on-topic enough to discuss here, provided we don't drag on for 30 posts.
In order to establish common ground, I would like to ask you two questions:
1. Do you agree the ear generates its own distortion, increasing with SPL?
2. Do you agree the brain perceives a pure tone despite ear-generated distortion being present?

These two seem to be foundational to the theory and also the most easily verified.
 
Apr 5, 2025 at 8:07 AM Post #65 of 74
I think this subject is on-topic enough to discuss here, provided we don't drag on for 30 posts.
It doesn’t matter if it’s on topic; science/facts are only really allowed in the Sound Science forum.
In order to establish common ground, I would like to ask you two questions:
1. Do you agree the ear generates its own distortion, increasing with SPL?
2. Do you agree the brain perceives a pure tone despite ear-generated distortion being present?

These two seem to be foundational to the theory and also the most easily verified.
I’ll answer your questions briefly but any detail would require science/facts.

1. The human ear does generate quite a lot of distortion; some of it increases with SPL, some doesn’t, and in fact some actually decreases at higher SPL (up to a point).
2. The brain almost never perceives a pure tone because natural sounds and music don’t contain only a pure tone. However, some of the ear’s distortions are compensated by the brain and some aren’t.

G
 
Apr 5, 2025 at 9:01 AM Post #66 of 74
It doesn’t matter if it’s on topic; science/facts are only really allowed in the Sound Science forum.
Don't worry, I don't think anyone is going to snitch to the mods.
2. The brain almost never perceives a pure tone because natural sounds and music don’t contain only a pure tone. However, some of the ear’s distortions are compensated by the brain and some aren’t.
Interesting! Do you know which ones are compensated and more importantly which ones aren't?
 
Apr 5, 2025 at 9:23 AM Post #67 of 74
Interesting! Do you know which ones are compensated and more importantly which ones aren't?
Our brains are constantly compensating for what our ears are hearing. Most obviously, you’re constantly hearing a very loud rhythmic thumping (your heart beating). However, you can’t perceive it because your brain filters it out, although sometimes, when frightened, under stress or during heavy exercise, your brain doesn’t filter it out and you can perceive it. Some specific ear distortions aren’t compensated/filtered out by our brains and we are consciously aware of them, for example certain IMDs (called “Tartini tones”) and the equal loudness contours. In addition, our brains will very commonly invent their own distortions/tones (“the missing fundamental” for example) or even entire sounds.
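
To make the Tartini tone idea concrete, here is a minimal Python sketch (the two tone frequencies are just arbitrary examples, not anything from a real test) showing where the main combination tones land for a two-tone stimulus, plus the missing-fundamental case:

# Combination ("Tartini") tones arise from the ear's own nonlinearity
# when two tones are presented together. Example frequencies only.
f1, f2 = 1000.0, 1200.0                         # presented tones, f2 > f1

print("difference tone      :", f2 - f1)        # 200 Hz, the classic Tartini tone
print("cubic difference tone:", 2 * f1 - f2)    # 800 Hz, often clearly audible
print("summation tone       :", f1 + f2)        # 2200 Hz, much harder to hear

# Missing fundamental: a stack of harmonics with the fundamental absent
# is still heard at the fundamental's pitch.
harmonics = [400.0, 600.0, 800.0]               # all multiples of 200 Hz
print("perceived pitch      :", harmonics[1] - harmonics[0])  # 200 Hz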

G
 
Apr 5, 2025 at 10:17 AM Post #68 of 74
Our brains are constantly compensating for what our ears are hearing. Most obviously, you’re constantly hearing a very loud rhythmic thumping (your heart beating). However, you can’t perceive it because your brain filters it out, although sometimes, when frightened, under stress or during heavy exercise, your brain doesn’t filter it out and you can perceive it. Some specific ear distortions aren’t compensated/filtered out by our brains and we are consciously aware of them, for example certain IMDs (called “Tartini tones”) and the equal loudness contours. In addition, our brains will very commonly invent their own distortions/tones (“the missing fundamental” for example) or even entire sounds.

G
I knew about the missing fundamental but I'll look the Tartini tones up!

So it seems we agree that the ear generates its own distortions, and that the brain is good at filtering most of them out (but not all), but you disagree that you can present the ear with a fundamental + distortion pattern, produced by audio equipment mimicking that self-distortion, to take advantage of this filtering effect?
 
Apr 5, 2025 at 10:58 AM Post #69 of 74
So it seems we agree that the ear generates its own distortions, and that the brain is good at filtering most of them out (but not all),
I just gave two examples of distortions our brains don’t filter out; I don’t know off the top of my head how many more there are, and therefore whether our brains are “filtering most of them out”.
but you disagree that you can present the ear with a fundamental + distortion pattern, produced by audio equipment mimicking that self-distortion, to take advantage of this filtering effect?
Absolutely I disagree! You could hypothetically, but not in practice. The problem is that it’s variable; the “distortion pattern” is going to depend on various features of the ear: the pinnae, the ear canal size/geometry, the response to SPL and various other factors. These factors change with age, are different between men and women, are different between individuals and are different even between the two ears of the same individual. Hypothetically then, you’d need to somehow measure and figure out a “distortion pattern” transfer function for each ear of each individual and then apply that transfer function to each channel, which would only work with headphones. In practice, how would a tube amp do any of that?
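
Just to illustrate what applying a per-ear transfer function to each channel would even mean, here is a hypothetical Python sketch; the per-ear responses (h_left_ear, h_right_ear) are assumed measurements that do not exist as any real product feature:

import numpy as np

def apply_ear_profile(channel, ear_impulse_response):
    # Hypothetical: convolve one headphone channel with a measured
    # per-ear "distortion pattern" response. A single fixed response
    # cannot really capture ear distortion, which is nonlinear and
    # level-dependent - part of the problem described above.
    return np.convolve(channel, ear_impulse_response, mode="same")

# Each ear would need its own measurement, and it would only make
# sense over headphones:
# left_out  = apply_ear_profile(left_in,  h_left_ear)
# right_out = apply_ear_profile(right_in, h_right_ear)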

However, you’re missing the bigger picture, which I alluded to previously: you’re not just reproducing single tones from a signal generator, you’re reproducing audio recordings. How are those recordings created, and what determines the “distortion profile” already within the music recordings you’re trying to reproduce?

G
 
Apr 5, 2025 at 12:51 PM Post #70 of 74
Absolutely I disagree! You could hypothetically, but not in practice. The problem is that it’s variable; the “distortion pattern” is going to depend on various features of the ear: the pinnae, the ear canal size/geometry, the response to SPL and various other factors. These factors change with age, are different between men and women, are different between individuals and are different even between the two ears of the same individual. Hypothetically then, you’d need to somehow measure and figure out a “distortion pattern” transfer function for each ear of each individual and then apply that transfer function to each channel, which would only work with headphones. In practice, how would a tube amp do any of that?
Hmm, that makes a lot of sense. So the only way it would work is if there were some shared baseline that's roughly the same for each individual, and it would require the brain's filter to be forgiving enough not to need an exact match.
However, you’re missing the bigger picture, which I alluded to previously: you’re not just reproducing single tones from a signal generator, you’re reproducing audio recordings. How are those recordings created, and what determines the “distortion profile” already within the music recordings you’re trying to reproduce?
I think what you mean is that real music gets convolved with your ear's "self-distortion" function, whereas a reproduced signal gets convolved with both the amplifier's distortion profile and your ear's? That would then only work if the amp profile were some kind of eigenfunction, where convolving it with the ear profile doesn't change the harmonic structure.
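
In the purely linear "convolution" framing of that question, cascading two profiles just multiplies their frequency responses; a minimal numpy sketch with made-up toy responses (real harmonic distortion is nonlinear, so this only describes the linear part of any profile):

import numpy as np

h_amp = np.array([1.0, 0.05])           # toy "amp profile" impulse response
h_ear = np.array([1.0, 0.2, 0.05])      # toy "ear profile" impulse response

# Cascading linear systems: convolve the impulse responses...
h_total = np.convolve(h_amp, h_ear)

# ...which is equivalent to multiplying their frequency responses.
n = 8                                    # zero-padded FFT length
H_amp, H_ear, H_total = (np.fft.rfft(h, n) for h in (h_amp, h_ear, h_total))
assert np.allclose(H_total, H_amp * H_ear)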

Definitely agree that you can't see real music as a superposition of pure tones which then get processed individually by the ear/brain.

Thanks for your input, it is certainly food for thought.
 
Apr 5, 2025 at 2:10 PM Post #71 of 74
I think what you mean is that real music gets convolved with your ear's "self-distortion" function, whereas a reproduced signal gets convolved with both the amplifier's distortion profile and your ear's?
No, that’s not really what I meant. I was talking about what the signal you’re reproducing actually is: the music recording itself, how it is created and what it contains. The recording already has a “distortion profile”: microphones and mic pre-amps produce distortion, often deliberately; electric guitars are pretty much nothing but distortion, bass guitars almost the same; synth sounds typically have a lot of distortion; drum kits are massively processed/distorted; and then more distortion is typically added during mixing and mastering. Sixty years ago that was achieved using tape saturation, overdriven mic amps, compressors and limiters. These days we can be far more choosy, with a whole raft of highly configurable distortion plugins, transient shapers, modelled vintage gear and endless tools to manipulate the frequency content of both the audio and the added distortion (PEQ being an obvious example).

The important part here is: on what basis is all this manipulation and distortion being added/created? Obviously it’s being done by the musicians (in the case of e-guitars, basses and synth patches), by the mix engineer and by the mastering engineer, and they are incredibly choosy (almost to the point of being anal) about exactly what distortion is applied and how much, but their basis for all these choices is their own human hearing. In other words, if we’re just talking about a sort of general human “distortion profile”, then that is already baked into the recording that you’re reproducing, because a general human distortion profile was what the musicians/engineers were relying on when creating the recording’s content.

Take for example the equal loudness contours (I assume you’re familiar with them). Let’s say an amp manufacturer thinks to themselves: “I know, let’s create an EQ curve to compensate for the way we perceive music/sound, a sort of inverse of the equal loudness contour, e.g. a bit of a reduction around 3kHz because our hearing artificially boosts that frequency region, and a lot more bass and treble because our hearing rolls off a lot in those ranges.” Sounds like a good idea, but it would actually be a terrible idea, because the mix and mastering engineers have created the mix and master according to their hearing, so the recordings you’re playing already automatically contain a compensation for the equal loudness contour, otherwise they would have sounded way wrong to the engineers. An amp that actually did that compensation would effectively result in a playback where the equal loudness contour had been compensated for twice, and it would sound terrible (although you might find someone who likes it).
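
A toy numerical version of that double-compensation point, with made-up dB figures for a single low-bass band (Python):

# Made-up numbers for one low-bass band, all in dB.
hearing_rolloff = -10.0      # how much our hearing under-weights this band
mix_compensation = +10.0     # already balanced in by the engineer, by ear,
                             # i.e. baked into the recording
amp_contour_eq = +10.0       # an amp that also applies an "inverse contour" boost

print("recording as intended:", mix_compensation + hearing_rolloff)                   # 0 dB, sounds right
print("with the amp's EQ too:", mix_compensation + amp_contour_eq + hearing_rolloff)  # +10 dB, compensated twice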

The best thing you can do is just play back the recording with the best fidelity you can, and then the “distortion profile” will be as correct as practical. Incidentally, the amplifier’s distortion profile is irrelevant because in virtually all cases it’s inaudible; the only exception is one or two rare tube amps with such horrifically bad distortion that it’s actually audible (or user error, using the wrong amp for the task).
Definitely agree that you can't see real music as a superposition of pure tones which then get processed individually by the ear/brain.
Real music is of course just a superposition of pure tones, but we don’t perceive them as such; we just perceive sounds/instruments with a “timbre” rather than all the individual harmonics separately. The human ear is especially good at separating these sounds though; current evidence suggests we’re better at this listening task than any other animal, even those with more sensitivity and a greater frequency range.
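
For example, one “note” with a timbre is already a stack of pure tones; a minimal synthesis sketch in Python (the fundamental and harmonic amplitudes are made up):

import numpy as np

fs = 48000
t = np.arange(int(0.5 * fs)) / fs                 # half a second of samples
f0 = 220.0                                        # arbitrary fundamental

# One "note" = a superposition of the fundamental plus harmonics.
# The relative harmonic amplitudes are a big part of what we hear as timbre.
amplitudes = [1.0, 0.5, 0.3, 0.2, 0.1]
note = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
           for k, a in enumerate(amplitudes))

# We perceive this as a single 220 Hz sound with a particular timbre,
# not as five separate sine waves.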

G
 
May 3, 2025 at 6:12 PM Post #73 of 74
Is there really anyone who can't hear above 20kHz but could reliably pass ABX tests between DAC filters that don't roll off before 20kHz?
This question should always be accompanied by the test setup. I can very easily differentiate poly-sinc-hb-xs, poly-sinc-hb-s...poly-sinc-hb-l with Holo May -> Holo Bliss -> Susvara. The sound gradually loses the edge of transients, but gains macro dynamics and space, with every step towards the longer filters. However, I tried HQP with Qutest -> Phonitor E -> Utopia and I wasn't blown away. I also have an SMSL DL200 for TV use and at least its internal filters all sound the same, but I haven't tested it with HQP. For me personally, it wasn't until I got the May that I understood what the fuss is about.

My current view is that if one wants to test filters properly, the minimum requirements to achieve ceteris paribus are:
  • DAC is a good R2R DAC with at least 16 bits of resolution
  • HQPlayer needs to be given the test music bit perfect
  • Headphone amp at the very minimum A90 Discrete level
  • Balanced connections
  • Good planar or estat headphones
    • Dynamic headphones don't show those differences the same way, as they are slower
  • Test group has long experience with TOTL setups
    • While the difference is audible (I actually tested different filters with my childhood friend one weekend; he has zero experience with headphone hi-fi and he immediately heard the difference), for more reliable testing people should know what to listen for
      • Compare it to a situation in which someone like me goes to wine tasting
        • In a previous job I went to a wine tasting where we got some really nice and expensive wines, and it all tasted the same to me, while more experienced people had very clear opinions
IMO when comparing filters we are in the deep end of this hobby. They do not make sub-$2000 setup owners go wow. Hearing those differences demands very specific qualities from the chain (like bit-perfect reproduction of what HQP produces, which is not the case with pretty much any delta-sigma DAC) and also very high quality components in general. Filters are like the downforce spoiler on a car: it's just a gimmick on 99% of cars, but for that 1% it makes all the difference on a track if driven right.
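
Whatever the chain, "reliably hearing it" ultimately has to mean beating chance in blind trials; here is a minimal Python sketch of the usual binomial check (the 16-trial / 12-correct threshold is just a common rule of thumb, not anything specific to HQPlayer or these filters):

from math import comb

def abx_p_value(correct, trials):
    # One-sided probability of getting at least `correct` answers right
    # out of `trials` ABX trials by pure guessing (p = 0.5 per trial).
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(12, 16))   # ~0.038 -> commonly accepted as a "pass"
print(abx_p_value(9, 16))    # ~0.40  -> indistinguishable from guessing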
 
May 4, 2025 at 5:32 AM Post #74 of 74
Is there really anyone who can't hear above 20kHz but could reliably pass ABX tests between DAC filters that don't roll off before 20kHz?
If we take that as two questions, the first part effectively means all adults and probably all adolescents as well. Some very young children (babies to pre-school age) can hear up to 22kHz or so, although their ability to identify what they’re hearing (their “listening skills”) is poor compared to adults, as listening skills come with experience and training. By their late teens, the cut-off is around 17.5kHz and exceptionally up to about 19kHz. Taking the average of all adults up to retirement, it’s around 16kHz.

The second part, passing an ABX between filters that don’t roll off before 20kHz, is not possible and indeed there is very robust/reliable evidence demonstrating not only that it’s impossible but why. However, we have to be very careful about “conditions” and this is where it can get a little complex, although only in the audiophile world (rather than the wider audio world). Firstly and most obviously, we have to ensure that it is actually the filter we’re testing. E.g. the process of up/over sampling (which is what requires an anti-image filter) will typically also include the application of dither, so we need to ensure we’re not testing the audibility of the dither rather than the filter. Another example would be a failure to provide adequate headroom for ISPs (inter-sample peaks) when up/over sampling, which would easily cause audible distortion.

Secondly, it is possible to design a filter with other artefacts that may be audible, for example some huge phase anomaly lower down, in the audible band. That would have to be deliberate though, and there’s no reason anyone would design such a filter except for marketing purposes.

Another condition is that the DAC used for the test is functioning correctly, and this is where we can run into issues specific to the audiophile world, because there are some DACs only marketed to the audiophile world that effectively do not function correctly. An example is R2R NOS DACs. R2R DACs (and ADCs) do not exist in the pro-audio and wider audio world, and NOS DACs do not function correctly: they typically do not perform an essential/required step in digital to analogue conversion. It is possible/likely that the presence of “images” above 20kHz (from a failure to perform the required anti-image filtering) will cause IMD in the audible band, in the analogue section of the DAC itself or further down the line (amp or transducers). Therefore a filter rolling off somewhere beyond the Nyquist frequency would allow some amount of “images” through and possibly cause audible IMD.
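
That last point is easy to see numerically; a minimal numpy sketch (hypothetical 1 kHz test tone at 44.1 kHz, 4x rate) of how NOS-style sample repetition leaves images around multiples of the original sample rate, which a proper anti-image filter would remove:

import numpy as np

fs_in, ratio = 44100, 4
fs_out = fs_in * ratio
f_tone = 1000.0

n = 4096
x = np.sin(2 * np.pi * f_tone * np.arange(n) / fs_in)   # 1 kHz tone at 44.1 kHz

# NOS-style "upsampling": repeat each sample (zero-order hold),
# i.e. no anti-image filtering at all.
x_nos = np.repeat(x, ratio)

spectrum = np.abs(np.fft.rfft(x_nos * np.hanning(len(x_nos))))
freqs = np.fft.rfftfreq(len(x_nos), 1 / fs_out)

# The strongest ultrasonic content sits at 44100 +/- 1000 Hz (and around
# 88200, etc.): the images a reconstruction filter is supposed to remove.
for f_img in (fs_in - f_tone, fs_in + f_tone):
    k = int(np.argmin(np.abs(freqs - f_img)))
    print(round(freqs[k]), "Hz:", round(20 * np.log10(spectrum[k] / spectrum.max()), 1), "dB")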

So, the correct way to test filters is to eliminate the first two conditions above and, contrary to rayon’s post, use a correctly functioning DAC (to cover the last condition above), not a TOTL DAC, because unfortunately R2R/NOS DACs are often incorrectly considered (by the audiophile community) to be TOTL DACs.

G
 
