Apodizing filter

Mar 8, 2024 at 5:45 PM Post #211 of 426
No, it is just one parameter, which is why I said everything else being equal. You can have a nimble transducer, but if you have a poorly designed magnetic circuit and introduce local modes, then distortion is assured. If you have two of the same design with the same distortion issues, the nimble one will be more transparent in spite of the flaws.

Then this is something manufacturers should be doing R&D on. It probably isn't something a consumer can use to make a decision about which product to buy.

Why is it so?

Because once something is transparent, transparent by a yard is the same as transparent by a mile. If human ears can't hear a difference, it's transparent. The measurements may be different, or the theory might be different, but the sound is the same. It can't be more transparent than transparent.
 
Mar 8, 2024 at 9:50 PM Post #212 of 426
Then this is something manufacturers should be doing R&D on. It probably isn't something a consumer can use to make a decision about which product to buy.
Because once something is transparent, transparent by a yard is the same as transparent by a mile. If human ears can't hear a difference, it's transparent. The measurements may be different, or the theory might be different, but the sound is the same. It can't be more transparent than transparent.
But, if there is not an established method to measure transparency that is published and researched, where does one draw the line on audibility? Nothing is that clear cut; there is always a fuzzy boundary, and the width of the boundary may depend on what is being tested, on the system as a whole and on the listener, so it is best looked at as probabilities rather than a hard threshold. For example, when do you stop hearing a difference in tonal characteristics? It will depend on a lot of factors, including the system reproducing the sound, not just the transducer. It is not as simple as saying that if the change is below 1dB no one can hear it; what if it is 1.1dB or 0.9dB? What if the AC is running? Trained listener vs untrained, and so on.
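To illustrate what I mean by probabilities rather than a hard threshold, here is a toy sketch (the 1dB midpoint and the slope are invented for illustration, not taken from any real listening test): detection data are usually summarised as a psychometric function, i.e. the probability of detecting a change versus the size of that change.

import numpy as np

def detection_probability(delta_db, midpoint_db=1.0, slope=4.0):
    # Toy psychometric function for a 2-alternative forced-choice test:
    # rises from chance (0.5) towards 1.0 as the level change grows.
    # midpoint_db and slope are made-up illustration values.
    p = 1.0 / (1.0 + np.exp(-slope * (delta_db - midpoint_db)))
    return 0.5 + 0.5 * p

for d in (0.5, 0.9, 1.0, 1.1, 2.0):
    print(f"{d:.1f} dB change -> detection probability ~ {detection_probability(d):.2f}")

There is no single dB value where the curve jumps from "inaudible" to "audible"; 0.9dB and 1.1dB just sit at slightly different points on it.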
 
Mar 8, 2024 at 10:10 PM Post #213 of 426
But, if there is not an established method to measure transparency that is published and researched

There are lots of thresholds of transparency that have been established through testing. What made you think transparency is something that can't be defined? You just isolate the variable and conduct controlled ABX tests.

Google JDD (just detectable difference) and threshold of audibility and you'll certainly find some tests to read about. Join the AES (Audio Engineering Society) and access their library of studies and you'll find lots of them.

If there's a specific threshold you're interested in, ask here. Someone might know.
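As an aside, the statistics behind such ABX tests are simple enough to sketch. Assuming a made-up session of 16 trials with 14 correct answers, a one-sided binomial test against pure guessing (p = 0.5) looks like this:

from scipy.stats import binomtest

# Hypothetical ABX session: 16 trials, 14 correct identifications.
# If there were no audible difference, each trial would be a 50/50 guess.
result = binomtest(k=14, n=16, p=0.5, alternative="greater")
print(f"p-value: {result.pvalue:.4f}")  # ~0.002, i.e. very unlikely to be guessing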
 
Last edited:
Mar 8, 2024 at 10:33 PM Post #215 of 426
Great. Those search terms should work there too.
 
Mar 9, 2024 at 6:45 AM Post #216 of 426
There are lots of thresholds of transparency that have been established through testing. What made you think transparency is something that can't be defined? You just isolate the variable and conduct controlled ABX tests.

Google JDD (just detectable difference) and threshold of audibility and you'll certainly find some tests to read about. Join the AES (Audio Engineering Society) and access their library of studies and you'll find lots of them.

If there's a specific threshold you're interested in, ask here. Someone might know.
@KMann isn't saying anything controversial.
What do you think of the 20kHz limit given for humans? Is that a clear threshold applicable to all people of all ages under any conditions? No, it's BS. Most people can't come close at reasonable listening levels, while kids can often go beyond. What it is, is a nice round number. The threshold for a population is more realistically shown as a range of frequencies. And with more variables, and less tested conditions, I think "fuzzy boundaries" is a fine and often even generous image.
 
Mar 9, 2024 at 7:11 AM Post #217 of 426
I’ve already said that the range of human hearing extends downward from the thresholds. People have varying degrees of hearing degradation. They don’t hear very far beyond the established JDDs, because the threshold is best case. A 12 year old girl might be able to hear a tiny bit beyond 20kHz, but that would be exceedingly rare and not representative. By the time she hits 15, she will be in the normal range with the rest of us. The best case boundaries aren’t really fuzzy; they’re more like brick walls.

The problem with audiophiles is that the best case thresholds are never good enough for them. Maybe 99.9999% of the population can’t hear above 20kHz, in fact many can’t hear beyond 16kHz… but they don’t identify with the majority. They convince themselves that owning a fancy expensive stereo gives them the superhuman ability to hear better than a .0001% 12 year old girl. 20kHz isn’t good enough for them. So they go out and buy HD tracks with a 96kHz sampling rate as if they could hear a full additional octave. Then they start thinking about how 192kHz must sound considerably better because it’s such a big number… rinse and repeat across every aspect of sound fidelity.

Too much is never enough.

I’m quite satisfied with a stereo system that meets or exceeds the thresholds of best case human hearing. If it goes to 20 and I can only hear to 16 or 17, that is fine. A little overkill is nice so everyone can enjoy the sound, even one-in-a-million 12 year old girls.
 
Last edited:
Mar 10, 2024 at 8:19 AM Post #218 of 426
A case can be made for an apodizing filter where the filter cuts off at 20kHz, opening up the possibility of reducing any aliased frequencies that folded back due to the ADC's filters not being good enough. So, objectively it could be argued that an apodizing filter can 'correct' for some of the ADC errors.
True, but of course that is just an explanation of how a filter could theoretically correct for some recording/ADC errors. It is conditional on an ADC actually having filters that are not good enough and actually having enough content in the signal we’re recording that is above the Nyquist freq to cause significant aliasing in the 20kHz - 22.05kHz band. The poster who we’re questioning (@HIPlayerDSD) has not yet provided any examples of recordings that actually fulfil these two conditions.
The flip side to this is, given it is above the audible range, will it matter subjectively?
That’s where it gets complicated, or at least the marketing does (!), because it hinges on the semantics of what an anti-imaging/reconstruction filter should actually do. For me, it seems obvious that it should reduce images and reconstruct the audible band, which has been relatively trivial to achieve for many years. So, if there is some audible difference, it is because the filter is effectively faulty.
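For illustration, the kind of filter being described (pass the audible band, reject everything from below the original Nyquist frequency upwards) is easy enough to sketch. This is just a toy Kaiser-windowed design of my own in scipy, not any manufacturer's actual filter, and of course it only "corrects" anything if those aliases are actually present in the recording:

import numpy as np
from scipy import signal

fs_out = 8 * 44100            # 8x oversampled rate inside a hypothetical DAC
passband_hz = 20000           # keep the audible band intact
stopband_hz = 21000           # start rejecting below the original 22.05kHz Nyquist

# Kaiser-windowed sinc design, ~80dB stopband attenuation, linear phase.
numtaps, beta = signal.kaiserord(ripple=80.0,
                                 width=(stopband_hz - passband_hz) / (0.5 * fs_out))
taps = signal.firwin(numtaps | 1,                       # odd length -> symmetric, linear phase
                     cutoff=(passband_hz + stopband_hz) / 2,
                     window=("kaiser", beta), fs=fs_out)

w, h = signal.freqz(taps, worN=1 << 16, fs=fs_out)
for f in (1000, 20000, 21000, 22050, 24000):
    idx = np.argmin(np.abs(w - f))
    print(f"{f:5d} Hz: {20 * np.log10(np.abs(h[idx]) + 1e-12):7.1f} dB")

Anything an imperfect ADC folded into the 20kHz - 22.05kHz region gets attenuated along with the images, at the cost of starting the roll-off right at the top of the audible band.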

G
 
Mar 10, 2024 at 9:11 AM Post #219 of 426
I think there is a case to be made that messing around with this stuff is essentially signal processing, like equalization or crossfeed. You aren’t simply reproducing the signal accurately, you’re making a subjective decision to “enhance” it. The argument that you’re correcting errors in the original recording isn’t really improving fidelity, because the job of a DAC is to simply translate digital to analog faithfully. It isn’t supposed to alter the sound. So you aren’t correcting errors, you’re manipulating the sound.

If people choose to use filters that make an audible difference, that is fine for them. Personally, I think the DAC is the wrong point in the chain to be applying signal processing, but they can monkey around with whatever they want with their own system. But they can’t really claim that their audible filter is somehow more accurate than a standard transparent one. Of course, that’s exactly what they do, so go figure.
 
Last edited:
Mar 10, 2024 at 10:26 AM Post #220 of 426
The argument that you’re correcting errors in the original recording isn’t really improving fidelity, because the job of a DAC is to simply translate digital to analog faithfully. It isn’t supposed to alter the sound.
I agree with this

Personally, I think the DAC is the wrong point in the chain to be applying signal processing,
If you really narrow the scope of what signal processing means to match your previous statement, then yes, because applying a digital filter for the purpose of removing out of band images is still signal processing.

For me, it seems obvious that it should reduce images and reconstruct the audible band,
I agree with this. However, showing the relationship between reconstruction accuracy in the audible band and reducing the out of band images is not a trivial one. One could argue a NOS DAC is enough, as you do not hear the out of band images, and all one needs to do is a simple EQ to correct the droop in the higher frequencies of the audible band.
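For context, the droop referred to here is the zero-order-hold (sinc) roll-off of a NOS DAC running at 44.1kHz; a quick back-of-the-envelope check using the standard textbook formula (nothing specific to any particular product):

import numpy as np

fs = 44100.0  # NOS DAC output rate, no oversampling

def zoh_droop_db(f_hz):
    # Zero-order-hold magnitude response: |sinc(f/fs)| expressed in dB.
    return 20 * np.log10(np.abs(np.sinc(f_hz / fs)))

for f in (1000, 10000, 15000, 20000):
    print(f"{f:5d} Hz: {zoh_droop_db(f):6.2f} dB")
# roughly -3.2dB at 20kHz: this is the high-frequency droop a "simple EQ" would target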
 
Last edited:
Mar 11, 2024 at 3:13 AM Post #221 of 426
However, showing the relationship between reconstruction accuracy in the audible band and reducing the out of band images is not a trivial one.
I didn’t mean trivial in the sense that it’s simple but in the sense that it’s been possible for around 30 years or more, is well established/routine and is accomplished even by cheap DACs.
One could argue a NOS DAC is enough, as you do not hear the out of band images, and all one needs to do is a simple EQ to correct the droop in the higher frequencies of the audible band.
I don’t think one could rationally argue that a NOS DAC is enough. A “simple EQ” to correct the droop, a high-shelf boost with a low Q for example, would also boost the level of the images. Although the images are beyond the range of human hearing, they may well be of sufficient magnitude to cause IMD within the audible band, and even more so if you’re also boosting them with an EQ shelf. I would think there would/could be additional issues from just using “a simple EQ”, inter-sample peak levels for example.
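To put a rough number on that (a toy zero-order-hold calculation of my own, not measurements of any real NOS DAC): the first image of a 15kHz tone at 44.1kHz sits at 29.1kHz and is only a few dB below the wanted tone, and a wide high-shelf "droop correction" lifts it further.

import numpy as np

fs = 44100.0

def zoh_level_db(f_hz):
    # Level of a spectral component after zero-order-hold, relative to DC.
    return 20 * np.log10(np.abs(np.sinc(f_hz / fs)))

tone = 15000.0          # in-band tone
image = fs - tone       # first image at 29.1kHz, above the audible band

print(f"tone  {tone / 1000:.1f} kHz: {zoh_level_db(tone):6.1f} dB")
print(f"image {image / 1000:.1f} kHz: {zoh_level_db(image):6.1f} dB")

# A broad, low-Q high-shelf boost of ~3dB intended to correct the in-band droop
# would also raise that image by roughly the same amount:
print(f"image after hypothetical +3dB shelf: {zoh_level_db(image) + 3.0:6.1f} dB")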

G
 
Sep 23, 2024 at 6:54 PM Post #222 of 426
If you had a 'problematic' signal, with ringing caused by the ADCs or DSP used in production/recording, then an apodizing filter is arguably preferable.
If I ever discovered I had specific songs with this problem in my collection, frankly I'd rather apply the filtering to the files themselves with Audacity than select an apodizing filter in my DAC and dull the highs on all of my other tracks.
 
Sep 24, 2024 at 6:08 AM Post #223 of 426
If I ever discovered I had specific songs with this problem in my collection, frankly I'd rather apply the filtering to the files themselves with Audacity than select an apodizing filter in my DAC and dull the highs on all of my other tracks.
There are a couple of serious issues with the statement you quoted. Firstly, “ringing caused by the ADCs or DSP used in production/recording” is actually rare and when it does occur it’s both low magnitude and up near the Nyquist freq. So as it’s inaudible, how does that constitute a “problematic signal” and why would it be “arguably preferable” to change it? Secondly, “apodizing filter” is largely just a marketing term, it doesn’t necessarily tell us anything about the filter’s response, technically (AFAIK) most modern anti-alias or anti-imaging filters could be described as “apodizing”. Therefore, an “apodizing filter” may or may not (inaudibly) improve ringing, in fact it could actually make the ringing worse in specific production/recording cases (although again, inaudibly)! Lastly, despite several requests GoldenSound has not provided a single example of a mastered recording containing such “problematic signals” that have actually been improved by an apodizing filter, even bearing in mind it would be inaudible anyway!

Unfortunately, a lot of the audiophile world depends on this sort of underhand/misleading rhetoric: take a theoretical problem, one that may never actually exist in practice, exists only in the digital or analogue domain rather than the acoustic domain, or transfers to the acoustic domain but is entirely inaudible, and then make a big deal about solving this non-problem or discussing/comparing it. While it’s certainly useful having someone like GoldenSound (and some others) objectively measuring these things, their conclusions/descriptions are sometimes highly misleading, because the differences or variation from theoretically perfect are typically presented as meaningful to consumers when commonly they’re well beyond inaudible and sometimes not even reproducible in the acoustic domain.

G
 
Last edited:
Sep 30, 2024 at 7:51 PM Post #224 of 426
“apodizing filter” is largely just a marketing term, it doesn’t necessarily tell us anything about the filter’s response, technically (AFAIK) most modern anti-alias or anti-imaging filters could be described as “apodizing”
But it does, if you go back to the etymology of the word, like the Wikipedia article does. An apodizing filter was so named because it "removes the foot", i.e. in comparison to a worse filter that allows some egregious lobe or hump to pass right at the beginning of the stopband (or even centered on the cutoff frequency?), this one flattens that area out significantly, thereby "removing the foot", and so is a-pod-izing.

The Arcam SA20 is said to have an ESS Sabre 9038K2M DAC chip. I haven't found this one on ESS's site; the closest match was the 9038Q2M.
https://www.esstech.com/wp-content/uploads/2022/09/ES9038Q2M-Datasheet-v1.4.pdf
[...]
The apodizing filter starts to roll off ever so slightly earlier than the fast linear phase filter, but both start to roll off above 20kHz. The apodizing filter also reaches the -40dB attenuation point (where the output amplitude is 1% of the input amplitude) comparatively faster than the fast linear phase filter, so it is also a bit steeper. However, the stopband attenuation is a bit worse than the fast linear phase filter's.
In this case there's very little to suggest it's "removing any foot", so it doesn't fit the name "apodizing" all that well.

A clearer example maybe is what they put in the ES9069, where stopband attenuation is again worse, but roll-off for the apodizing filter starts way back at 18k or so and is already -5 dB or lower at 20k. I guess in that case the "foot" they expect to remove is clearly presumed to exist in the material itself; it's not some undesirable pass-lobe that they're doing away with from the response of the non-apodizing linear-fast.
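If anyone wants to pull these figures out of a filter themselves rather than eyeballing datasheet plots, something like the following works for any set of coefficients. The two designs below are toy windowed-sinc stand-ins of my own, not the actual ESS coefficients (which aren't published), chosen only to mimic the trade-off described above, i.e. the "apodizing" one reaches -40dB sooner but with worse stopband attenuation:

import numpy as np
from scipy import signal

fs = 8 * 44100  # assume an 8x interpolation filter

# Two made-up designs standing in for "fast linear phase" vs "apodizing":
fast_lp = signal.firwin(511, 22050, window=("kaiser", 9.0), fs=fs)    # later corner, deeper stopband
apodizing = signal.firwin(511, 20500, window=("kaiser", 6.0), fs=fs)  # earlier corner, shallower stopband

for name, taps in (("fast linear phase (toy)", fast_lp), ("apodizing (toy)", apodizing)):
    w, h = signal.freqz(taps, worN=1 << 15, fs=fs)
    mag_db = 20 * np.log10(np.abs(h) + 1e-12)
    f_40db = w[np.argmax(mag_db < -40)]     # first frequency where the response drops below -40dB
    stopband = mag_db[w > 30000].max()      # worst-case level above 30kHz
    print(f"{name:24s}: -40dB at {f_40db / 1000:5.2f} kHz, stopband <= {stopband:6.1f} dB")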
 
Last edited:
Oct 1, 2024 at 4:31 AM Post #225 of 426
But it does, if you go back to the etymology of the word, like the Wikipedia article does. An apodizing filter was so named because it "removes the foot", i.e. in comparison to a worse filter that allows some egregious lobe or hump to pass right at the beginning of the stopband (or even centered on the cutoff frequency?), this one flattens that area out significantly and therefore "removes the foot" and therefore is a-pod-izing.
“Apodizing” does mean “removes the foot” but then you appear to have invented what you think the “foot” is. My maths knowledge is not that great, so maybe someone like @71 dB can explain it better or correct my understanding but AFAIK: “apodizing” in practice just means effectively “multiplying” a function/waveform’s data with a “Window Function”, where only the data within the “window” interval is retained (multiplied by the window function) and everything outside it is zeroed, i.e. the part outside the window is the “foot” that is removed. Apodization is used in various fields but in the case of digital filters it is used to convert an infinite impulse response, such as Shannon/Whittaker’s ideal sinc function, into a finite impulse response (FIR) filter. Therefore AFAIK, all FIR filters in practice could be described as “apodizing” and, as FIR filters can have numerous/various properties depending on how they’re designed, the term “apodizing” does not necessarily tell us anything about what properties the filter may have. For example, it does not tell us whether the filter has a slow or fast roll-off, or whether the filter’s ringing is both pre and post or post only (although commonly such filters are used to shift the ringing more towards post).
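As a very simplified numeric illustration of the above (sketched from standard DSP textbook material, nothing to do with any particular DAC): take samples of the ideal, infinitely long sinc impulse response, multiply by a window function that is zero outside a finite interval, and you have a practical FIR filter; the choice of window sets the trade-offs, not the mere fact that it was "apodized".

import numpy as np
from scipy import signal

numtaps = 255
fc = 0.5  # ideal cutoff, as a fraction of the Nyquist frequency

n = np.arange(numtaps) - (numtaps - 1) / 2
ideal = fc * np.sinc(fc * n)   # samples of the infinite ideal (sinc) impulse response

# "Apodize": multiply by a window function that is zero outside the interval.
truncated = ideal * np.ones(numtaps)        # rectangular window (plain truncation)
smoothed = ideal * np.blackman(numtaps)     # Blackman window, a different trade-off

for name, taps in (("rectangular", truncated), ("blackman", smoothed)):
    w, h = signal.freqz(taps, worN=8192)    # w in radians/sample, 0..pi
    worst = (20 * np.log10(np.abs(h[w > 0.6 * np.pi]) + 1e-12)).max()
    print(f"{name:11s} window: worst level beyond the transition band {worst:6.1f} dB")

Both results are "apodized" filters; they just behave differently because different windows were used.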
I guess in that case the "foot" they expect to remove is clearly presumed to exist in the material itself, it's not some undesirable pass-lobe that they're doing away with from the response of the non-apodizing linear-fast.
Again AFAIK: a fast linear phase filter can only be an FIR filter and is therefore apodizing. By your own admission you are just guessing what the “foot” is, rather than going on the actual definition of a Window Function (with a zero-value outside the window interval).

G
 
