Crgreen
500+ Head-Fier
- Joined
- Apr 22, 2016
- Posts
- 665
- Likes
- 304
Look forward to listening to those files. I understand the Bridgewater Hall - a wonderful acoustic - has very good recording facilities.

The million dollar question in a headphone forum... What's Rob's favourite headphone?
Yes, but it's nothing special - just active computer speakers. Actually, I am looking to buy some portable passive loudspeakers with Dave driving them directly. For serious listening I use the AQ Nighthawks...
Rob
...But to answer your question of why transient timing improves with tap length, I can illustrate the two extremes: a tap length of 1 (a NOS filter followed by analogue filtering) and an infinite-tap-length filter. Here are a couple of slides from my Mojo presentation:
Now this is just a simple illustration, but it shows the two extremes: a one-tap filter giving a worst-case timing error of 100 µs or so, and an infinite-tap-length filter reconstructing the transient perfectly. Somewhere in between we will get acceptable levels of timing error - but the only way to test for this is to build long-tap-length filters, change the parameters - tap length, oversampling rate, and algorithm - and then keep listening. That's what I have been doing for the past 20 years.
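The in-between region is easy to sketch numerically. The toy below is my own construction in numpy - nothing like the actual WTA filter - but it shows the trend: interpolating a band-limited signal at half-sample instants with Hann-windowed-sinc kernels of growing length, the reconstruction error falls steadily as taps are added.

```python
import numpy as np

fs = 48_000.0
freqs = np.array([2_000.0, 6_500.0, 11_000.0, 15_000.0])  # all below fs/2
amps = np.array([1.0, 0.6, 0.4, 0.3])
phis = np.array([0.3, 1.1, 2.0, 0.7])

def bandlimited(t):
    """Analytic band-limited test signal: a handful of sines below Nyquist."""
    t = np.asarray(t, dtype=float)[..., None]
    return (amps * np.sin(2 * np.pi * freqs * t / fs + phis)).sum(-1)

n = np.arange(1024)
x = bandlimited(n)                    # the sampled data we get to keep
mids = np.arange(200, 800) + 0.5      # interior half-sample instants
truth = bandlimited(mids)             # what a perfect interpolator would return

def rms_err(half_taps):
    """RMS interpolation error at half-sample offsets using a
    Hann-windowed sinc kernel with 2*half_taps coefficients."""
    j = np.arange(-half_taps + 1, half_taps + 1)           # kernel offsets
    h = np.sinc(0.5 - j) * np.hanning(2 * half_taps + 2)[1:-1]
    est = np.array([x[int(m - 0.5) + j] @ h for m in mids])
    return np.sqrt(np.mean((est - truth) ** 2))

errs = [rms_err(H) for H in (2, 8, 32, 128)]   # error shrinks as taps grow
```

Each quadrupling of the tap count here buys a large drop in error; the open question in the thread is at what length the residual error stops being audible.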
But what is really cool is that with Davina we can actually know for certain what the subjective losses are with 1M-tap filters. Ideally, the most powerful listening test is one where you can hear no difference from, say, 768 > decimate to 48 > interpolate back to 768, and Davina will tell me how much difference we get in absolute terms. I will be publishing files - the original, and the decimated/interpolated version. In case the bandwidth limiting itself changes the sound (it will probably make it sound better), I will actually have three files - original; bandwidth limited; bandwidth limited/decimated/interpolated.
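The decimate-then-interpolate round trip itself is textbook sampling theory, and a periodic toy version can be shown exact in a few lines of numpy (illustrative only - Davina's real filters are FIRs, not FFT brickwalls):

```python
import numpy as np

R = 16                  # decimation ratio, e.g. 768 kHz -> 48 kHz
N = 1024                # high-rate samples in one periodic block
L = N // R              # low-rate samples

rng = np.random.default_rng(0)
# Random periodic signal whose spectrum sits strictly below the low
# rate's Nyquist (bins 1 .. L//2 - 1), so decimation discards nothing.
X = np.zeros(N, complex)
bins = np.arange(1, L // 2)
X[bins] = rng.normal(size=bins.size) + 1j * rng.normal(size=bins.size)
X[-bins] = np.conj(X[bins])
x = np.fft.ifft(X).real

y = x[::R]                              # decimate (no aliasing possible here)
Y = np.fft.fft(y)
X2 = np.zeros(N, complex)               # interpolate by FFT zero-padding
X2[:L // 2] = Y[:L // 2] * R
X2[-(L // 2) + 1:] = Y[-(L // 2) + 1:] * R
x2 = np.fft.ifft(X2).real

err = np.max(np.abs(x2 - x))            # machine-precision round trip
```

With ideal (here, periodic brickwall) filters the round trip is lossless to machine precision; the interesting question Davina probes is how close practical long-tap FIRs get to that, and whether the residual is audible.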
Sampling theory has nothing to say about how you bandwidth limit - only that at FS/2 and above the output must be zero. I have already designed 300 dB bandwidth-limiting filters, so it will be very interesting to hear how these actually sound.
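For scale, here is what a deep-stopband design looks like with the standard Kaiser-window method (not Rob's filters; a 180 dB target is used because double-precision arithmetic bottoms out around -300 dB, which makes a genuine 300 dB design hard to verify numerically):

```python
import numpy as np

# Kaiser-window lowpass: target stopband attenuation sets the window
# shape (beta), the tap count sets the transition width.
A = 180.0                    # target stopband attenuation, dB
beta = 0.1102 * (A - 8.7)    # Kaiser's empirical formula for A > 50 dB
N = 601                      # taps (the width formula suggests ~480 here)
fc = 0.25                    # cutoff, cycles/sample (Nyquist = 0.5)

n = np.arange(N) - (N - 1) / 2
h = 2 * fc * np.sinc(2 * fc * n) * np.kaiser(N, beta)
h /= h.sum()                 # unity gain at DC

H = np.fft.rfft(h, 1 << 18)
f = np.fft.rfftfreq(1 << 18)           # 0 .. 0.5 cycles/sample
stop = np.abs(H[f >= 0.275])           # stopband, past the transition band
atten_db = 20 * np.log10(stop.max())   # comfortably below -170 dB
```

Even this "modest" 180 dB spec needs several hundred taps for a fairly relaxed transition band; pushing to 300 dB with a very narrow transition is what drives tap counts into the extreme range discussed here.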
In 2013 a paper was published in a physics journal discussing Fourier uncertainty and the importance of timing. On this website there are some samples where the signals have identical frequency content but the timing information has been destroyed - try playing some of these tracks:
http://phys.org/news/2013-02-human-fourier-uncertainty-principle.html
The original paper is very interesting to read, but not easy to follow. Fourier uncertainty is the timing problem characterized mathematically. I have always felt that we needed to minimize Fourier uncertainty by making sure the windowing function was longer than 1 second (a requirement that came from listening to other problems) - and guess what - we only get windowing functions of more than 1 s with 1M-tap 16FS filters.
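As a sanity check on that 1 second figure (assuming "16FS" here means 16 × 48 kHz = 768 kHz, which is my reading of the thread):

```python
# Duration spanned by a 1M-tap FIR running at 16 x 48 kHz.
taps = 1_000_000
rate = 16 * 48_000           # 768 kHz, assuming 16FS = 16 x 48 kHz
window_s = taps / rate       # ~1.30 s
```

So a 1M-tap filter at 16FS spans roughly 1.3 seconds of signal - just clearing the 1 second threshold mentioned above.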
Rob
Hi Rob
The illustration above is a bit puzzling to me, mainly because a bandwidth-limited signal can't start and stop instantaneously - sharp transients need infinite bandwidth, since they rely on high frequencies - so it doesn't look right, even with the 1 million taps. My idea of the million-tap benefit was actually that the transient-corrupting low-pass filter ringing is concentrated at the filter frequency for the most part, whereas frequencies below it show almost no pre- and post-ringing. Or more accurately: the ringing - which is still there - no longer contains any audible frequencies. With an infinite number of taps, resulting in infinite filter sharpness and steepness, the ringing would last infinitely long. And that's what I miss in the graph above, too. Or what else am I missing?
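The claim that the ringing sits at the filter frequency is easy to check numerically: the impulse response of a sharp low-pass is a windowed sinc, and its tail oscillates at the cutoff. A small numpy sketch (illustrative only - a Hann-windowed sinc, not any real DAC filter) estimates the ringing frequency from the zero-crossing spacing of the tail:

```python
import numpy as np

fc = 0.23                    # cutoff in cycles/sample (Nyquist = 0.5)
N = 2001
n = np.arange(N) - (N - 1) // 2
h = 2 * fc * np.sinc(2 * fc * n) * np.hanning(N)   # sharp windowed-sinc lowpass

mid = (N - 1) // 2
tail = h[mid + 50 : mid + 650]                 # post-ringing, past the main lobe
idx = np.nonzero(np.diff(np.sign(tail)))[0]    # zero-crossing positions
half_period = np.mean(np.diff(idx))            # samples per half cycle
ring_freq = 1.0 / (2.0 * half_period)          # dominant ringing frequency
```

The estimate lands on the 0.23 cutoff, consistent with the idea that pushing the cutoff above the audible band also pushes the (in the ideal case, infinitely long) ringing out of it.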
That's the $64k question - with timing problems removed on the decimation and interpolation side, it should sound pretty special...