Rob Watts
Member of the Trade: Chord Electronics
The term "linearity test" is shorthand; the full name is the fundamental amplitude linearity test. I quote the full term because it proves that Jude is absolutely correct: one categorically must resolve the amplitude of the fundamental. To do this test properly one needs to resolve only the fundamental, not the distortion and noise; and to do it completely accurately one needs an FFT, so that only the fundamental amplitude is measured and absolutely nothing else.
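As a minimal illustration of what "resolve only the fundamental" means in practice (a Python sketch with made-up numbers, not the actual AP setup): generate a bin-aligned test tone over an assumed noise floor, then read back only the FFT bin at the tone frequency, so everything else is excluded from the result.

```python
import numpy as np

fs = 48_000               # sample rate (Hz), illustrative choice
f0 = 1_000                # test-tone frequency (Hz), bin-aligned below
n = fs                    # 1 second of data -> 1 Hz bin spacing
level_db = -90.0          # fundamental level under test (dB re full-scale sine)

t = np.arange(n) / fs
x = 10**(level_db / 20) * np.sin(2 * np.pi * f0 * t)   # the fundamental
x += np.random.randn(n) * 1e-5                          # assumed noise/distortion floor

# Fundamental amplitude linearity: read back ONLY the FFT bin at f0,
# so noise and distortion in every other bin do not enter the measurement.
spectrum = np.abs(np.fft.rfft(x)) / (n / 2)   # scaled so a full-scale sine reads 1.0
measured_db = 20 * np.log10(spectrum[f0])     # index f0 works because bins are 1 Hz apart

print(f"generated {level_db:.2f} dB, measured {measured_db:.2f} dB, "
      f"error {measured_db - level_db:+.3f} dB")
```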
This test grew out of extremely serious and obvious problems that early digital had: it could not resolve small signals accurately, due to inherent problems in R2R, DSD and delta-sigma DACs. In the early 1990s one could employ a simple analogue technique of filtering out all signals apart from the fundamental, then simply measuring and plotting the error as the signal level fell. The errors in those days were considerable, in that +/- 2 dB was not uncommon at -90 dB. Today, however, -90 dB is pretty accurate, and the tell-tale lift seen with this simple test is simply noise from the DAC, and so is unimportant. My pulse array DACs, from 1995, resolved this issue, and meant that the traditional analogue technique was worthless, as it simply measured residual noise. So I always use FFTs, with careful calibration against a -60 dB signal and measurement at -120 dB; indeed, even this technique reveals no linearity error once a suitable number of averages are done.
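A quick sketch of where that tell-tale lift comes from (the noise figure is illustrative, not measured data): the simple filtered measurement reads the power sum of the fundamental plus whatever residual noise sits in the passband, so the reading lifts as the signal falls towards the noise.

```python
import numpy as np

# Assume the analogue bandpass leaves a fixed residual noise of -105 dB
# (re full scale) inside its passband - an arbitrary illustrative value.
noise_db = -105.0

for sig_db in (-60, -80, -90, -100, -110):
    # The simple measurement reads total in-band power: fundamental + noise.
    measured = 10 * np.log10(10**(sig_db / 10) + 10**(noise_db / 10))
    print(f"signal {sig_db:4d} dB -> measured {measured:7.2f} dB "
          f"(lift {measured - sig_db:+.2f} dB)")
```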
That's not to say the AP is perfect; it's not. I have recently been upgrading this test, getting it to resolve +/- one LSB of 32-bit data, which is a -186.638 dB signal. To do this I need to set the AP to a 1.2M-point FFT at 6 kHz bandwidth, so that I can actually resolve this tiny signal. With 128 synchronous averages and a 2.496 kHz test signal I can get the observed noise floor centred at -214 dB, so that the -186.638 dB signal stands out like a sore thumb. And all my DACs resolve this signal - but always with a +0.6 dB error. I am still trying to investigate this error, but since all my DACs (Hugo 2, TT2, Dave) show the same error, I am pretty sure it's an AP measurement issue (due to the ADC's fundamental linearity limit). For a signal at -120 dB, this error would translate into a +0.0003 dB error - not detectable at the usual -120 dB levels.
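For reference, the -186.638 dB figure and the value of a long FFT plus averaging follow from textbook relationships; this sketch only shows the arithmetic, not the AP's actual behaviour.

```python
import math

# One LSB of 32-bit data: full scale spans 2^31 codes each side of zero,
# so the smallest sine amplitude is 2^-31 of full scale.
lsb_db = 20 * math.log10(2**-31)
print(f"1 LSB of 32-bit data = {lsb_db:.3f} dB")     # ~ -186.64 dB, the level quoted above

# FFT processing gain (generic relationship, not AP-specific): broadband noise
# is spread across the bins, so the per-bin noise floor drops by roughly
# 10*log10(number_of_bins) relative to the broadband figure.
points = 1_200_000                      # 1.2M-point FFT
fft_gain_db = 10 * math.log10(points / 2)
print(f"FFT processing gain ~ {fft_gain_db:.1f} dB")

# Synchronous (coherent) averaging of N records reduces uncorrelated noise
# power by a further 10*log10(N), while the coherent test tone is unchanged.
avg_gain_db = 10 * math.log10(128)
print(f"128 synchronous averages ~ {avg_gain_db:.1f} dB further reduction")
```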
So why would somebody choose to misrepresent this test? It may be ignorance; or it may be that the tester has other motives. Conventional delta-sigma modulators (noise shapers) have amplitude linearity issues: as the wanted signal approaches the noise shaper's resolution limit, the shaper can no longer respond to it, and the reproduced amplitude gets smaller. This is easy to see in noise-shaper simulations, and it's something I have eliminated - that's one reason why I test my noise shapers (in Verilog simulation) with -301 dB signals, which they must perfectly reconstruct. If you want to counteract this issue with the conventional test, you simply add the correct amount of noise; the loss in amplitude is balanced by the noise replacing it. Thus tweaking the measurement bandwidth to add exactly the right amount of noise for the DAC in question, to give a "perfect" linearity plot, is a way round this problem. But of course it is not science; it's just a way to tweak the measurements you present, to suit the narrative that you may have.
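A back-of-envelope sketch of that trick (the 2 dB compression figure below is purely hypothetical, chosen only for illustration): pick the in-band noise power so that the compressed fundamental plus noise sums back to the expected level.

```python
import math

# Hypothetical: a noise shaper that reproduces a -110 dB tone at -112 dB.
true_db = -110.0
reproduced_db = -112.0

# The conventional (non-FFT) test reads total in-band power: tone + noise.
# Solve for the in-band noise power that makes the reading look "perfect".
needed_noise_power = 10**(true_db / 10) - 10**(reproduced_db / 10)
needed_noise_db = 10 * math.log10(needed_noise_power)
print(f"in-band noise of {needed_noise_db:.1f} dB hides the compression")

# Check: compressed tone plus that noise sums back to the expected level.
reading = 10 * math.log10(10**(reproduced_db / 10) + needed_noise_power)
print(f"measured = {reading:.2f} dB")   # -110.00 dB, i.e. a 'perfect' plot point
```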
Rob