The question is how small the smallest signal encoded in a 24-bit file can be. Leaving aside the noise level of mic preamps, the best studio AD converters have a dynamic range of no more than 21-22 bits. So if the smallest signal would theoretically be at -130 dB, how come the DAC's noise shaper performance needs to go down to -350 dB? Is this overhead needed for the noise shaper to work properly on -130 dB signals?
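As a quick sanity check on the figures in the question (my own back-of-envelope sketch, not from either post): the usual rule of thumb for an ideal N-bit quantiser driven by a full-scale sine is a dynamic range of about 6.02N + 1.76 dB.

```python
# Rule-of-thumb dynamic range of an ideal N-bit quantiser with a
# full-scale sine: SQNR ~= 6.02*N + 1.76 dB. Illustrative only.
def bits_to_db(n_bits):
    return 6.02 * n_bits + 1.76

for bits in (16, 21, 22, 24):
    print(f"{bits} bits ~ {bits_to_db(bits):.1f} dB")
```

So 21-22 bits corresponds to roughly 128-134 dB, consistent with the -130 dB figure quoted for the best converters.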
You have exposed the problem, and frankly I have struggled with this issue of depth and the need for 350 dB noise shapers, because it does not make sense. Do we need digital modules capable of accurately resolving -301 dB (that's my test now for all modules) because the brain can perceive -301 dB signals? Or is it because a system capable of resolving -301 dB will resolve -120 dB signals more accurately?
The honest answer is I don't know for sure, but it strikes me as absurd that the brain could (through correlation) need -301 dB signals.
Now, in principle a properly dithered digital system is capable of reproducing infinitely small signals - they are just buried in the noise. But modern ADCs simply aren't capable of resolving very small signals, as they employ 140 dB noise shapers. Which then raises the question: how is it that Dave can improve depth reproduction when my pre-Dave noise shapers were already much better than those in modern ADCs?
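The claim that a properly dithered system carries signals far below the quantisation step is easy to demonstrate with a small simulation (my own sketch, not Rob's code): a -110 dBFS tone is quantised to 16 bits - well below the roughly -96 dB step - and its amplitude is then recovered by correlating against the reference sine, like a software lock-in.

```python
import math
import random

def measure_tone(quantize, amp, n=200_000, freq=0.01):
    # Synthesize a sine of amplitude `amp` (full scale = 1.0), pass each
    # sample through `quantize`, then recover the amplitude by
    # correlating against the reference sine (a software lock-in).
    acc = 0.0
    for i in range(n):
        ref = math.sin(2 * math.pi * freq * i)
        acc += quantize(amp * ref) * ref
    return 2 * acc / n               # recovered amplitude

LSB = 2.0 ** -15                     # 16-bit quantiser step for +/-1 full scale

def truncate(x):
    # Plain round-to-nearest quantisation, no dither.
    return round(x / LSB) * LSB

def dithered(x):
    # TPDF dither (sum of two uniform +/-0.5 LSB sources) before quantising.
    d = (random.random() - random.random()) * LSB
    return round((x + d) / LSB) * LSB

random.seed(0)
amp = 10 ** (-110 / 20)              # a -110 dBFS tone, ~18 dB below the LSB
print(measure_tone(truncate, amp))   # prints 0.0 - the tone vanishes entirely
print(measure_tone(dithered, amp))   # ~3.16e-06 - amplitude survives in the noise
```

Without dither the tone is below half an LSB on every sample, so the quantiser outputs exactly zero; with TPDF dither the tone is fully encoded and recoverable to within a few percent, limited only by averaging time.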
I suspect the issue is really about how accurately -120 dB signals are being reproduced. When you look at small signals as they approach the resolving limit of the noise shaper, the signal becomes attenuated. You can see this with fundamental linearity measurements, and easily with simulation. So a -120 dB signal against a -160 dB limit suffers an attenuation of 0.087 dB - the -120 dB signal becomes -120.087 dB. With a 200 dB resolving noise shaper we would be looking at a 0.0009 dB error - something you could not measure in reality. With 300 dB it drops to about 0.000000009 dB, and at 340 dB to about 0.00000000009 dB.
Now I could (maybe) be persuaded that going from a 200 dB noise shaper (0.0009 dB error, at the limits of measurement) to 300 dB (0.000000009 dB) could be audible. But 300 dB going to 340 dB - that is surely inaudible? Yet with careful listening I can indeed perceive the change, and I have severe difficulty understanding why depth perception is so sensitive to such small errors.
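These attenuation figures follow from a simple model (my reconstruction from the -160 dB example, so treat the exact form as an assumption): if the noise shaper's resolving limit sits at L dBFS, its amplitude subtracts linearly from the signal's, giving an error of 20·log10(1 - 10^(-gap/20)) dB, where gap is how far the signal sits above the limit.

```python
import math

def attenuation_db(signal_db, limit_db):
    # Error in dB when reproducing a signal at `signal_db` (dBFS) through
    # a noise shaper whose resolving limit is `limit_db` (dBFS), assuming
    # the limit's amplitude subtracts linearly from the signal amplitude.
    gap = signal_db - limit_db           # dB the signal sits above the limit
    return 20 * math.log10(1 - 10 ** (-gap / 20))

for limit in (-160, -200, -300, -340):
    print(f"{limit} dB limit: {attenuation_db(-120, limit):.2e} dB error")
```

With a -160 dB limit this reproduces the -0.087 dB figure above; the 300 dB and 340 dB shapers give errors on the order of 10^-9 and 10^-11 dB respectively.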
Perhaps I am wrong, and there is something else going on with noise shapers - but even purely digital noise shapers (say when going from 54 bits down to 24 bits internally in the FPGA) have this effect. I can only go where the evidence leads me, even if it suggests that impossibly small errors are important.
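For the 54-bit-to-24-bit case, a first-order error-feedback requantiser (a generic textbook sketch of mine - Chord's actual shapers are far higher order) shows how noise shaping preserves information sitting below the output step:

```python
import statistics

def requantize_noise_shaped(samples, out_bits=24):
    # First-order error-feedback requantiser: each sample's rounding
    # error is subtracted from the next sample, pushing the quantisation
    # noise to high frequencies so that low-frequency content below one
    # output LSB survives in the average.
    step = 2.0 ** (1 - out_bits)     # LSB for +/-1.0 full scale
    err, out = 0.0, []
    for x in samples:
        v = x - err                  # feed back the previous rounding error
        q = round(v / step) * step   # round onto the 24-bit grid
        err = q - v
        out.append(q)
    return out

step = 2.0 ** (1 - 24)
dc = 0.3 * step                      # a level far below one 24-bit LSB
shaped = requantize_noise_shaped([dc] * 100_000)
print(statistics.mean(shaped) / step)  # ~0.3: the sub-LSB level is preserved
print(round(dc / step) * step)         # 0.0: plain rounding loses it entirely
```

The sub-LSB level comes out as a duty cycle of the bottom bit, exactly the mechanism by which a noise-shaped 24-bit stream can carry information from the 54-bit source below its own step size.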
What is very exciting is that if we can hear such changes with poor ADC noise shapers, then imagine the changes we will hear with Davina, which has 350 dB noise shapers. Perhaps we will crack the issue of things that are 100 m away actually sounding like they are 100 m away - that will be something.
Rob