If we measure the frequency response of a tape recorder around -50dB, we will have to consider that tape is indeed a high resolution format with a very extended frequency response.
And have you measured the frequency response of a tape recorder to signals "around -50dB"? If you haven't, then even by your own terms you do NOT have to consider that tape is a high resolution format!
Here are the measurements of pretty much all the most widely used studio tape recorders:
http://www.endino.com/graphs/
Note though:
1. These measurements are not in response to a -50dB signal but to an optimal (0dBVU) signal; they would obviously be worse for a -50dB signal.
2. These tests were run on freshly calibrated/aligned recorders, which is NOT the case with music studio recordings. Pro studio tape recorders were aligned once, at the beginning of each day, not for each recording pass.
3. A music analogue tape master is the result of at least 2 (and typically more) tape "generations", each generation introducing more noise, more distortion and more signal loss.
4. Notice the comparison with the last measurement: the frequency response of the built-in ADC of a mid-1990s consumer desktop computer!
It's unlikely that an acoustic ultrasonic signal at -50dB would be above the noise floor on a typical analogue music master.
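The noise accumulation in point 3 is easy to sketch numerically. Assuming each tape generation contributes an equal, uncorrelated noise power (a simplification; real per-pass figures depend on the machine, tape stock and speed), the noise powers add while the signal level stays constant:

```python
import math

def snr_after_generations(per_gen_snr_db, generations):
    # Uncorrelated noise powers add linearly; the signal level is unchanged,
    # so N generations at equal per-pass SNR cost 10*log10(N) dB of SNR.
    noise_power = generations * 10 ** (-per_gen_snr_db / 10)
    return -10 * math.log10(noise_power)

# e.g. an (assumed) 60 dB single-pass SNR over 3 generations
print(round(snr_after_generations(60, 3), 1))  # prints 55.2
```

With a 60 dB single-pass SNR, three generations cost about 4.8 dB. The figures here are illustrative; the point is that each copy only moves the noise floor up, never down.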
As for tape recording producing harmonic distortion, it may be considered problematic, but I don't think that it can explain the fact that tape sounds more natural than basic digital audio.
You're right, it doesn't "explain the fact" because there is no scientific explanation of a fact that is false! For example, there is no scientific explanation for the fact that pigs can fly! In addition to the reliable evidence that's already been posted by Old Tech that demonstrates your fact is false, you also refuse to address the obvious question: How does adding unnatural noise and distortion make an acoustic instrument sound more natural?
Tape and high sample rate digital produce a better sound stage than 44.1. Do you think that we could explain this with harmonic distortion?
Same again: No, we have no scientific explanation for why pigs can fly! The actual fact is that tape has worse soundstage than 44.1, due to crosstalk and other distortions, while higher sample rates have the same soundstage as 44.1.
The only reason I listen to 24/192 over 16/44.1 in most cases is because the song is often mastered differently, and you can tell when the artist has some fun in making objects move or adding reflections/reverberations in the 24/192 that the 16/44.1 doesn't have. Does that make one better than the other just from that? No, each is a different listening experience.
In terms of actual quality of a song that is mastered identically in 24/192 vs 16/44.1, I thought I heard a difference, but I am nearly 100% sure I couldn't tell in blind testing, and that it was a psychological thing.
Unfortunately, your first assertion can often be true, because your second assertion is always true! In controlled double blind tests, no one has been able to distinguish 16/44.1 downsampled from a higher resolution master ("controlled" in this context means certain conditions, such as reasonable listening levels, typical filters, etc.). Of course, that's a very inconvenient fact if you want to charge more money for a 24/192 version than the 44.1kHz version. So it's not uncommon for record labels or distributors to change the 44.1kHz version enough so that there is an audible difference.
Incidentally, this "change" is usually just additional audio compression, rather than a change made to the reverberation or positioning; the latter isn't really practical to change during mastering, as it's already baked into the mix. However, as additional audio compression can change the frequency response and the relative balance between the reverb and the direct signal, it can affect the perception of reverb and/or positioning.
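On the bit-depth half of the 16/44.1 vs 24/192 comparison, the inaudibility claim is easy to quantify: reducing to 16 bits with standard TPDF dither leaves an error floor around -96 dBFS, far below any analogue master's noise floor. A minimal numpy sketch (the test tone and its level are illustrative, not taken from any real recording):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)        # 1 kHz tone at -6 dBFS

q = 1 / 2**15                                  # 16-bit step for a [-1, 1) range
tpdf = (rng.random(fs) - rng.random(fs)) * q   # triangular-PDF dither, +/-1 LSB
x16 = np.round((x + tpdf) / q) * q             # requantize to 16 bits

# Total error power (quantization + dither) relative to digital full scale
err_db = 10 * np.log10(np.mean((x16 - x) ** 2))
print(round(err_db, 1))                        # around -96 dBFS
```

Compare that figure to the roughly -50 to -70 dB noise floors discussed above: the 16-bit floor sits some 30 dB or more below what any multi-generation tape master can deliver.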
G