I’ve always wondered why audio folks go to the mat over things that can’t be heard in normal use. It’s as prevalent in science circles as it is among subjectivists.
There are two serious issues with this assertion:
Firstly, it's false. Or rather, it was false. SRC artefacts could be heard in normal use, and it's because of science that they aren't (or shouldn't be) audible today.
Secondly, what is "normal use" and who gets to define it? For example, the digital audio to which you're listening could have anything from just 2 sample rate conversions up to dozens. So we have to consider the possibility of cumulative SRC artefacts being audible, not just of a single SRC process being inaudible. Science has to account for all of this, not just what you personally believe "normal use" to be.
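The cumulative point above can be sketched in code. This is a rough illustration, not a measurement of any real device's resampler: it repeatedly round-trips a test tone 44.1 kHz → 48 kHz → 44.1 kHz with scipy's polyphase resampler and watches the residual error grow with each pass.

```python
import numpy as np
from scipy.signal import resample_poly

FS = 44100
N = 147 * 160 * 2                    # length chosen so the rate ratio divides exactly
t = np.arange(N) / FS
tone = np.sin(2 * np.pi * 1000 * t)  # 1 kHz test tone

def round_trip(x):
    # 48000/44100 reduces to 160/147
    up = resample_poly(x, 160, 147)
    return resample_poly(up, 147, 160)

x1 = round_trip(tone)                # one SRC round trip
x10 = tone
for _ in range(10):                  # ten round trips = twenty conversions
    x10 = round_trip(x10)

# compare mid-signal only, to keep filter edge transients out of the measurement
mid = slice(N // 4, 3 * N // 4)
err_1 = np.max(np.abs(x1[mid] - tone[mid]))
err_10 = np.max(np.abs(x10[mid] - tone[mid]))
```

Each pass adds a small, deterministic error (passband deviation plus residual images), so `err_10` comes out larger than `err_1` even though both remain far below audibility for a decent resampler.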
You are also forgetting: There is latency, so the jitter affects different samples.
I'm not forgetting latency, but maybe you're forgetting that latency doesn't affect jitter? Latency is the period of time required for (SRC) processing, which can be up to 30ms or so. However, that (say) 30ms processing delay is applied equally to all the samples, so there is no added jitter.
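A minimal numeric illustration of that point: a constant processing latency shifts every sample instant by the same amount, so the spacing between samples, which is what jitter measures, is unchanged.

```python
import numpy as np

FS = 48000
n = np.arange(1000)
sample_times = n / FS              # ideal sample instants
delayed = sample_times + 0.030     # a fixed 30 ms processing delay on every sample

# jitter here = spread of the inter-sample intervals around the nominal period
jitter_before = np.std(np.diff(sample_times))
jitter_after = np.std(np.diff(delayed))
```

Both values are zero (to floating-point precision): delaying everything by the same constant leaves the intervals, and therefore the jitter, untouched.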
There is no reason to assume they have done the job properly.
Yes, there is. SRC is a common, widely used process that's accomplished transparently as standard by everyone else with modern processors and even free, off-the-shelf algorithms/libraries. The freely available MatLab resampler code, for example, has artefacts down at around the -170dB level! So it's not so much about having "done the job properly"; they'd pretty much have to deliberately screw it up to get audible artefacts. But again, my understanding does not go deep enough to be absolutely certain.
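For a sense of how that kind of artefact figure is obtained, here's a hedged sketch: resample a pure tone 44.1 kHz → 48 kHz with a long, high-attenuation Kaiser-windowed filter, then measure the residual against an ideal tone generated directly at 48 kHz. This uses scipy's resampler, not the MatLab code, so the exact floor will differ, but it shows how far below audibility an off-the-shelf SRC lands.

```python
import numpy as np
from scipy.signal import resample_poly

f0 = 1000.0                                   # 1 kHz test tone
x = np.sin(2 * np.pi * f0 * np.arange(147 * 160 * 2) / 44100)

# Kaiser beta of 14 gives roughly 130 dB of stopband attenuation
y = resample_poly(x, 160, 147, window=('kaiser', 14.0))
ideal = np.sin(2 * np.pi * f0 * np.arange(len(y)) / 48000)

mid = slice(len(y) // 4, 3 * len(y) // 4)     # keep edge transients out
artefact_db = 20 * np.log10(np.max(np.abs(y[mid] - ideal[mid])))
```

`artefact_db` comes out far below the roughly -100dB region usually quoted as the threshold of any conceivable audibility, which is the point: getting transparent SRC with standard tools is the easy path, not the hard one.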
[1] There are multiple citations that they haven't, and [2] like you I want proof.
1. There are also multiple citations that silver cables sound brighter than copper cables; that doesn't make it true though. There are numerous similar examples, which is why anecdotal evidence is considered amongst the very least reliable types of evidence. Even if the anecdotal evidence is correct, we still have to ask "when?". The probability of Android not performing audibly transparent SRC is far higher in early devices and early versions of Android.
2. Even if there is proof, it might not tell us very much. Android provides built-in default options for SRC but AFAIK, OEMs can override/replace that and provide their own SRC code. So proof that one model of smartphone running Android does or doesn't perform SRC transparently doesn't necessarily tell us if other models do. I suspect this is one of the reasons why we don't see any proof.
Having worked with Qualcomm, and measured in detail their attempts at audio (have you seen what they do to AAC? 14kHz brickwall bandwidth in many cases. Definitely not transparent.), I side with them not being able to do a good job, or not being bothered to.
But the AAC format is supposed to have a low pass filter set at about 14kHz (below a certain bit rate). Additionally, as I'm sure you know, Android is developed by a consortium of dozens of companies. So how do you know that it was Qualcomm who developed the SRC code in Android? Even if it was Qualcomm AND they screwed it up big time, how is it that none of the other companies in the consortium noticed or did anything about it?