No peer-accepted electrical engineering authority has ever claimed that jitter below 200 ps is audible, and even that figure is conservative by several orders of magnitude. Straightforward mathematical analysis shows that jitter below 100 ps cannot have any effect on 16-bit/44.1 kHz samples within the commonly accepted physical realities of our universe, because the noise it adds sits at or below the format's own quantization floor. No one I know of has ever ABXed jitter in digital audio when the jitter was on the order of single-digit nanoseconds. The most commonly accepted detection thresholds for human listeners are on the order of tens to hundreds of nanoseconds (not counting John Atkinson, of course, who can detect single-picosecond jitter without even connecting the transport to anything). Note that tens to hundreds of nanoseconds of jitter is 3 to 4 orders of magnitude worse than what the SB3 actually produces.
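To put a number on that, here's a quick Python sketch (my own illustration, not from any of the cited papers): it samples a full-scale 20 kHz sine with Gaussian random clock jitter and compares the resulting error to the ideal 16-bit quantization noise floor. The 20 kHz test frequency and 100 ps RMS figure are my assumed worst case for Red Book audio.

```python
import numpy as np

fs = 44_100           # Red Book sample rate (Hz)
f0 = 20_000           # worst-case signal frequency: jitter error scales with slew rate
jitter_rms = 100e-12  # 100 ps RMS random jitter
n = 1_000_000

rng = np.random.default_rng(0)
t = np.arange(n) / fs
dt = rng.normal(0.0, jitter_rms, n)  # per-sample clock timing error

ideal = np.sin(2 * np.pi * f0 * t)            # samples at the ideal instants
jittered = np.sin(2 * np.pi * f0 * (t + dt))  # samples at the jittered instants

err = jittered - ideal
snr_jitter = 10 * np.log10(np.mean(ideal**2) / np.mean(err**2))
snr_16bit = 6.02 * 16 + 1.76  # ideal 16-bit quantization SNR for a full-scale sine

print(f"SNR limited by 100 ps jitter: {snr_jitter:.1f} dB")
print(f"Ideal 16-bit quantization SNR: {snr_16bit:.1f} dB")
# Both come out near 98 dB: at 100 ps RMS, jitter noise on the worst-case
# signal is already at the 16-bit quantization floor, and for any lower
# frequency or lower level it falls well below it.
```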
Secondly, according to Pohlmann, clock jitter can reach roughly 200 ps before it degrades 16-bit audio to 15-bit resolution.
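Pohlmann's figure falls out of a standard worst-case slew-rate argument (I'm assuming the usual 20 kHz worst-case frequency). A full-scale sine of amplitude $A$ has maximum slope $2\pi f A$, so a timing error $\Delta t$ produces a sample error of at most $2\pi f A \,\Delta t$; requiring that to stay below one LSB of an $N$-bit quantizer ($2A/2^{N}$) gives

```latex
\Delta t_{\max} = \frac{2A/2^{N}}{2\pi f A} = \frac{1}{2^{N}\,\pi f}
\approx \frac{1}{65536 \cdot \pi \cdot 20\,\mathrm{kHz}} \approx 243\ \mathrm{ps}
```

which, give or take the exact error criterion, is Pohlmann's ~200 ps.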
As for empirical audibility, this seems to depend on the type of jitter. The Ashihara paper (which I have cited elsewhere) used random jitter and found it inaudible up to 500 ns. The Benjamin and Gannon paper used deterministic jitter, which produces distinct sidebands; there the worst case for audibility with music was 30 ns, and 10 ns with a high-frequency pure tone.
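It's easy to see why deterministic jitter is the nastier case: sinusoidal jitter of peak amplitude J on a tone at f0 is just phase modulation with index beta = 2*pi*f0*J, putting discrete sidebands at f0 ± fj roughly beta/2 below the carrier, as tones rather than noise. A small sketch under those assumptions (pure-tone signal, small modulation index; the numbers are my own arithmetic, not Benjamin and Gannon's):

```python
import math

def sideband_level_db(f0_hz: float, jitter_peak_s: float) -> float:
    """Level of the first PM sideband relative to the carrier, in dB.

    Sinusoidal jitter of peak amplitude J on a tone at f0 is phase
    modulation with index beta = 2*pi*f0*J; for small beta the first
    sideband pair sits at ~beta/2 relative to the carrier.
    """
    beta = 2 * math.pi * f0_hz * jitter_peak_s
    return 20 * math.log10(beta / 2)

# Benjamin and Gannon's worst case: a high-frequency pure tone.
print(f"20 kHz tone, 10 ns jitter:  {sideband_level_db(20e3, 10e-9):.0f} dB")    # ~ -64 dB
print(f"20 kHz tone, 100 ps jitter: {sideband_level_db(20e3, 100e-12):.0f} dB")  # ~ -104 dB
```

A discrete tone only 64 dB below a loud high-frequency carrier, with nothing masking it, is plausibly audible, which is consistent with their 10 ns worst case; at 100 ps the same sideband sits around -104 dB, below the 16-bit noise floor.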
Is jitter normally a problem? I remain skeptical: the only evidence for picosecond-level jitter audibility comes from poorly controlled, anecdotal tests.