
R2R/multibit vs Delta-Sigma - Is There A Measurable Scientific Difference That's Audible

Discussion in 'Sound Science' started by goodyfresh, Aug 31, 2015.
  1. KeithEmo
    Errr... no.... I didn't specify my reference for both.
    (And, yes, I was being somewhat facetious.)

    If I were to play some music in my office at a level of "130 dB below 1 megawatt", it would actually be somewhat low.
    And, if I ONLY played the noise, which was at a level of 130 dB below 1 megawatt, it would almost certainly be easily audible.
    (I didn't say I planned to play anything at the reference level - JUST the noise at "130 dB below 1 MW".)

    The point is that, just as some people insist on zooming their digital images millions of times, until the individual pixels are clearly visible...
    Some people turn their music up during the quiet spots - just to hear what the cellist is saying under her breath - or if someone really dropped a pencil between sets.
    And, if the noise floor does become audible at those times, differences in noise spectra that were previously inaudible often do become audible.
    (And there is a major difference between saying that "the difference doesn't matter to YOU" and saying that "it doesn't exist".)

     
  2. KeithEmo
    I wouldn't bet either way.....

    However, back in those days, explorers often left breeding pairs of pigs on islands they visited.... as a sort of "self stocking larder".... for when they eventually returned.
    Therefore, it's quite possible that there were some pigs on Krakatoa when it exploded.
    (At Tunguska I would suspect something more like the occasional caribou...)

    Since, as far as I know, nobody witnessed either from close up (and lived to tell about it)....
    I really don't know if either resulted in any flying wildlife or not....
    (Note that we also haven't specified whether a) the pig flies under its own power, or b) the pig is alive when it lands...)
    Therefore, apparently, it remains one of the not-so-great unknowns....

    Which is why I wouldn't be foolish enough to make any claim about it either way. :beerchug:

     
    Steve999 likes this.
  3. Dogmatrix
    There may have been trumpets; it's hard to tell, as the talking bananas won't shut up.
    There is method to my madness; bear with me.

    1a. Evidence accepted. I will go with the updated figures.
    I agree with @bigshot on jitter. I also personally doubt that a non-pro listener could discern even relatively high levels without repeated comparison to a clean sample, which of course is biased.

    2. Excellent advice. I will narrow my focus.

    Many thanks.
     
  4. Dogmatrix
    Right on the money, as usual.
     
  5. bigshot
    You missed your cue. You were supposed to ask him about his office system so he could give you a bunch of technical details to impress us all... or not.
     
  6. gregorio
    Oh good. So not only have we got a system in your office that doesn't exist but you're playing "some music" that doesn't actually have any music (only jitter noise at -130dB), a new album by John Lennon maybe? So under certain conditions, NONE of which exist, "it would almost certainly be easily audible", great, thanks for your contribution! Maybe in a parallel universe pigs can not only fly but play the violin while they're flying? Who knows, who cares and how is any of this even vaguely relevant or on topic?

    Given that the threshold for jitter with music/TV/film sound was known a decade before digital audio was released to the public, even the first consumer digital audio devices (CD players) had jitter below audibility. The first professional ADC/DAC I bought in 1992 had jitter of about 80 picoseconds (if I remember correctly) and, as far as I'm aware, even very cheap consumer DACs don't have jitter higher than a few hundred ps, which is several hundred times below the 200 nanosecond threshold. The "relatively high levels" we're talking about simply don't exist and never have (except of course when we've deliberately added it for test purposes). I'm not sure how or when this whole audiophile jitter myth started, but it seems to be another example of taking something from the professional world of recording studios and fallaciously applying it to consumer playback:
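    To put those picosecond and nanosecond figures in perspective, here's a minimal numerical sketch (mine, not from any post in this thread). It uses the standard slew-rate approximation for jitter-induced error on a full-scale sine: the voltage error is at most the signal's maximum slew rate times the timing error, so the worst-case error level relative to full scale is 2*pi*f*dt.

```python
import math

def jitter_error_dbfs(f_hz: float, jitter_s: float) -> float:
    """Worst-case level (dBFS) of the error that a timing error of
    `jitter_s` seconds produces on a full-scale sine at `f_hz`,
    using the slew-rate approximation: error <= 2*pi*f * dt."""
    return 20 * math.log10(2 * math.pi * f_hz * jitter_s)

# A few hundred picoseconds of jitter on a full-scale 20 kHz tone:
print(round(jitter_error_dbfs(20_000, 300e-12)))  # -> -88 (dBFS)

# The 200 ns threshold mentioned above, on the same tone:
print(round(jitter_error_dbfs(20_000, 200e-9)))   # -> -32 (dBFS)
```

    This is only a worst-case bound for a single tone, but it shows why sub-nanosecond jitter puts the error far below the noise floor of any real recording.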

    In the late 1990s/2000s, jitter was (or could be) somewhat of a problem in studios, as they switched over from mainly analogue mixing and processing to more exclusively digital hardware units (digital recorders, digital mixers, digital samplers, digital reverbs and other effects, etc.). As these were all digital hardware units, each had its own internal clock, none of which would be in sync, and the system would have so much jitter (timing error) as to be inoperable. To work at all, the system needed a single clock signal and a master/slave relationship between all the digital hardware units, distributing that clock signal and bypassing/syncing each unit's internal clock (except for the unit acting as the clock master, of course). However, such an arrangement still resulted in significantly higher jitter, and if there were a lot of digital units in the system, jitter could accumulate to audible levels, so fairly elaborate clock distribution systems were created to avoid this.

    Therefore, while jitter could be an issue in studios, it's not applicable to consumers (unless they've got a dozen or so digital hardware units chained together in their playback system!). Furthermore, even for music recording studios this potential jitter issue only existed for a relatively few years, as digital hardware units became obsolete in favour of virtual units (virtual mixing environments and software plugins), which of course do not have an internal hardware clock to sync/bypass and therefore could not exhibit this jitter issue.

    G
     
    SilentNote likes this.
  7. Dogmatrix
    Thanks for the insight. I am satisfied I can rule out jitter now. Interesting that they (RME, for one) are marketing DACs in terms of a "femto clock".
     
  8. KeithEmo
    I must have missed the stone tablets where that threshold was inscribed. I do think you're mis-remembering. I recall a popular hardware sample-rate converter from the early 1990s whose maker was quite proud of its claim of "only 90 NANOSECONDS of jitter", which it claimed to be exceptionally good for the day. Very few pieces of modern equipment, other than expensive audiophile gear, specify jitter... although, among those that do, claims of amounts below 100 picoseconds, and even a single picosecond, are sometimes found. A lot of so-called audiophile gear claims to incorporate clock chips with jitter specs of one picosecond or better, or even as low as a few hundred femtoseconds, but they rarely if ever specify the actual jitter present on the data as delivered at their output. This is an issue because the amount of jitter at the output is often several orders of magnitude worse. Obviously there is some level below which jitter has no audible effect - but I believe that, as with many things, not everyone agrees on what that level might be.

    I believe the so-called "myth" started when people started experimenting with devices that specifically reduce jitter and claimed to hear an audible difference.
    There is a lively market in devices that reduce or eliminate jitter, both in DACs, and as separate little black boxes... and many modern DACs incorporate such devices internally.
    (For example, one of the main benefits of asynchronous USB inputs is to reduce jitter, and many DACs include an ASRC to reduce jitter by resampling and reclocking the data.)

    You're probably also aware that, in the context of this thread, one of the claimed benefits of R2R DACs is that they are less sensitive to jitter. In theory, this should be even more true for NOS R2R DACs, if you ignore their other shortcomings. For a given amount of absolute jitter added to the signal reaching the DAC chip itself, a "typical" R2R DAC will supposedly experience fewer and lower distortion sidebands than a typical D-S DAC. The reasoning: the amount of distortion caused by a given amount of jitter is determined by the ratio of the jitter to the clock period, and since the internal architecture of a D-S DAC operates on a much faster clock, the same absolute jitter represents a larger fraction of the clock period, and so causes a higher level of distortion. (I haven't checked all the math there - but it seems to make sense - and so may well be worth a few experiments. However, these days, virtually nobody actually tests in any detail how various DACs respond when jitter is present on their input signal.)
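    The clock-period argument can be sketched in a few lines. This is purely illustrative (the clock rates are my assumptions, not figures from any datasheet): the same 100 ps of absolute jitter, expressed as a fraction of each converter's relevant clock period.

```python
# Hypothetical comparison: the same absolute jitter as a fraction of
# each converter's clock period (illustrative clock rates assumed).
jitter_s = 100e-12  # 100 picoseconds

clocks_hz = {
    "R2R NOS @ 44.1 kHz word clock": 44_100,
    "delta-sigma modulator @ 128x (5.6448 MHz)": 44_100 * 128,
}

for name, f_hz in clocks_hz.items():
    # fraction of one clock period = jitter / (1/f) = jitter * f
    print(f"{name}: {jitter_s * f_hz:.2e} of one clock period")
```

    Whether that proportional difference actually translates into audibly different sideband levels would, as noted above, need to be measured rather than assumed.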

    However, there is a good case to be made that you won't achieve much benefit from playback equipment whose jitter is much lower than the jitter of the equipment your content was recorded on or converted with. So, in order to perform a valid test of whether 20 nanoseconds of jitter is audible, you want to start with content samples that were recorded on equipment with jitter at least 5x lower than that... However, oddly, the only early test I've seen published neglected to document in any detail where its test samples came from. I hope they weren't sourced from tape masters - because the analogue timing error (wow and flutter) on tape is far worse than that. (You cannot reach valid conclusions about what effect adding jitter will have unless you start with a known amount as a reference.)

    I should also point out that both commercial product manufacturers and recording engineers deserve a lot of the credit for the current distrust common among audiophiles.
    The reality is that many early digital recordings, including many early CDs, did NOT sound good at all... even though the recording industry insisted that they were better.
    As a result, many audiophiles simply learned to distrust claims by both the audio product industry and the recording industry about such things.
    The result was a serious credibility gap - which still exists.
    (We here all know that Red Book CDs can sound really good.... but the fact remains that many early examples did not.... for various reasons.)

     
  9. bigshot
    You should make a list of the exaggerations and lies you uncover. Just make sure you have enough yellow pads on hand.

    sidenote

    A patently absurd claim doesn't have to have thresholds engraved on stone tablets. Anyone who knows what -130dB is knows you can't hear it. That is like knowing you can't bicycle to the moon, or you can't roller skate in a buffalo herd.

    I didn't read past the first sentence. It's getting really easy to blow through his posts lately.
     
    Last edited: Aug 28, 2019
  10. sonitus mirus
    It is just marketing. Check out what Matthias Carstens at RME had to say about it.

    https://www.forum.rme-audio.de/viewtopic.php?id=26677

    "Still we will not enter this useless number throwing game. SteadyClock FS uses a 'femtosecond' clock (marketing hooray) and has less self-jitter than the former version (marketing not excited)."
     
  11. Dogmatrix
    Perhaps an example of the eternal struggle between engineering, design, and marketing. Although, in my experience, accounting generally wins.

    Full disclosure: an ADI-2 PRO by RME (the former higher-self-jitter version) has formed the hub of my Head-Fi wheel for some time.

    For the casual observer

    Microsecond definition
    A microsecond (µs) is a unit of time equal to one millionth (10⁻⁶) of a second. It is also equal to 1/1000 of a millisecond, or 1000 nanoseconds.

    Nanosecond definition
    A nanosecond (ns) is an SI unit of time equal to one billionth (10⁻⁹) of a second. The term combines the prefix nano- with the SI base unit of time, the second. A nanosecond is equal to 1000 picoseconds, or 1/1000 of a microsecond.

    Picosecond definition
    A picosecond (ps) is one trillionth (10⁻¹²) of a second, or one millionth of a microsecond. For comparison, a millisecond (ms or msec) is one thousandth of a second.

    Femtosecond definition
    A femtosecond (fs) is the SI unit of time equal to 10⁻¹⁵, or 1/1,000,000,000,000,000, of a second; that is, one quadrillionth, or one millionth of one billionth, of a second. For context, a femtosecond is to a second as a second is to about 31.71 million years; in one femtosecond, a ray of light travels approximately 0.3 µm (micrometres).
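    For anyone who prefers code to prose, the prefixes above can be sanity-checked in a few lines (a trivial sketch of my own; `math.isclose` is used because exact float equality is unreliable):

```python
import math

# The SI prefixes from the definitions above, as fractions of a second:
ms, us, ns, ps, femto = 1e-3, 1e-6, 1e-9, 1e-12, 1e-15

assert math.isclose(us, ms / 1000)                          # 1 us = 1/1000 ms
assert math.isclose(ns, us / 1000) and math.isclose(ns, 1000 * ps)
assert math.isclose(femto, ps / 1000)                       # 1 fs = 1/1000 ps

# Distance light travels in one femtosecond (c ~ 3.0e8 m/s):
print(3.0e8 * 1e-15)  # ~ 3e-7 m, i.e. roughly 0.3 micrometres
```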
     
    Last edited: Aug 29, 2019
    Baldr likes this.
  12. SilentNote
    But I’m Flash. When I travel at light speed, femtoseconds are too long (and audible).
     
  13. bigshot
    And we watch movies at a sampling rate of 24 frames per second.
     
  14. KeithEmo
    You missed Picoseconds.....
    A picosecond is 1/1000 of a nanosecond.

    That is the term you're most likely to encounter these days.
    (Many current products designed to reduce jitter claim specs between a few picoseconds and a few hundred picoseconds... whether you believe it matters or not.)

    And, yes, we are talking about very small numbers here.

    However, in this context the actual size of the numbers, and the values they represent, are not what matters.
    All that matters, in the context of our discussion, is how those timing errors affect the audio signal we're listening to.
    A 1 picosecond error in when a song starts playing would be inconsequential.
    However, a 1 picosecond error in the timing of when a missile is launched could mean you miss your target, or hit the wrong target entirely.
    In the case of a particular DAC - all that matters is how much distortion will occur as a result of that error.
    (And it is not at all valid to assume that, because 1 picosecond is a really tiny interval of time, it will only cause a really tiny amount of distortion.)

    In the case of a DAC, the output voltage is determined by both the data itself, and the time at which it arrives.
    Potential errors in amplitude include linearity errors, offset errors, and just plain data errors.
    Most errors in the accuracy of the clock, other than very long term speed variations, which are exceptionally rare these days, are lumped under the single label of "jitter".
    And the amount and type of distortion a given amount and type of jitter will cause depends on both the content being converted and the architecture of the DAC itself.
    (The same amount of jitter could cause very different amounts or types of distortion when fed into different DACs.)
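    The content dependence can be shown numerically. Below is a small sketch of my own (not from any post here): it samples a full-scale sine at instants perturbed by Gaussian timing jitter and measures the resulting RMS error. It models only an ideal sampler, ignoring DAC architecture entirely, so it illustrates the signal-dependence point and nothing more.

```python
import math
import random

def rms_jitter_error_dbfs(f_hz, jitter_rms_s, fs=44_100, n=44_100, seed=0):
    """Estimate the RMS error (dBFS) from sampling a full-scale sine at
    instants perturbed by zero-mean Gaussian timing jitter."""
    rng = random.Random(seed)
    err2 = 0.0
    for k in range(n):
        t = k / fs
        ideal = math.sin(2 * math.pi * f_hz * t)
        jittered = math.sin(2 * math.pi * f_hz * (t + rng.gauss(0, jitter_rms_s)))
        err2 += (jittered - ideal) ** 2
    return 20 * math.log10(math.sqrt(err2 / n))

# The same 1 ns RMS of jitter, two different programme contents:
low = rms_jitter_error_dbfs(1_000, 1e-9)    # low tone: roughly -107 dBFS
high = rms_jitter_error_dbfs(20_000, 1e-9)  # high tone: roughly -81 dBFS
print(low, high)
```

    The high-frequency tone suffers about 26 dB more error from identical jitter, because the error scales with the signal's slew rate.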

    However, the linked article mentions a very important point, which I also alluded to.

    Many DAC manufacturers like to brag about their use of exceptionally accurate clocks with very low jitter.
    DACs use a variety of different types of clock circuits - and many use a modular "clock chip" that is simply purchased from an outside vendor.
    And, as you might expect, the cost of a clock chip or circuit is generally proportional to its specifications...
    Many high-end DACs these days use clock chips with jitter specs at or below one picosecond.
    ("Femto-clock" is simply a marketing term for "a clock whose jitter is measured in femtoseconds" - in other words "a really small amount".)

    The catch is that the amount of jitter in other parts of the circuit depends more on the design of the rest of the circuitry than on the clock itself.
    In most situations, the amount of jitter created by the clock chip sets the lower limit, and the amount of jitter increases as the clock passes through more circuitry.
    (The amount of jitter can increase significantly even after simply passing through a few inches of circuit board trace - if that trace is poorly laid out.)
    As a result, the fact that a certain DAC uses "a clock with less than 1 picosecond of jitter" does not indicate how much jitter will be present at other points in the circuit.
    And, as the author of that article said, in terms of distortion in the analog signal, ALL that matters is the amount of jitter that directly affects the conversion process.
    This includes the actual jitter present in the data when it reaches the DAC chip... and the jitter present on any clocks directly used in the conversion process.
    (So a DAC that uses "a femto-clock" COULD have very low levels of jitter where jitter is critical... but is actually not guaranteed to... that depends on the rest of the design.)

    From a purely practical (and marketing) perspective....
    Actually measuring very low levels of jitter is very difficult and requires very expensive equipment.
    (For example, actually measuring a few picoseconds of jitter on the input pin of the DAC chip in your product.)
    But reprinting an impressive specification from the spec sheet of the clock chip your product incorporates - brand and part number included - is simple.
    (So, as a manufacturer, you spend a few extra $$$ for a clock chip with a spec of "jitter < 1 picosecond", and then you get to brag about it.)
    (And, yes, if next year's model uses a slightly more expensive chip, with better specs, they are going to brag about that too.)

     
    Last edited: Aug 29, 2019
    Dogmatrix likes this.
  15. bigshot
    In the real world, jitter is an irrelevant spec. It's pure sales pitch. I seriously doubt if there's ever been any home audio component with audible jitter. Only salesmen and suckers care about jitter ratings.
     