Why would 24 bit / 192 khz flac sound any better than 16 bit / 44.1 khz flac if both are lossless (if at all)?

Discussion in 'Sound Science' started by thesuperguy, Mar 15, 2014.
  1. old tech
    And I always believed that clocking inaccuracies are more likely with external clocks...
     
    As for the old jitter chestnut, is that really an issue today? Was it ever an issue, outside the dawn of digital audio, for most well-implemented converters?
     
    I am always amused by those claiming to hear jitter, as they are usually the same people espousing the virtues of analog equipment, yet seemingly deaf to the audible issues of wow, flutter, rumble, etc.
     
    LajostheHun likes this.
  2. Arpiben

    I have to agree with you, in the sense that I haven't found papers addressing jitter's audibility thresholds.
    As a starting point I have J. Dunn's value of 100 ns for frequencies below 100 Hz...
    Thanks for your point.
     
  3. Arpiben

    Gregorio's post and link show some drawbacks of external reference clocks, as well as their usefulness when dealing with large chains/numbers of ADCs, DACs, etc.

    Summarising: I assume that with one or a small number of ADCs you gain no benefit from caesium/rubidium clocks, whereas with a certain number of clock-recovering devices an external reference becomes desirable, if not mandatory.

    In telecom networks the recovered clock needs to be regenerated roughly after every 20 pieces of equipment; otherwise you degrade the wander (jitter below 10 Hz).

    I assume a broadly similar behaviour in audio.

    There remains the question of the audibility of these issues, as per nick_Charles's post.
     
  4. gregorio
     
    1. Starting about 20 years ago, there was a big thing in the pro audio community about external clocking, largely due to the misleading marketing claims by one of the most respected high-end manufacturers of pro ADCs at the time. The whole thing culminated in probably the most infamous public argument in the history of the pro audio community, as it was extremely acrimonious and involved a considerable number of the biggest/most influential names in the business (Bob Katz, Bob Ohlsson, Dan Lavry, Nika Aldrich and a whole slew of others). That aside, here are the basic rules of studio/pro clocking:
     
    Regardless of how good or expensive it is, an external clock will not improve the performance of a pro ADC. An external clock will degrade the performance of an ADC or, in the very best case, make no difference.

    When linking digital audio equipment together (say more than one ADC, a digital mixer, etc.), a master-clock source is absolutely required, otherwise the system simply won't work. In general, the best source for this distributed master clock is again the internal clock of the ADC.

    There are some exceptions, however: some complex studio topologies, and most scenarios where audio and video require synchronisation, for example. In these cases an external master clock may be beneficial or even unavoidable, but even so, there is no advantage to those ridiculously expensive clocks over far cheaper alternatives, because the clock signals they produce are never used directly by ADCs. Even in this scenario, the ADC still uses its internal clock, which in effect regenerates the external clock source. The determining factor is the quality of the ADC's clock recovery/regeneration topology, not the accuracy of the external clock. A decent $200 external master clock will end up with a regenerated clock signal in the ADC which is no less (or more) accurate than if the external master clock were a $10k+ atomic clock!
     
    2. The only published study figure I've seen indicates a threshold of audibility (with music material rather than test signals) of 20 ns. I can't remember where I saw it, though, and I also believe it's been disputed (with claims the figure should be significantly higher). Even accepting this 20 ns figure as the audibility threshold, that's still several hundred times more jitter than modern pro ADCs produce!
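    A quick back-of-the-envelope check of that margin, using the round figures quoted in this thread (assumed values for illustration, not from the study itself):

```python
# Quoted audibility threshold vs typical modern pro ADC jitter.
threshold_s = 20e-9      # ~20 ns claimed (and disputed) audibility threshold
adc_jitter_s = 100e-12   # ~100 ps, a typical pro ADC spec mentioned in this thread
margin = threshold_s / adc_jitter_s
print(margin)  # 200.0 -> a "several hundred times" safety margin
```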
     
     
    Oh dear! Are you really claiming to have been taught by the legendary mastering engineer Bob Ludwig? If so, I'm going to call you out on that lie too! Bob Ludwig knows his stuff and while I can't quote him directly, I can quote him indirectly (from another legendary mastering engineer, Bob Ohlsson) from over a dozen years ago:
     
    "Bob Ludwig made it very clear during his workshop the other day at AES that what sounds best to him in his room is using the internal clock of an A to D and, for playback only, clocking the entire system off the internal clock of the D to A converter. He spoke very highly of using the Big Ben for locking to video but never suggested he uses it in place of an internal clock when that's possible."
     
    It's not plausible that you even have a college-level education in digital audio, let alone were personally "drilled" by one of the industry's greats. Why do you persist in these lies, especially such obvious lies? I can't see how being exposed as a persistent liar is of any benefit to you, and your lies are certainly of no benefit to anyone here, so for your own (and everyone else's) sake, please STOP!
     
    G
     
  5. castleofargh Contributor
    my very limited understanding about external clocks aligns with gregorio. to me it's a tool to match different devices when timing is important, and that's it. not a way to "upgrade" the ADC clock.
     
    about jitter, there are all sorts of values that came out of more or less serious studies, and that's to be expected, as jitter isn't just one constant thing with one single cause. most studies would manufacture the jitter and/or the test signal, making it a special case within a special case. with musical content, people have a lot less sensitivity to jitter. well, they have a lot less sensitivity to everything ^_^.
     
     
    now the modo part.
    drtechno's post didn't claim that he worked or learned directly under those guys. so as long as he doesn't explicitly say so, I don't appreciate seeing him treated as a liar. at least not over something unclear like this. again, argue the claims, not the people.
    about the external clock, trying to play devil's advocate here: is it possible that Ludwig changed his mind over the years on the matter? only fools never change their mind. or that doc got his information from a bad source? or maybe he mixed up his memories (I have a few personal anecdotes on the matter that nobody cares about).
     
  6. spruce music
    https://www.researchgate.net/publication/242508896_Detection_threshold_for_distortions_due_to_jitter_on_digital_audio
     
    You can get the entire article, done in 2005, here. This one used music and 2AFC blind testing. Some people reliably detected 500 ns of random jitter; most couldn't. No one could detect 250 ns of random jitter.
     
    I believe Eric Benjamin and Benjamin Gannon, working for Dolby Labs, got different results with a different methodology. They used non-random jitter at something like 1500 Hz and 1800 Hz; non-random jitter would of course be more audible. Using high-level 17 kHz test tones, some people heard jitter at 10 ns (jitter makes more difference at higher frequencies).
     
    Jitter of 121 picoseconds on a 20 kHz tone at maximum level could alter the LSB for the Red Book format. That doesn't mean it would be audible, but it changes the sample value. Lower frequencies are proportionally less affected, and lower-level tones also need more jitter to alter the LSB. So low-frequency, low-level sounds need lots of jitter to even change the sample values, much less change them enough to be heard.
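    That 121 ps figure can be reproduced with a short sketch (the function name here is mine, for illustration): the worst-case amplitude error from a timing error dt on a full-scale sine occurs at the zero crossing, where the slope is maximal at 2*pi*f*A, so the error reaches half an LSB when dt = 0.5 / (2*pi*f*2^(bits-1)).

```python
import math

def max_jitter_for_half_lsb(f_hz, bits):
    # Peak amplitude in LSB units, e.g. 32768 for 16-bit Red Book.
    full_scale = 2 ** (bits - 1)
    # Worst-case error is 2*pi*f*A*dt; solve for dt at a half-LSB error.
    return 0.5 / (2 * math.pi * f_hz * full_scale)

dt = max_jitter_for_half_lsb(20_000, 16)
print(f"{dt * 1e12:.0f} ps")  # ~121 ps, matching the figure above
```

    Running the same function at lower frequencies or fewer bits shows the proportionality claimed above: a 1 kHz tone tolerates 20 times more jitter before its sample values change.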
     
    Very inexpensive consumer gear typically has jitter levels below 500 picoseconds.  Jitter just isn't a problem. 
     
  7. gregorio
     
    1. That's my understanding. More importantly though, it's the understanding of the industry. For at least 15 years or so, most/all pro ADC manufacturers have been attenuating jitter noise in the higher frequency band.
     
    2. I don't know much about the jitter specs of inexpensive consumer gear. Pro audio ADCs typically produce less than 100ps of jitter and figures near half of that are not at all uncommon.
     


     
    1. He did explicitly say so: "Yes clocking is very important as that was drilled into my head by Mr. Ludwig and Mr. Williams about 20 years ago to use an external clock." - "drilled into my head" means to repeat the information numerous times until understanding/acceptance is certain. Drilling something into someone's head takes time and persistence and he is clear that this "drilling" was done to him by Ludwig (and Williams). drtechno's statement was about as unambiguous as I can imagine.
     
    2. It's certainly possible that Ludwig changed his mind on the issue. However, if that is the case it would have to have been in response to some change which occurred in the manufacture of pro ADCs. It is possible that an external clock could improve the performance of an ADC but the ADC would need to have a very poor quality internal clock, a high quality clock recovery mechanism and both internal and external clock sources would need to be routed through that recovery mechanism. I know of no pro ADCs which fulfil these requirements since I've been in the business, although maybe some did before then, which could account for a change of mind (if there ever were one) but that would be outside the time-frame described by drtechno! It is possible, though very unlikely, that Ludwig was at one time simply mistaken but I find that very hard to believe.
    2b. No. It's information posted by Bob Ohlsson personally (another of the most respected/influential mastering engineers) in 2004.
    2c. Bob Ohlsson was posting about an AES event he was present at, just a few days prior to his post, so the memories were pretty fresh.
     
    G
     
  8. Arpiben
    1. Jitter makes more difference at high frequencies

    True by definition of jitter when dealing with ADCs/DACs...
    Jitter concerns frequency components above 10 Hz.
    Therefore jitter characterises short-term variations (t < 0.1 s).
    By reducing jitter at high frequencies you improve the accuracy of the sampling instants.
    ADCs/DACs need short-term stability from their clocks.

    On the other hand, wander (jitter at f < 10 Hz) characterises long-term stability.
    For wander, it is the low frequencies that make more difference.

    Atomic clocks have long-term stability; there the time window is counted in days (around 200 ns over 3 days).
    Short-term jitter is around 1 ns over a 1 s window.
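    For illustration only, using the post's round numbers (a crude figure of merit, not a proper Allan deviation), the implied fractional frequency stability is the accumulated time error divided by the observation window:

```python
def fractional_stability(time_error_s, window_s):
    # Crude figure of merit: accumulated time error / observation window.
    return time_error_s / window_s

long_term = fractional_stability(200e-9, 3 * 86400)  # ~7.7e-13 over 3 days
short_term = fractional_stability(1e-9, 1.0)         # 1e-9 over 1 s
print(f"{long_term:.1e} {short_term:.1e}")
```

    This shows why atomic references are prized for long-term (days) stability, while the short-term stability that ADC/DAC sampling actually depends on is a different specification altogether.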

    Edited:
    As mentioned by spruce music and gregorio, with proper ADC clocks, jitter falls into the picosecond range.
    Conversely, even with a low-phase-noise (low-jitter) external atomic clock, you destroy its benefits through the cables and the PLLs dividing it down to your sampling frequencies.
     
  9. castleofargh Contributor
    /!\ this is casual audio talk from a distant future /!\
     
    I wish I could listen to music the way the artist intended, but atomic clocks have so much jitter, the trebles just feel artificial.
    portable devices will never sound as good as a home system, it's pretty obvious that when I'm altering the flow of time around me by walking down the street, my DAP and my head don't follow the same movement, that's sure to create jitter.
    real audiophiles put the playback system at the same level as their head. I couldn't stand the pitch error from the variation of altitude between my head and my desk.

     
    headdict likes this.
  10. drtechno
     
    The only issue I ever had with an atomic clock and a Trinity clock divider was an install issue that I solved by adding a clock distribution amp. The signal out of the Trinity is 3 V p-p and the converters in that installation wanted a 5 V clock, and it had the jitters until I put the distro amp in service. Jitter is not really a big issue on newer converters, as there have been advancements in that part of the technology. However, I'm talking about the $2200-$16000 type that are advanced (high-end tracking and mastering class).
     
  11. Arpiben
    When dealing with clock distribution (atomic, OCXO or whatever), you need to avoid:
    1. any loop possibilities,
    2. any excessive cable length (around 5 m with 75-ohm coaxial)
     
    I have no experience at all with clocking in the audio industry. But I am quite familiar with it in the telecommunications domain: microwave radio, satellites & IP networks.
     
    Thanks to @spruce music & @gregorio I received my missing clues about audible jitter & atomic clocking.
    Thanks to @drtechno I had the curiosity to dig into the pro audio industry's technical specifications for such equipment.
     
    I was quite surprised not to find any relevant information regarding what really matters:
    1. the output user clocks: 44.1 kHz / 48 kHz / 192 kHz, etc.
     
    Instead I found, IMHO, lots of data at f = 10 MHz that is either not useful or not well understood:
    1. phase noise / low phase noise
    2. short-term stability, without mention of Allan deviation or its variants (TDEV, MDEV, etc.)
    3. long-term stability, without mention of MTIE (Maximum Time Interval Error)
    4. aging, drift, etc.
     
    As a reminder, phase noise (dBc/Hz) and jitter (seconds) are equivalent descriptions of the same phenomenon: phase noise is measured in the frequency domain, while jitter is expressed in the time domain.
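    As a hypothetical illustration of that equivalence (the numbers below are made up for the example, not a real clock spec), RMS jitter can be obtained by integrating the phase-noise power over the offset band: t_rms = sqrt(2 * integral of 10^(L(f)/10) df) / (2*pi*f0). Assuming a flat -120 dBc/Hz floor from 10 Hz to 100 kHz on a 10 MHz reference:

```python
import math

def rms_jitter(l_dbc_hz, f_lo, f_hi, f0):
    # For a flat phase-noise floor the integral is just density * bandwidth.
    power = 10 ** (l_dbc_hz / 10) * (f_hi - f_lo)
    # Standard conversion from integrated phase noise to RMS time jitter.
    return math.sqrt(2 * power) / (2 * math.pi * f0)

tj = rms_jitter(-120, 10, 100e3, 10e6)
print(f"{tj * 1e12:.1f} ps")  # ~7.1 ps for these assumed figures
```

    Note that the result depends on the integration band chosen, which is exactly why datasheet phase-noise numbers without a stated bandwidth are hard to interpret.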
     
    Anyway, relevant information is also missing in lots of other audio (or non-audio) fields: amplifiers, etc.
     
    Thanks.
     