Testing audiophile claims and myths

Discussion in 'Sound Science' started by prog rock man, May 3, 2010.
  1. dprimary
    In the early 80's, what other options did they have? U-Matic was a robust broadcast standard at the time. A U-Matic tape machine was not exactly cheap - about the price of a luxury car - and then there was the tape. A 1 gig hard drive weighed 550 pounds and cost around $90,000, and chances are if you shipped it you would lose the data. My first 1 gig drive cost $1,100 in the mid 90's; within a few years you could get ten times the storage at one third the price. With it I could edit one album: I would transfer it in real time off of DAT, edit it, and transfer it back.
  2. castleofargh Contributor
    Isn't a signal going above the original limit just evidence of non-linear behavior? To make things worse, this will manifest differently at different frequencies (for purely mechanical reasons). To me, your argument is like saying that we can get 130 dB of dynamic range on a CD if we just include an expander DSP as a standard feature. It's not false, but the concept of fidelity goes out the window.
  3. KeithEmo
    You're quite right.... using a 1 GB hard drive to ship content for a CD would have been impractical back in those days.

    The practical solution would have been to use the same workflow as they used for vinyl...
    Ship the master tape to the location of the mastering lathe (or CD mastering equipment) and convert it there.
    In fact, you can pretty well substitute "CD mastering device" for "record mastering lathe" in the description, and you're done.

    There's something else that really needs to be mentioned.
    It is true that the U-Matic recorders used by studios in the 1980's were designed to work at certain sample rates.
    That is because they were designed to store a specific amount of digital audio data per line and per video frame.
    HOWEVER, tape drives were available at that time that were designed to record and store computer data.
    Those drives were still expensive - but less so than hard drives.
    (It would still have made the most sense to send the analog master tape to the CD mastering facility for conversion.)

    Performing the conversion at the studio - and sending a U-Matic tape to the CD mastering plant - was surely the most convenient way...
    But sending an analog master tape instead would not have been prohibitively complex or expensive.

  4. analogsurviver
    Oh dear.... 0 dB in the case of an analog record is a REFERENCE - most definitely NOT a LIMIT.
    There is no cartridge that can track the greatest amplitude any cutter system in use today can put on a master - that is even above +18 and +20 dB ref 0 dB (3.54 cm/s @ 1 kHz).
    And all of that is LINEAR - the cartridge does not compress the signal; it can only mistrack at levels above its tracking capability.

    I actually REALLY suggest you read the basics of record mastering/cutting. The world did not start with zeroes and ones.
    Last edited: Feb 4, 2019
  5. dprimary
    We did use the same workflow. You brought the stereo masters to the mastering studio, and the mastering engineer would give them the final polish to maximize fidelity for each of the release formats. After the CD's introduction you would do the 1630 master first, as it was the highest-fidelity format - the 1630 encoded onto the U-Matic tape. I cannot find any data tape format in the early 80's that could store this amount of data. The IBM 3480 only stored 200 MB, and didn't come out until 1985... 5 years too late. DEC's formats were even smaller. The DLT could store it in 1989, however that relied on hardware data compression.

    The cassette master would be made by rolling off the low frequencies and compressing the dynamic range. It went to 1/4" open reel; for some reason I think it might have been recorded at a lower speed, like 7-1/2 IPS.

    Then you would cut the lacquers: 7", LP and EP. The 12" EP had the highest fidelity, since they are 45 RPM with wider grooves. The helium-cooled Neumann cutting heads had a response to 16 kHz, and later models to 20 kHz. Ortofon had one that could go to 25 kHz, however it was not as robust as the Neumanns; it is likely there are no working Ortofons left. To cut the lacquers, the master was LF rolled off and summed to mono below 100 Hz. You can cut up to 50 kHz running at half speed, however the trade-off is even worse low-frequency response. You have to compress the dynamic range, and you have hard limiters to keep from cutting too wide or deep - and don't forget it is EQ'ed for the RIAA pre-emphasis curve or the IEC curve, which are not the same. Recording for LP release or CD release changes your production style; you can't expect an LP to perform like a CD. For example, on records you duck the bass level off the kick-drum beats - you can't have that much LF and keep it in the groove.

    You have a heated stylus cutting the lacquer, driven by helium-cooled drive coils with as much as 500 watts each going into them. The lacquer that chips off as it is cut is extremely flammable, and if the chips get back onto the lacquer it is ruined. If you want to hear what it might sound like, you have to cut an acetate, which can only be played a few times - and you hope the lacquer sounds the same. The lacquer is sent to the pressing plant, where it is plated to make the first plate; this process destroys the lacquer. Pretty much the Rube Goldberg of audio. All that work, and it doesn't sound anything like the 1/2" analog stereo master you started with. The 1/2" master doesn't sound anything like the mix from the console off the 2" 24-track, which in turn barely resembles what came out of the microphones.
    PhonoPhi and gregorio like this.
  6. gregorio
    1. No, the fly is in YOUR ointment ... and it's a very big fly! However, it's not "the usual mistake" because there is NO professional recording engineer who would EVER make that mistake, which is why I'm calling BS on your claim of being a recording engineer!!!!

    2. And here we have it, the mistake that not even a rookie student should make: You are comparing two DIFFERENT and UNRELATED dB scales: the dBFS scale of digital audio with the dBVU scale of analogue. There is/was no standard in music recording for aligning these two scales but in film and TV there is: -20dBFS is calibrated to 0dBVU. Therefore, your +18dB(VU) would be roughly equivalent to -2dB(FS) and consequently, the rest of your post is utter nonsense! The SNR of vinyl is NOT 60dB +18dB, there is no +18dB! Not to mention that peak or quasi-peak level has NOTHING to do with signal to noise ratio anyway.

    3. What truth are you telling, the truth based on you not understanding the fundamental basics of the scales used for measuring signal amplitude in analogue and digital recordings?
    3a. So you're saying that only a handful of cartridges would be able to play back a level of -2dB(FS) without severe distortion. As digital can go up to 0dBFS (2dB higher than vinyl) with no audible distortion at all, how is vinyl better than digital? Are you saying that severe distortion is "better" than no audible distortion?

    Unfortunately, even in this sub-forum, there is no rule against posting utter ignorant nonsense and claiming that it's the "truth". So we have no grounds to call for you to be banned, the best we can do is demonstrate/explain that it is utter ignorant nonsense!

  7. dprimary
    0 dBu is about 18 dB down from 0 dBFS, depending on your calibration
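The relationship dprimary describes is a fixed offset once a calibration point is chosen. As a minimal sketch (the -18 dB alignment is just one common choice; as noted elsewhere in the thread, other calibrations are also in use):

```python
# Convert between analogue dBu and digital dBFS levels, given a
# calibration point. The -18 dB alignment below is an assumption
# for illustration; facilities calibrate anywhere from roughly
# -14 dBFS to -24 dBFS for the analogue reference level.

CALIBRATION_DBFS_AT_0DBU = -18.0

def dbu_to_dbfs(level_dbu):
    """Analogue level in dBu -> digital level in dBFS."""
    return level_dbu + CALIBRATION_DBFS_AT_0DBU

def dbfs_to_dbu(level_dbfs):
    """Digital level in dBFS -> analogue level in dBu."""
    return level_dbfs - CALIBRATION_DBFS_AT_0DBU

# With this alignment, a +18 dBu analogue peak just reaches full scale:
print(dbu_to_dbfs(18.0))  # 0.0
print(dbfs_to_dbu(0.0))   # 18.0
```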
  8. castleofargh Contributor
    All right, my bad - I misunderstood your post. I thought you were considering playback amplitudes compared to the max amplitude from the source, which would be our reference, and I imagined that was where you saw your extra dynamic range. Now I'm wondering what you meant. Is it about being able to record with a VU meter above 0? If so, that's utterly inconsequential, as you can just change your reference - the total actual dynamic range remains the same. Or am I again misunderstanding what you're talking about?
  9. analogsurviver
    I just saw @dprimary actually has experience with analog record cutting.

    Just above, he said (quote):

    0 dBu is about 18 dB down from 0 dBFS, depending on your calibration

    That covers the recording levels for normal cutting. Normal record maximum S/N is therefore 60 dB ref 0dBu - overall, with the maximum level of +18dB, that S/N amounts to 78 dB.

    Absolute levels can remain (in theory) the same for noise-reduction-encoded records. Add about 20 dB to the 78 already present - voila, 98 dB S/N.

    As for the CX, the CBS circuit from the patent hard-clips at just above +15 dBu - meaning that optimally cut CX-encoded records are unfortunately distorted/compressed in the last 3 dB of their dynamic range. The ICs readily available in the USA at the time did not allow for the greater voltage amplitude required - and they, obviously, either did not know of or did not want to use the Japanese ICs that, even back then, had been specifically developed for phono preamps running at more than ±15 VDC rails.
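Taken at face value, the arithmetic in this post reduces to a few additions. All inputs below are the post's own claimed figures, not measurements - and note that gregorio argues earlier in the thread that peak headroom cannot legitimately be added to S/N at all:

```python
# Sketch of the S/N arithmetic claimed in the post above. Every
# input is the post's own asserted figure, not a measurement.
baseline_snr_db = 60.0  # claimed S/N of a normal cut, ref 0 dBu
headroom_db = 18.0      # claimed maximum level above 0 dBu
nr_gain_db = 20.0       # claimed benefit of noise-reduction encoding

snr_peak_ref = baseline_snr_db + headroom_db
print(snr_peak_ref)               # 78.0
print(snr_peak_ref + nr_gain_db)  # 98.0
```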
  10. analogsurviver
    Sounds about correct.

    I am happy to record 2-channel direct to DSD128 - next to no difference, but I still want to close the gap between the mike feed and the recording...
  11. dprimary
    Unfortunately, manufacturers still get this wrong as well. I expect analog line-level inputs to handle at least +24 dBu. From what I measured years ago, 0 dBFS can be over 15 volts peak to peak. Manufacturers look at the output of a consumer CD player, which specifies some nonsense like a 2 V nominal output (which nobody knows what that is supposed to mean). Hit that input with a professional CD player or digital console and you have a distortion wonderland. With consoles you can at least reduce the master fader; with a CD player, if you don't have a 20 dB pad you're stuck.
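A quick sketch of the voltages involved shows why a consumer-spec input clips on professional sources. The 0 dBFS = +18 dBu calibration used here is an assumption for illustration; as discussed above, calibrations vary:

```python
# Sketch of why a consumer-spec input clips on professional sources:
# dBu -> RMS volts -> peak-to-peak volts for a sine wave. The
# 0 dBFS = +18 dBu calibration is an assumption for illustration.

def dbu_to_vrms(dbu):
    """dBu to RMS volts (0 dBu = 0.775 V)."""
    return 0.775 * 10 ** (dbu / 20)

def vrms_to_vpp(vrms):
    """RMS to peak-to-peak for a sine: Vpp = 2 * sqrt(2) * Vrms."""
    return 2 * 2 ** 0.5 * vrms

# A pro device whose 0 dBFS sits at +18 dBu swings about 17.4 V p-p,
# consistent with "over 15 volts peak to peak":
print(round(vrms_to_vpp(dbu_to_vrms(18.0)), 1))  # 17.4
# An input designed around a 2 V RMS consumer source expects far less:
print(round(vrms_to_vpp(2.0), 1))  # 5.7
```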
  12. Don Hills
    As an aside, do you know why the amplifiers had to be capable of so much output?
  13. analogsurviver
    To be able to cut the high-frequency information correctly - uncompressed, not bandwidth-limited. In fact, the calculated requirement to cut at 100 cm/s (above "theoretical", but below actual recorded velocities from real-world top-quality records) goes up to just below 1 kW per channel - meaning 2 kW of peak power is required to drive the cutting stylus.


    It takes great skill - and cojones - to work with the cutting head at the very upper limit; exceed it and the coils burn out... Back in the day, that meant a big check to Neumann or Ortofon for a replacement head (if you wanted it ASAP), or a not much smaller check if you were willing to wait for the damaged head to be repaired.

    Today, neither Neumann nor Ortofon supports cutting heads any longer - or record mastering equipment in general. There are no NOS cutting heads left anywhere in the world. Some former employees, not yet too old to support the market, have taken over this task. And they are overbooked like crazy....

    No wonder today's record mastering engineers have to be more conservative in their approach: burning out the cutter head can mean 6 months (!) or more of waiting before the repaired head comes back. Therefore they use compression, peak limiting, frequency-response limiting - and any other sound-quality-degrading unmentionables of the trade - that will, first and foremost, make sure the cutting head is NEVER jeopardized.

    No one can afford half a year of doing nothing...

    Half-speed mastering has the benefit of requiring only 1/4 of the power needed for mastering in real time. That means 250 W/ch amps will suffice - and for any disc-cutting amp, that means loafing. Also, there is no danger to the cutting head - nor any requirement to use helium for cooling.
    The drawback is a problematic low end - it requires the entire chain to work flawlessly down to at least 10 Hz, which proved too tough a nut to crack. Even the pioneer of half-speed mastering, the sadly late Stan Ricker, resorted to 2/3-speed mastering in the end - the best compromise under the given circumstances.
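The power figures in this post are consistent with drive power scaling roughly as the square of stylus velocity, so halving the cutting speed quarters the required power. A sketch, using the ~1 kW real-time figure quoted earlier in the post as the assumed baseline:

```python
# Sketch of the power scaling implied above: drive power into the
# cutter coils scales roughly with the square of stylus velocity,
# so cutting at half speed quarters the required power. The 1 kW
# real-time figure is the post's own estimate, not a measurement.

def power_at_speed(full_speed_power_w, speed_ratio):
    """Approximate coil power when cutting at a fraction of real time."""
    return full_speed_power_w * speed_ratio ** 2

print(power_at_speed(1000.0, 0.5))        # 250.0 (half speed)
print(power_at_speed(1000.0, 2.0 / 3.0))  # ~444 (Stan Ricker's 2/3 speed)
```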

    More about mastering is nowhere better explained than here ( there are 3 parts ) : http://www.enjoythemusic.com/magazine/rickerinterview/ricker1.htm
    Last edited: Feb 4, 2019
  14. gregorio
    1. Yes, and that does vary with the music; there really isn't a standard. I've seen prosumer ADCs with a fixed calibration at -14dBFS = 0dBVU = +4dBu = 1.228 volts. However, others are -16dBFS or -18dBFS (= +4dBu = 0dBVU), and the higher-end professional converters used by commercial studios have adjustable calibration, which I've seen set as low as -22dBFS (= +4dBu, so 0dBu = -26dBFS); that was used for a classical music recording, although that's unusual in my experience. As mentioned, in TV/Film there is a fixed standard: 0dBu = -24dBFS (specifically: -20dBFS = 0dBVU = +4dBu = 1.228v). Therefore, analogsurviver's assertion is nonsense: digital can (and always does) exceed 0dB(u), typically by at least +18dB but up to as much as +26dB.

    You seem to know what you're talking about but for others (PARTICULARLY @analogsurviver!) here's a basic primer on the subject: http://www.lavryengineering.com/wiki/index.php/DB

    2. While manufacturers do get this wrong, there is also the fact that we're not only dealing with two different scales for digital and analogue, but also with different scales just for analogue! We have pro analogue "line level" (+4dBu = 1.228v) and consumer "line level" (-10dBV, where 0dBV = 1 volt) - two different scales with yet again different 0dB reference voltages. Pro analogue line level works out to be about 11.8dB higher than consumer line level. As you say though, consumer equipment manufacturers don't necessarily stick to consumer line level (nominal 1v output). Then of course we've got the dBVU scale, which is what we have on the meters of most pro audio analogue equipment. This is all rather confusing for the uninitiated, but it's basic audio 101 for student recording engineers - and analogsurviver has claimed to be a professional recording engineer?!!
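The voltage figures here follow directly from the dB definitions (0 dBu = 0.775 V, 0 dBV = 1 V). A short sketch confirming that the pro/consumer line-level gap is a voltage ratio of about 11.8 dB:

```python
import math

# Reference voltages by definition: 0 dBu = 0.775 V, 0 dBV = 1.000 V.
# Pro line level is +4 dBu; consumer line level is -10 dBV.

def dbu_to_volts(dbu):
    return 0.775 * 10 ** (dbu / 20)

def dbv_to_volts(dbv):
    return 1.0 * 10 ** (dbv / 20)

pro = dbu_to_volts(4.0)         # ~1.228 V
consumer = dbv_to_volts(-10.0)  # ~0.316 V
gap_db = 20 * math.log10(pro / consumer)

print(round(pro, 3), round(consumer, 3), round(gap_db, 1))  # 1.228 0.316 11.8
```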

  15. Don Hills
    You imply that you have a "state of the art" LP reproduction system. Do you also have the means (an ADC) to record from this system? It doesn't have to be better than RBCD quality. What I'd really like is a minute or so of the 1 kHz track off a test record. It's for the "party trick" I used to do back in the 80s(*). After the host had finished demonstrating their high-end turntable setup, I would play the tone from a test record and then from a test CD. Then I would point out that the imperfections audible on the LP (ignoring any ticks and pops) were present in equal measure in all music played on that turntable, even if you couldn't hear them (for good psychoacoustic reasons).
    The best turntables could perform this test quite well, especially for the first few seconds. But after a little time listening, they ultimately failed to achieve the "solidity" of the CD tone. A common comment was that the LP sounded more "lifelike". This was understandable: the sound from some human-played instruments (e.g. flute) often exhibits the same sorts of micro-variations in pitch and level. The point is, of course, that a 1 kHz test tone should not exhibit such variations.

    (*) At the time, I had no way to make a good digital recording of the turntable output. Now I have the means but no good turntable...

    Note that this isn't a trick or a setup against you. I'd like it to show to people who, unlike you as quoted above, refuse to "acknowledge the limitations and imperfections of analog record vs even - brrr - RBCD." It's also an easy test for any vinylphile with a test LP to perform for themselves. It doesn't require exact level matching or blind protocols, the differences are clear.
    bfreedma and gregorio like this.
