KeithEmo
Member of the Trade: Emotiva
According to Philips, they were the only company to INITIALLY use oversampling in their CD players.
However, as per their description, it was quickly adopted by everyone else.
It was never part of the standard or specified by the standard.
So, at best, by choosing a sample rate of 44.1k, they created a practical design problem for which they already had a solution in mind.
(This isn't especially terrible... and one might even suggest that Philips strategically wanted everyone else to be "playing catch-up".)
However, my point remains....
It is generally not a good idea to create a standard in such a way that it is likely to be implemented poorly in commercial products designed using current technology.
It's a sort of "recipe for disaster" if a bunch of commercial products are released that claim to support your standard but don't actually work very well.
(You simply end up with a public perception that your standard doesn't work very well.... note how many audiophiles complained about "the poor sound quality of early CDs".)
To quote Philips:
"However Philips’ oversampling technology, originally born out of the necessity to use the early 14 bit D/A converters, and dismissed as a 'technical joke’ by other manufacturers who believed that a true 16 bit D/A converter followed by a steep analogue filter was the only way to go, was quickly embraced by most manufacturers of CD players. Because it meant there was no need to use highly complex analogue filters, while at the same time it allowed the often serious non-linearities of the D/A converters that were available at the time to be concealed."
( https://www.philips.com/a-w/research/technologies/cd/technology.html ).
I would also be curious to know how many of the early studio A/D converters included oversampling or not.
(I have no familiarity with any of the early ones.)
1. Unfortunately, yet another typical KeithEmo post. Yes, the CD standard does need to be put into historical context, but despite your statement you have not put it into historical context; you've created a historical context that never actually existed in order to push your "filters" agenda again!
2. No CD technology had yet been developed when the Red Book standard was created, because you obviously can't have a CD or CD player before you've created a standard that defines what a CD actually is! HOWEVER, oversampling as a technology certainly was known about and its use was envisaged.
2a. These statements are all true BUT ENTIRELY IRRELEVANT because, despite your misinformation, oversampling was developed and ubiquitously employed in CD players by the time CDs were launched to consumers. As far as I'm aware, ALL CD players, from launch day onwards, employed at least 2x oversampling.
3. Great, now we're getting somewhere. According to the actual history (that ALL CD players had oversampling) and YOUR statement that "oversampling essentially eliminates this issue entirely", then logically you must agree that the "issue" you've raised never actually existed by the time CD was launched to the public (1983) and is therefore irrelevant!
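For anyone who wants to see the mechanism rather than just the history, here's a minimal sketch (my own, using NumPy/SciPy, with arbitrary illustrative filter parameters) of why oversampling removes the need for a brick-wall analogue filter:

```python
# Zero-stuff the 44.1 kHz samples to a higher rate and low-pass them digitally,
# so the remaining spectral images sit far above the audio band. The filter
# length and cutoff below are illustrative choices, not any player's design.
import numpy as np
from scipy import signal

fs = 44100          # Red Book sample rate
L = 4               # oversampling factor (Philips' first players used 4x)

t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 1000 * t)            # 1 kHz test tone

up = np.zeros(len(x) * L)
up[::L] = x                                  # zero-stuff to 176.4 kHz
fir = signal.firwin(255, 20000, fs=fs * L)   # digital interpolation filter
y = signal.lfilter(fir, 1.0, up) * L

# Without oversampling, the analogue filter must pass 20 kHz yet reject images
# starting around 24.1 kHz. After 4x oversampling the first remaining image is
# pushed up to roughly fs*L - 22.05 kHz (~154 kHz), so a gentle analogue
# roll-off between ~20 kHz and ~150 kHz is all that's required.
```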
This statement is false. 80dB SNR + 20-25dB noise reduction does NOT result in a dynamic range of 100-105dB!! This is a classic case of "cherry picking": listing only those facts which support an agenda while omitting the other pertinent facts which contradict it.

The reality is that the original recording session tapes (with up to 80dB SNR) were, OF COURSE, NEVER RELEASED TO THE PUBLIC. What was released was a copy several generations removed: the recording session tapes had to be edited, mixed (EQ, compression, etc.) and commonly "bounced down" (recording several tracks to 1 or 2 tracks), and each of these mix processes adds noise. When the mix is complete, it's bounced down to another (final mix) tape for transfer to the mastering engineer. The mastering engineer then applies analogue processing, which adds more noise, often bounces down during the mastering process, and, when the mastering is complete, bounces the completed master down to another (master) tape. Each of these bounce downs (generations/copies) doubles the amount of tape noise, and there would have been an absolute minimum of 2 generations but probably 4 or more. Then the master tape was copied to a production master, and finally the distribution media (cassette or vinyl) was copied from the production master, so another two generations. That's a bucket load of noise added between the original session recording tapes and the final media the consumer buys, so what that 20-25dB noise reduction (or more like 15dB in the more common NR types) actually achieves is some restoration of the 80dB SNR we may have started with.
In the best theoretical case, if we were just making a test tape, we could record a test signal to tape (with say 80dB SNR), apply say 25dB noise reduction and bounce the result back down to tape. That's 1 generation of SNR loss and therefore: 80dB SNR - approx 6dB generational loss + 25dB NR = a theoretical max of roughly 100dB DR. Of course, though, we end up with just one test tape! We can only theoretically achieve this DR figure by eliminating all the editing, mixing, mastering, creation of a production master and the duplication of it to create the actual consumer product. In the real/practical world of commercial consumer audio recordings the actual equation is more like: 80dB SNR - approx 35-45dB generational loss and analogue processing noise + 15-25dB NR = a theoretical max of roughly 55-65dB DR, which is roughly 100 times less than analogsurvivor is claiming and why all his conclusions/assertions are complete nonsense! And of course, we're only considering noise and ignoring all the other non-linearities and distortions of analogue.
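To make that arithmetic explicit, here's a minimal sketch (in Python; the function and its name are mine) using the ballpark figures above. These are the approximate numbers from this post, not measurements:

```python
# Rough dynamic-range estimate: start with the tape's SNR, subtract the loss
# accumulated over the copy generations, add back the noise reduction.
def approx_delivered_dr(tape_snr_db, generations, loss_per_gen_db, nr_db):
    return tape_snr_db - generations * loss_per_gen_db + nr_db

# Best theoretical case: one test-tape bounce with 25 dB NR
print(approx_delivered_dr(80, 1, 6, 25))   # 99 -> "roughly 100 dB"

# Typical commercial chain: several bounces plus processing noise, 15 dB NR
print(approx_delivered_dr(80, 6, 6, 15))   # 59 -> in the 55-65 dB range
```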
In this sub-forum we tend to focus on the details of digital theory and its implementation. However, in the world of commercial recording studios and those who actually make the music products, the single greatest benefit and selling point of digital audio over analogue (which blew all other considerations out of the water) was the elimination of generational loss!
I'm not sure about the context of that quote. There really wasn't any digital mastering in 1985; it didn't become even a practical possibility until a decade later, and it was almost another decade before the mastering tools had improved to the point that mastering in the digital domain became a viable alternative.

Remember that, contrary to popular belief, the SPARS code (AAD, DDD, etc.) did not refer to the domain of the procedures but to the domain of what those procedures were recorded to. For example, if we record the musicians to digital recording media, mix it in the analogue domain then record that final (analogue) mix to digital, then master in the analogue domain and record the completed master to digital, the SPARS code would be "DDD" (even though it's been both mixed and mastered in the analogue domain). If we're talking about the actual processes, then with the exception of a very few classical recordings (a couple of labels had proprietary digital systems and minimal mixing and mastering), pretty much all recordings up to the mid/late 1990s should have been labelled DAA, then gradually DDA, and finally DDD would have started appearing in the early 2000s.
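As a toy illustration of that distinction (my own sketch, not anything from the SPARS documentation), the label is derived only from what each stage's result was recorded to, not from where the work was done:

```python
def spars_code(media):
    """media: what the result of (recording, mixing, mastering) was recorded TO."""
    return "".join(m[0].upper() for m in media)

# The example above: mixed and mastered in the analogue domain, but every
# stage's result was captured back onto digital tape.
process_domains = ["digital", "analogue", "analogue"]   # where the work happened
recorded_to     = ["digital", "digital", "digital"]     # what each was stored on

print(spars_code(recorded_to))       # -> "DDD" (the label on the CD)
print(spars_code(process_domains))   # -> "DAA" (what the process domains would imply)
```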
G