Are there any sonic differences between a original CD and a burnt copy?
Mar 26, 2002 at 11:58 PM Post #17 of 37
A bunch of things that are similar at first glance but really nothing like CD-Rs:

Quote:

a better digital cable


In terms of data there is no "better" unless you mean one is faulty and one isn't. There is better in terms of appearance, flexibility, durability... I won't touch jitter. But bits are still bits. They go through or they don't. Can a cable magically slow down one bit and leave another untouched? But that's jitter...

Quote:

a DIP (Monarchy, Theta) reclocker


Different purpose. A reclocker corrects the timing of the bitstream once it has been read from the CD - but if the bitstream was already 100% within the tolerance of the DAC, then it too is useless. This is about shuffling the bits in the time domain; the bits themselves are either still there or not.

Quote:

a better brand of CDRs


"better" might have to do with stability over time or even that the label is nicer to write on with a sharpie. Maybe you like to have black or silver or blue. But if they store the bits that are put on them and your CD transport can read the bits off again there is zero difference from the original. This is why we use digital systems.

Quote:

better digital connectors/jacks (including XLR jacks)


Durability, looks. They pass bits or they don't. Isn't XLR for balanced analog? That's completely different.

Quote:

decided to use coax over optical or optical over coax for any reason other than cost


If your coax transmits digital data and so does the optical, where's the difference? Data is data, and digital is not analog. It can't change just a little bit; if it changes, it stops working completely. Digital audio systems will approximate missing data when these errors occur and you will hear clicks or other glitches, but read the next paragraph carefully:

If you take a CD and make a copy of it, then burn it onto a CD-R, and verify that each bit is identical with the source, then you have a perfect copy. If you listen to the audio from each using the same mechanism that verified that the copies were identical there is no difference in the sound that comes out.
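
To make the "verify that each bit is identical" step concrete, here is a minimal sketch in Python (the filenames are just placeholders) that hashes two ripped tracks and compares the results:

Code:

import hashlib

def file_digest(path, chunk_size=1 << 20):
    """Return the SHA-256 digest of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical rips of the same track from the original CD and from the burnt copy.
original = file_digest("track01_original.wav")
copy = file_digest("track01_copy.wav")
print("bit-identical" if original == copy else "different")

Matching digests mean the two rips carry exactly the same bits, whatever media they came from.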

If you dispute that then you can dispute it with yourself. This argument has been had before, and all that was learned was that some people like to believe in fantasy and some people don't. The enjoyment of music is subjective, whereas the appraisal of equipment can be either subjective or objective. If having a fat digital cable or a certain brand of CD-R makes you think the music is different, I hope you enjoy it. It wouldn't have that effect on me.

Sorry if you don't like the part about being taken in. From my point of view that's what it is. To you there is a noticeable increased blackness to the silence, and a palpable increase in soundstage. You should feel sorry for me, because I can't hear that improvement. All I have are greenbacks, small consolation.
 
Mar 27, 2002 at 12:30 AM Post #18 of 37
Quote:

Originally posted by aeberbach
A bunch of things that are similar at first glance but really nothing like CD-Rs:

Durability, looks. They pass bits or they don't. Isn't XLR for balanced analog? That's completely different.

Sorry if you don't like the part about being taken in. From my point of view that's what it is. To you there is a noticeable increased blackness to the silence, and a palpable increase in soundstage. You should feel sorry for me, because I can't hear that improvement. All I have are greenbacks, small consolation.


There is an XLR connector for digital cables as well. (No point, just expressing why I mentioned it. There are different connector types.)

The bunch of things you listed, to me, seemed quite a bit like CDRs. As expected, you disarmed and dismissed all of them the same way you did CDRs. My purpose wasn't to criticize, or even disagree, necessarily. My purpose was to lump all things digital together because when someone chooses the religion of "bits is bits", the argument covers all of these things. Generally speaking, I think people belong to one religion or the other... either the digital cables, CDRs, connectors, reclockers, etc. matter, or they don't.

Now that that's been said, I do belong to the other religion and we can argue from there. You seemed to want to avoid the issue of jitter. I've seen the word jitter thrown around a lot and used in different contexts. For the purpose of this discussion, when I say jitter, I'll mean "errors having to do with the timing of digital data transmission." From what I've read, I think jitter probably is responsible for what I discern to be an audible difference between all of those things that the followers of my religion believe in. (Other than, as you said, the part where things work or don't work--we do believe in that part too so we're not so completely different).

So, in summary, our people believe in jitter and your peoples do not believe in the jitter. Would you say this is accurate?

My second point has to do with that thing that we both believe in--data loss. Sometimes data is lost. Data is lost on the internet. Data is lost on my hard drive. Power surge one night and corrupt sectors in the morning. It happens. Can data be lost when transferring data from one CD to a hard drive and then to another? (Keep in mind, this isn't the jitter debate, I've already confessed to my belief in that deity--we're not talking about the timing of data arriving here, we're talking about whether it arrives at all.) Let's say my original CD has a scratch. Don't CD drives have corrective measures for smoothing over the lost data?

If so, let's say my original CD has a smudge or a fingerprint. Then the corrective measures in the reading mechanism kick in. When I remove the two CDs, I have one CD that has a smudge and one brand new CDR with the alleged duplicate data. I then wipe the smudge away on the original disc. Did the corrective measures compensate in such a way as to provide a duplicate without knowing the actual data concealed by the smudge? I don't know the answer to this question. If the data could be different in this instance, why then, could it not be different by other means that affect the reading, storage and rewriting of that data?

Your premise of data is data is fine so long as the data isn't corrupted.

In my opinion, the differences between a CD and a copy of that CD have to do with both jitter/timing errors and with data corruption. I'm not a physicist and I don't claim to fully understand everything behind what happens in terms more complex than I've expressed here. But I do know this--both of our religions have many intelligent members. I would not be so quick to dismiss the followers of either.
 
Mar 27, 2002 at 1:19 AM Post #19 of 37
Data corruption, time smearing, etc. - I still think most of this stuff is bogus nonsense standards that MIT/Harvard nerds helped create for those who would profit from the exploitation of this junk. I don't deny the stuff, as it is science and probably is "proved" by electronic gizmos, but does this nano **** really make a discernible difference?

Advanced technology (precise)
or
practical enjoyment (crude)?

*sort of like paying a premium for sophisticated, superior electrostatics vs. spending what you've got on simple, quality drivers
 
Mar 27, 2002 at 1:50 AM Post #20 of 37
Perhaps I wasn't clear enough in my really long post, so here is another.

Jitter matters when you don't have flow control. As in my Toslink example, the sender is forced to send data at exactly the same rate as the receiver can consume it. Any significant variations will cause problems. Without buffers, late data causes artifacts because it didn't get to the receiver in time. Early data causes artifacts because the receiver is not ready to use it quite yet. Buffers can help smooth out jitter -- the bigger the buffer, the less effect jitter has. However, buffering has its limits: you can still overrun or underrun a buffer, and it introduces latency because you have to wait for the buffer to fill up Edit: [and for bits to travel through it].
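
To illustrate the buffering point (a toy model, not any particular player's implementation), here's a little Python simulation: a receiver drains exactly one sample per tick while a jittery sender delivers 0-2 samples per tick, and underruns show up once the jitter exceeds what the buffer can absorb.

Code:

import random
from collections import deque

def simulate(buffer_size, jitter, ticks=10_000, seed=0):
    """Count underruns when a jittery sender feeds a fixed-rate receiver through a buffer."""
    random.seed(seed)
    buf = deque([0] * buffer_size)   # pre-filling the buffer is the latency cost mentioned above
    underruns = 0
    for _ in range(ticks):
        delivered = max(0, 1 + random.randint(-jitter, jitter))  # sender timing wobbles
        for _ in range(delivered):
            if len(buf) < buffer_size:
                buf.append(0)        # samples arriving when the buffer is full are dropped
        if buf:
            buf.popleft()            # receiver consumes exactly one sample per tick
        else:
            underruns += 1           # late data: an audible artifact
    return underruns

for size in (1, 4, 16, 64):
    print(size, simulate(buffer_size=size, jitter=1))

Bigger buffers smooth over more of the wobble, at the price of the start-up latency of filling them.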

Bits are not always bits; you can get errors from EM interference, impurities in fibre or what not. The probabilities can be reduced significantly, but when an error happens you are SOL: either use the broken data, try to fix it, or drop it... unless you have flow control and error detection / correction. Even then, they only help to reduce the probability of errors getting through. In typical circumstances they are enough.
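
As a toy example of "reduce the probability of errors getting through" (a simple repetition code, nothing like the actual CIRC code on a CD), send every bit three times and take a majority vote at the other end: single flips get corrected, but two flips in the same triple still slip through.

Code:

import random

def send_with_repetition(bits, flip_prob, seed=0):
    """Send each bit 3 times over a noisy channel, decode by majority vote."""
    random.seed(seed)
    decoded = []
    for b in bits:
        received = [b ^ (random.random() < flip_prob) for _ in range(3)]
        decoded.append(1 if sum(received) >= 2 else 0)
    return decoded

bits = [random.randint(0, 1) for _ in range(10_000)]
decoded = send_with_repetition(bits, flip_prob=0.01)
errors = sum(a != b for a, b in zip(bits, decoded))
print(f"residual errors: {errors} of {len(bits)}")
# With a 1% raw flip rate you'd expect ~100 errors uncoded; the vote cuts that to a handful,
# but it is a probability game, not a guarantee.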

Why does the quality of digital^H^H^H^H^H^H^H S/PDIF cables matter? Because THERE IS NO FLOW CONTROL AND NO ERROR MANAGEMENT!!!!! You are absolutely and completely at the mercy of jitter and errors. Don't believe me? There's less than a foot of wall between my computer and the washer / dryer. Every time the motors change speeds or the buzzer goes off, my coax connection to my receiver goes absolutely bonkers!!!

The reason people prefer coax over optical is that if you're sneaky, the sender can get limited feedback from the ground wire. The sender can get a feel for the jitter on the wire and compensate by speeding up or slowing down the data flow. But for errors, even if the sender knew about them, S/PDIF doesn't support retransmits.

Now, the Ethernet line from my Audiotron to my MP3 server runs along the same path. When the EM storm hits, the collision / error indicators on my hub light up. But, because it has extra bandwidth, flow control and error detection, Edit: [it can retransmit damaged packets and refill the player's buffers before they run out].
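
A rough sketch of why that works (made-up packet size and error rate, not the actual Ethernet/TCP machinery): the receiver checks a CRC on every packet, and the sender simply re-sends anything that arrives damaged.

Code:

import random
import zlib

def transmit(payload, corrupt_prob):
    """Deliver the payload plus the sender's CRC; sometimes a byte gets mangled in flight."""
    data = bytearray(payload)
    if random.random() < corrupt_prob:
        data[random.randrange(len(data))] ^= 0xFF   # the EM storm strikes
    return bytes(data), zlib.crc32(payload)

def send_reliably(payload, corrupt_prob=0.2, max_tries=10):
    """Retransmit until the received data matches the sender's CRC."""
    for attempt in range(1, max_tries + 1):
        data, crc = transmit(payload, corrupt_prob)
        if zlib.crc32(data) == crc:
            return data, attempt
    raise RuntimeError("link too noisy, giving up")

packet = bytes(range(256)) * 4                      # a hypothetical 1 KB packet
data, attempts = send_reliably(packet)
print(f"delivered intact after {attempts} attempt(s)")

S/PDIF has neither the CRC check nor the retransmit loop, which is the whole point above.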

As long as we have to put up with a crappy interlink like S/PDIF, people will have to spend hundreds or thousands to get the same performance as two $20 Ethernet cards and $10 worth of CAT5 cable.

Edit: [To be fair to S/PDIF, since it was based on AES/EBU which was designed for studio work, they intentionally removed buffering, flow control and error management in favour of low latency & real-time performance. Studios could shell out the $$$ for the super high quality XLR differential cables and pay for the extra hardware to ensure reliable digital transmission.]

Now, with redbook audio, yes there are error detecting and error correcting codes. Add EAC-like rereading algorithms and you can get perfect reproduction off most CDs. The problems arise when you have ridiculously damaged CDs or equipment that doesn't try hard enough.
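
One detail worth spelling out: part of how the CD's error correction copes with scratches is interleaving, which spreads neighbouring symbols across the disc so a physical burst of damage turns into scattered single errors the code can fix. A toy sketch of the idea (the real CIRC interleaving is far more elaborate):

Code:

def interleave(data, depth):
    """Fill a grid row by row, read it out column by column (len(data) must divide evenly)."""
    rows = len(data) // depth
    return [data[r * depth + c] for c in range(depth) for r in range(rows)]

def deinterleave(data, depth):
    """Undo interleave(): read the columns back into rows."""
    rows = len(data) // depth
    return [data[c * rows + r] for r in range(rows) for c in range(depth)]

samples = list(range(12))
on_disc = interleave(samples, depth=4)
on_disc[5:8] = ["X", "X", "X"]                 # a scratch wipes out three neighbouring symbols
print(deinterleave(on_disc, depth=4))
# -> [0, 1, 'X', 3, 4, 5, 'X', 7, 8, 'X', 10, 11]: the burst becomes three isolated errors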

At the end of the day, both camps are right. What's really wrong are the protocols and implementation used by consumer equipment. Edit: [Now everybody say sorry and make up.]
 
Mar 27, 2002 at 12:26 PM Post #23 of 37
Quote:

If so, let's say my original CD has a smudge or a fingerprint.


This is the crux of the argument. IF you get every bit off the original and IF you put every bit on the copy - there is no difference. None. If you rely on an extraction program without error correction and detection (i.e. just about anything except EAC with a good drive), chances are you're not making a perfect copy, just an approximation. Of course an approximation sounds different.
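
The rereading idea, in sketch form (this is the spirit of secure extraction, not EAC's actual algorithm; `read_sector` is a hypothetical callable standing in for the drive):

Code:

from collections import Counter

def secure_read(read_sector, lba, min_agreeing=2, max_reads=16):
    """Re-read one audio sector until enough reads agree, instead of trusting a single pass."""
    seen = Counter()
    for _ in range(max_reads):
        data = read_sector(lba)      # may return garbage on a damaged or smudged disc
        seen[data] += 1
        if seen[data] >= min_agreeing:
            return data              # consistent result: accept it
    raise IOError(f"sector {lba}: no {min_agreeing} reads agreed, flag it rather than guess")

A plain DAE rip takes whatever the drive hands back on the first pass, interpolation and all, which is where the "approximation" comes from.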

This is why MP3s have such a bad reputation. Most people make them without EAC, and the results are well known. EAC has uncovered problems with my CDs while I was turning them into MP3s and I have replaced a couple that I could not successfully re-polish.

Point being, the fact that the copy is a CD-R does not matter, any more than a pianola roll would sound different if it were made out of plastic rather than paper. It's nothing but a medium, and does not affect the sound.

One more thing about data being lost - if it gets lost from your hard drive the effect is usually immediate and obvious, and luckily that happens very infrequently. Data generally doesn't get lost when reading from a CD, but when it does you will know about it and probably recover if you use a program like EAC. You won't know if you're relying on plain DAE. Burning the copy is less of a problem - if I can burn 700M of data onto a CD and read every bit back and view the images/run the program or whatever, I'm confident it is 100% perfect. If it isn't, the chance of the checksum failing to catch the error and tell me the data is corrupt is very small indeed.
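
For a sense of "very small indeed": assuming the verification uses a 32-bit checksum, a corruption that happens to produce the same checksum value is roughly a one-in-four-billion event.

Code:

# Rough odds of a corruption slipping past a 32-bit checksum unnoticed.
p_undetected = 1 / 2**32
print(f"{p_undetected:.2e}")   # ~2.3e-10, about 1 in 4.3 billion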
 
Mar 27, 2002 at 11:49 PM Post #24 of 37
Just throwing out an idea no one brought up...

Know how people paint the edges (and sometimes tops) of CDs green, or black or something? The idea being that laser light will scatter within the acrylic, and be internally reflected back out, resulting in occasional 'misfires'. The CD player gets a flash of light, and thinks it's a bit, where in fact no bit exists. The green or black edges are intended to absorb, instead of reflect, this light. Many people swear by it, and diligently paint the edges of their discs with little specialty pens. Many people claim to find a big advantage to this, and there are even lots of companies that sell products to help you do it.

A CD-R has a different reflective nature than the aluminum in a regularly pressed CD. The blue or green tint is due to the dyes absorbing red light. Perhaps this additional level of absorption is enough to limit scattering of the red laser light within the acrylic. Perhaps CD-Rs come already 'fixed'.

Let's take it to another level:

Think about those BLACK CD-Rs. I think they look very cool, but beyond that, here are some reasons they might be good to use. The black surface, oddly enough, actually has a HIGHER reflectivity rating (from a perpendicular source) than normal CD-Rs. Hence, these black CD-Rs will often work in players that don't normally handle CD-Rs very well. But from any other angle, the reflectivity is quite low, in fact, low enough to make the recordable surface appear JET BLACK. Perhaps this absorption would cause even less internal reflection than even a properly 'treated' normal CD.

Now, I haven't actually sat down to listen for this; they are just hypotheses I've built off my understanding of various types of CD-R media. Also, just a note to those who swear by a certain brand: you may want to try to research the source of the discs. Most major companies (Memorex, Kodak, TDK, Imation, etc.) don't actually make their CDs, they just accept bids from other manufacturing companies. So one batch might be great, but the next batch might be from a poor factory, and not up to snuff! There are a few companies out there that sell direct from their factory, so if you try them and like it, then you can continue to buy from the same source, instead of always guessing on your source.

I'm a big proponent of the 'bits are bits' idea. But there are physical things, such as this internal reflection, that can cause the data to be inaccurately represented. And as we all know, garbage in, garbage out. This internal reflection issue shouldn't hurt the actual CD burning process though, assuming you have good error control. Those 'misfires' would get picked up in the parity check.

A properly burned cd will contain EXACTLY the same information as the original. Try running a checksum on the data from the original, and the data on the copy, and you'll come up with the exact same value. I don't think the question is 'is there a sonic loss due to the cd burning process' but rather 'is there a sonic difference due to the different materials used in the CDR'?

Peace,
Phidauex
 
Mar 28, 2002 at 5:37 AM Post #25 of 37
Quote:

Originally posted by aeberbach
If you take a CD and make a copy of it, then burn it onto a CD-R, and verify that each bit is identical with the source, then you have a perfect copy.


Unfortunately, most error-detection software bypasses the Redbook layer and only looks at the audio data, so this is not necessarily true. Have you listened to different digital cables? It's not just audible clicks and such: almost all DACs have digital filters that will smooth over almost all single-bit errors, but then you're listening to the filter, not the music.
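
To illustrate "listening to the filter" (a generic concealment trick, not any specific DAC's filter): when a sample is flagged as unreliable, the player can substitute an estimate from its neighbours, so you hear something plausible rather than what was on the disc.

Code:

def conceal_bad_sample(samples, bad_index):
    """Replace a known-bad sample with the average of its neighbours (simple concealment)."""
    left = samples[bad_index - 1] if bad_index > 0 else samples[bad_index + 1]
    right = samples[bad_index + 1] if bad_index < len(samples) - 1 else samples[bad_index - 1]
    repaired = list(samples)
    repaired[bad_index] = (left + right) // 2
    return repaired

print(conceal_bad_sample([100, 120, 32767, 160, 180], bad_index=2))
# -> [100, 120, 140, 160, 180]: smooth and click-free, but not the original value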

This next part sounds harsh: Quote:

If you dispute that then you can dispute it with yourself.


Then why post? Here, I'll adopt the same tone: Have you actually compared digital cables? Until you have, you can argue your theoretical treatises until you're blue in the face; it will fall on deaf ears, because I have actually listened, and I can tell the difference.

phideaux - please don't throw up.
 
Mar 28, 2002 at 5:23 PM Post #26 of 37
Well, I don't own any digital cables, so I cannot say how different digital cables sound from one another. However, digital cables are prone to interference and signal skewing like any other type of cable. Even fiber optics suffer from signal skewing and signal strength loss. The faster these data transfers become, the worse the problem becomes.

dlow, I think you explained very well what I tried to explain the previous time this thread appeared. Basically, reliable communication depends on many things, including the protocols and the implementation of the protocols. If the devices cannot catch the errors or don't care about the errors, then there is a chance that information can be lost.

BTW, hard disk drive errors can occur every so often. No device is perfect, and according to the statisticians and engineers who design these products, they usually have to figure out the failure rate. Sometimes this is measured in how much information is transferred before an error occurs. Because these error rates are so small, they are deemed acceptable for most consumers. For mission-critical applications, creative solutions have to be created to prevent these aberrations, and usually these solutions only reduce the probability of error (usually redundancy is the simple answer).
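
As a back-of-the-envelope example of "how much information is transferred before an error occurs" (the 10^-14 figure below is a commonly quoted consumer hard-drive spec, used here purely as an assumption):

Code:

# Expected unrecoverable errors while reading ~700 MB, given an assumed bit error rate.
bits_read = 700 * 1024 * 1024 * 8     # one full CD's worth of data
bit_error_rate = 1e-14                # assumed: roughly the spec quoted for a consumer hard drive
expected_errors = bits_read * bit_error_rate
print(f"{expected_errors:.1e} expected errors per full read")   # ~5.9e-05, about 1 in 17,000 reads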
 
Mar 28, 2002 at 11:32 PM Post #27 of 37
Quote:

Then why post? Here, I'll adopt the same tone: Have you actually compared digital cables?


Oh, did I imply that I had? Here's what I said:

Quote:

If you take a CD and make a copy of it, then burn it onto a CD-R, and verify that each bit is identical with the source, then you have a perfect copy. If you listen to the audio from each using the same mechanism that verified that the copies were identical there is no difference in the sound that comes out.


Now excuse the hell out of me if you think it's rude that I refuse to debate that, but it's a fact.
 
Mar 29, 2002 at 12:04 AM Post #28 of 37
After reading phidauex's article, I remembered reading about the actual physical structures of bits on CDs.
This information can be found on the web, but I can't remember where. It gave me an idea that I will attempt to explain.

First up is a summary of the structures of some poor CDs and how they come to be, based on these article(s). Basically, master CDs are used to make 'sub' masters to be used in the CD manufacturing process. These 'sub' masters get worn down over time in the CD pressing process. The 'hills' and 'valleys' that correspond to ones and zeroes on the 'sub' master CD actually start to look like hills and valleys. Thus, over a period of time, the CDs you buy in stores are not as good as the first pressed copies.

Taking that into account, cheap players may have a harder time discerning if a 'little' hill on these subpar CDs is a one or a zero. Usually publishers replace the 'sub' masters with new ones, but not as quickly as they should at times, so I have read. This may partially explain the connection between CDs pressed in Japan and over here as far as quality is concerned, as kelly mentioned earlier.


If you understand and believe the previous explanation then it isn't too hard to believe the following hypothesis, I think.

Now, if you burn a copy of a subpar CD on a burner that can accurately discern whether the 'little' hill is a one or a zero, and burn it structurally better than it is on the subpar CD, then you would think they would play exactly the same. Well, on the CD drive/burner that read the original properly, I think you would be correct, assuming no other outside variables. However, the copy may actually play better on a cheap CD player such as a portable, because it can read the copy accurately compared to the subpar CD.

This is just a simple idea I have with no actual proof to back it up.
I don't have time to find a copy of the article at this moment to back up that part of the idea.
Also, for all I know, the 'valleys' burnt into copies are even worse than on original CDs. Dissect this if you will and prove me wrong, for I really would like to know.


Edwin
 
Mar 29, 2002 at 8:22 AM Post #29 of 37
Does anyone know the std. deviation/variance... in failure/error rates between CD transports across different brands and within a company's own sub-brands?
 
Mar 29, 2002 at 11:51 AM Post #30 of 37
It would make an interesting experiment to rip a CD using EAC and burn a duplicate (taking note of the CRCs generated by EAC). Double check that the CRCs are consistent and then put some sandpaper to the copy and try it again on EAC as well as regular CD players.

If you can get EAC to work hard (but still give accurate results), you can then use that CD to test regular CD players. It should be a hell of a lot more empirical than "I think it sounds more veiled" or "The highs and mids are lacking in the scratched copy". Chances are with that bad of a CD, you'll actually get really bad distortion or clicks and pops. But you'll know that it is still possible to get a perfect playback with really smart hardware. I suspect that most consumer CD players will fail this test miserably.

Another interesting variation is to burn the raw WAV files to a CD, scratch it up and run CRCs on the files to see if a data CD is more resistant to errors. Although I doubt that is the case because the ISO9660 file system doesn't add any extra error management above what's already built into the basic Mode 1 CD standard.
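
The CRC half of that experiment is easy to script. Here's a minimal sketch (hypothetical filenames) that computes a 32-bit CRC over each file; note that EAC's reported CRCs cover only the decoded audio data, so this is the same kind of check rather than the identical number.

Code:

import zlib

def crc32_of_file(path, chunk_size=1 << 20):
    """Compute the CRC32 of a file, read in chunks."""
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF

for name in ("track01_reference.wav", "track01_after_sandpaper.wav"):
    print(name, f"{crc32_of_file(name):08X}")

Matching values before and after the sandpaper mean the data survived; a mismatch pins down exactly which file took the damage.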
 
