A few more comments. First, DustyChalk wrote: Quote:
That's just wrong. Jitter -- inasmuch as it matters -- is a property of the A/D and D/A processes. |
This statement is simply not true. Jitter is a property of the digital bitstream. A CD in and of itself has no D/A conversion -- it's simply a set of data. That data
on the CD can be jittery -- bits can be spaced in time on the disc incorrectly -- hence Sony's efforts described previously to come up with a digital mastering process which lays down
less jittery CDs. I only made this point in the first place because the graph shown in that first web reference can be easily misinterpreted. The distorted square wave shown on that diagram is an attempt to demonstrate distortions
not in the actual sound, but in the timing of the bits being fed to the DAC.
Now, I do agree that this only has a real-world effect once the data undergoes D/A conversion. But when we're making a D-D copy (remember the original assertion), jitter can be increased or decreased -- but in no way that I can imagine thus far will the resulting data produce brighter-sounding music when fed through the DAC.
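To make that concrete, here's a toy model (plain Python; the bit values, period, and jitter figures are arbitrary illustrations, not measurements of any real transport). Think of a bitstream as two things: the bit values, and the instants at which each bit is clocked out. A bit-for-bit copy carries the values over exactly; the timing belongs to whatever transport does the clocking, which is why a copy can be more or less jittery without a single data value changing.

Code:
import random

bits = [1, 0, 1, 1, 0, 0, 1, 0]   # the data itself -- jitter never lives here
period = 1.0                       # nominal bit period, arbitrary units

def clock_instants(n, rms_jitter):
    """Ideal instants k*period, each displaced by a random timing error."""
    return [k * period + random.gauss(0.0, rms_jitter) for k in range(n)]

def worst_error(timing):
    return max(abs(t - k * period) for k, t in enumerate(timing))

source_timing = clock_instants(len(bits), rms_jitter=0.05)  # sloppy transport
copy_timing = clock_instants(len(bits), rms_jitter=0.01)    # cleaner transport

copy_bits = list(bits)             # a D-D copy transfers the values exactly
assert copy_bits == bits           # identical data, regardless of timing

print("worst timing error -- source:", worst_error(source_timing))
print("worst timing error -- copy:  ", worst_error(copy_timing))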
Second comment: The essay I alluded to above, but deleted because it was a rambling mess, contained many of the same musings that aos's essay above contains. I described what jitter was in a real-world sense, and made various attempts to come up with a logical explanation for how jitter could result in what sounds simply like a brighter recording of the original. I could come up with nothing that made any kind of sense. Again, since jitter is not consistent -- one bit can follow another either too close behind or too far behind -- the effects, while difficult to predict, will be random. You can perturb a sine wave (representative of a single-frequency tone, in the simplest case) in either direction; the large-scale effect of random perturbations is going to be a muddying of the sound, not an overall brightening. And when the jitter is SO bad that it actually causes significant data errors (as opposed to timing errors), the results, as I have pointed out numerous times already, aren't going to resemble anything musical in the first place.
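For the skeptical, here's a minimal simulation of that claim (Python with numpy; the 1 kHz test tone and the 2 ns RMS jitter figure are made-up numbers, chosen only for illustration). White random timing error on the conversion clock smears energy into sidebands on both sides of the tone -- added noise, not a one-way tilt toward the treble.

Code:
import numpy as np

fs = 44100                         # nominal sample rate, Hz
rng = np.random.default_rng(0)

t_ideal = np.arange(fs) / fs       # one second of ideal sample instants
t_jit = t_ideal + rng.normal(0.0, 2e-9, fs)   # 2 ns RMS clock jitter

f0 = 1000.0                        # 1 kHz test tone
# A jittered clock is equivalent to evaluating the signal at the wrong
# instants; the amplitude error is (signal slope) x (timing error).
err = np.sin(2 * np.pi * f0 * t_jit) - np.sin(2 * np.pi * f0 * t_ideal)

spec = np.abs(np.fft.rfft(err * np.hanning(fs)))
freqs = np.fft.rfftfreq(fs, 1.0 / fs)

# Compare equal-width bands just below and just above the tone: the
# jitter-induced error energy lands on both sides about equally.
below = spec[(freqs >= f0 - 500) & (freqs < f0)].sum()
above = spec[(freqs > f0) & (freqs <= f0 + 500)].sum()
print("error energy 500 Hz below tone:", below)
print("error energy 500 Hz above tone:", above)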
Let's use aos's LCD screen analogy. Is a random perturbation of the spacing of the individual pixels going to result in a screen that looks normal, except that it's simply brighter? This would require that all pixels smaller than the standard size represent dark colors, and all pixels larger than the standard size represent brighter colors. But there's nothing inherent in the 'screen jitter' which would cause this to be so. That is, we'd fully expect some of the shortened pixels to be bright colors and some of them to be dark. The analogy to music is apt -- we'd expect the 'screen jitter' to muddy the picture somewhat, depending on the severity, just as I would expect jitter in a digital audio bitstream to muddy the sound if it were severe enough to be heard at all. Quote:
But music is a complex thing. It is not a crude symbolic representation of the state of our brain, that people invented in order to communicate. It is the cause of the state, not the state itself and not its associated symbol. Music is a change of air pressure. And while the effects of words and what they invoke in our brains is probably of same complexity as the effect of sounds, words and music are entirely different beasts. It is completely different process that adds another dimension of complexity. |
I don't see anything here that I necessarily disagree with. However, I do disagree with your implicit assumption that digital data representing text and digital data representing music are somehow fundamentally different. They're not. It's only after D/A conversion that the data becomes "a change of air pressure," and even then, the effects of jitter are ill-defined. Quote:
If you really think digital music is perfect, you should learn about the differences between UDP and TCP Internet protocols. |
Who is "you" referred to in red? I'm still assuming it's me. If it is, please explain to me where you got the impression that I believe digital music is "perfect." I'm considering dropping out of this time-sucking discussion entirely -- I'm getting pretty tired of correcting people who insist on arguing against points I haven't made. I say "I believe conjecture A," and someone says, "but how can you possibly believe conjecture B?"
Besides, I know all about the differences between TCP and UDP; I've been a systems/network administrator, and (currently) an e-mail administrator, for about a decade now. The TCP/UDP analogy can be applied to digital audio, but you neglected to explain how lost data (UDP packets that never reach their destination, in this analogy) would result in brighter-sounding music. It seems the only person actually interested in addressing that question (other than me) is DustyChalk. Everyone else seems interested only in striving to get me to accept that jitter can make music sound
different -- something I've not only admitted is possible, but spent paragraphs explaining on this thread. I will be happy to continue discussing the question at hand -- but I will not be pigeonholed into defending assertions I never made in the first place. If there's any question whatsoever about what I've been discussing since the beginning of my involvement in this thread, please go back and reread it -- all of it. Thanks.
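P.S. Since the UDP/TCP analogy came up anyway, here's a minimal Python sketch of the relevant difference, purely for illustration (the address and ports are made up, and nothing needs to be listening on them). UDP delivery is fire-and-forget, so a lost packet is simply a hole in the data -- after D/A conversion, that's a dropout or a click, not a brighter treble.

Code:
import socket

# UDP: connectionless datagrams. sendto() returns successfully whether or
# not anyone receives the packet; a lost datagram is never retransmitted.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"audio frame 42", ("127.0.0.1", 5005))   # no listener required
udp.close()

# TCP: a connected, acknowledged byte stream. The stack retransmits until
# the data is acknowledged, or the connection fails loudly -- you either
# get every byte in order, or you get an error.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    tcp.connect(("127.0.0.1", 5006))   # raises if nothing is listening
    tcp.sendall(b"audio frame 42")
except OSError as exc:
    print("TCP will not pretend the data arrived:", exc)
finally:
    tcp.close()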