Has the rationale for SACD vanished?
Jul 12, 2001 at 4:07 AM Post #16 of 22
In holy zoo's test, 96 kHz PCM is compared with DSD.

As I stated in the other thread, SACD uses more data than 96 kHz PCM (roughly equivalent to a 117 kHz sampling rate at 24 bits), and DSD takes advantage of noise shaping to make it seem to sound even better. So DSD "should" sound better, because they are not comparing equivalent formats.
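
For the arithmetic behind that 117 kHz figure, here is a back-of-the-envelope check in Python (assuming standard DSD64 parameters, i.e. 1-bit samples at 64 x 44.1 kHz, per channel):

    # Per-channel bit-rate comparison: DSD64 (1-bit samples at 64 x 44.1 kHz)
    # versus 96 kHz / 24-bit PCM.
    dsd_rate_hz = 64 * 44_100              # 2,822,400 one-bit samples per second
    dsd_bits_per_sec = dsd_rate_hz * 1

    pcm_rate_hz = 96_000
    pcm_bits_per_sec = pcm_rate_hz * 24    # 2,304,000 bits per second

    # Sample rate a 24-bit PCM stream would need to match the DSD64 bit rate
    equivalent_pcm_rate_hz = dsd_bits_per_sec / 24   # 117,600 Hz

    print(f"DSD64:     {dsd_bits_per_sec:,} bit/s per channel")
    print(f"96/24 PCM: {pcm_bits_per_sec:,} bit/s per channel")
    print(f"24-bit PCM with the same bit rate: {equivalent_pcm_rate_hz:,.0f} Hz")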
 
Jul 12, 2001 at 5:09 AM Post #17 of 22
Part of the problem with discussions like these is:

1. People talk about exact frequencies or even frequency ranges as if they exist independently. This is not true even for a very simple tone such as that generated by a tuning fork. The interaction of the frequencies across the spectrum of the sound generates harmonics.

2. Music is almost always a complex mix of sounds in the frequency range, in the temporal range and in the amplitude range. The harmonics generated by these interactions are what give the music its character and the sense of "musicality" or "amusicality" we perceive.

3. Reproduction of musical performances by our equipment (both recording and playback) can be judged by how well they reproduce the harmonics we hear during a live performance.

I do not particularly care how well my equipment reproduces square waves or sine waves (or how well they can be recorded) because I do not particularly care to listen to square waves or sine waves. I prefer to listen to music.
 
Jul 12, 2001 at 5:41 AM Post #18 of 22


Quote:

I do not particularly care to listen to square waves or sine waves. I prefer to listen to music.


Please take a course in high school physics before saying something like that.

Quote:

1. People talk about exact frequencies or even frequency ranges as if they exist independently. This is not true even for a very simple tone such as that generated by a tuning fork. The interaction of the frequencies across the spectrum of the sound generates harmonics.


1) When sound waves interact, they do not form harmonics. The interference pattern has no effect on how you hear the sound. When two notes of different frequencies are played, you hear two notes, with no extra harmonics.
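
A quick way to check that claim (an illustrative NumPy sketch; the 440 Hz and 523 Hz tones are arbitrary choices) is to look at the spectrum of two mixed tones:

    # Adding two pure tones (linear superposition) creates no new partials:
    # the spectrum of the mix contains only the two original frequencies.
    import numpy as np

    fs = 48_000                              # sample rate in Hz
    t = np.arange(fs) / fs                   # one second of time
    mix = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 523 * t)

    spectrum = np.abs(np.fft.rfft(mix))
    freqs = np.fft.rfftfreq(len(mix), 1 / fs)

    # Frequencies with significant energy (above 1% of the peak)
    print(freqs[spectrum > 0.01 * spectrum.max()])   # [440. 523.], nothing else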

Quote:

Music is almost always a complex mix of sounds in the frequency range, in the temporal range and in the amplitude range. The harmonics generated by these interactions are what give the music its character and the sense of "musicality" or "amusicality" we perceive.


That made no sense whatsoever.

Quote:

3. Reproduction of musical performances by our equipment (both recording and playback) can be judged by how well they reproduce the harmonics we hear during a live performance.


Really??? Though "harmonics" isn't the correct term here, you're stating the obvious.

 
Jul 12, 2001 at 7:53 AM Post #19 of 22
morphsci,

Every waveform can be represented as a sum of sine waves; that is what the Fourier transform is based on. Amplitude vs. time and amplitude vs. frequency are two possible views of the same thing, and both contain the same information.
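
As a small illustration that the two views carry the same information (an illustrative NumPy sketch with an arbitrary signal):

    # The sample values (amplitude vs. time) and their Fourier coefficients
    # (amplitude vs. frequency) carry the same information: transforming and
    # inverse-transforming recovers the original signal to rounding error.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(1024)        # an arbitrary sampled waveform

    X = np.fft.fft(x)                    # frequency-domain view (complex spectrum)
    x_back = np.fft.ifft(X).real         # back to the time-domain view

    print(np.allclose(x, x_back))        # True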

Why 96/24 in production? That is simple: they edit the material after it has been recorded, and this editing introduces additional errors.
It is very similar to graphics editing: even if the printout has only a limited resolution, the source material has to be scanned at at least twice that resolution, otherwise editing would introduce visible errors.
 
Jul 12, 2001 at 2:21 PM Post #20 of 22
Quote:

Every waveform can be represented as a sum of sine waves; that is what the Fourier transform is based on. Amplitude vs. time and amplitude vs. frequency are two possible views of the same thing, and both contain the same information.


First of all, although we can represent any waveform as a sum of sine waves, in practice this is just an approximation of the true waveform, just as a truncated Fourier series is an approximation of an underlying function that is usually much more complex. I agree totally with your second statement about information content: when one speaks of amplitude, time and frequency, there are only two independent parameters.
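
To make the truncation point concrete, a rough sketch (the square wave and the term counts are just illustrative choices): the RMS error of the partial sums shrinks as more terms are kept, while the overshoot near the jumps never disappears.

    # Partial sums of the square-wave series (4/pi) * sum sin((2k+1)x)/(2k+1)
    # approach the true waveform as more terms are kept, but any finite sum
    # still differs from it; the overshoot near the jumps is the Gibbs effect.
    import numpy as np

    x = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
    square = np.sign(np.sin(x))          # the "true" underlying waveform

    def partial_sum(n_terms):
        s = np.zeros_like(x)
        for k in range(n_terms):
            n = 2 * k + 1                # odd harmonics only
            s += np.sin(n * x) / n
        return 4 / np.pi * s

    for n_terms in (5, 50, 500):
        err = np.sqrt(np.mean((square - partial_sum(n_terms)) ** 2))
        print(n_terms, "terms -> rms error", round(float(err), 3))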

Quote:

Please take a course in high school physics before saying something like that.


Ah! Now I understand why you argue as you do. Introductory physics is just that, an introduction. Perhaps you need to delve a little deeper into the physical reproduction of sound, higher mathematics and the concept of a model of a process versus the process itself. When you have done that, why don't you get back to me? Thanks.
 
Jul 12, 2001 at 4:16 PM Post #21 of 22
The Fourier transform is an approximation only if you don't let the summation go to infinity.

What I'd worry about more is whether the mathematical conditions required by the formula are actually satisfied in implemented systems. For example, every sample in a song should contribute to the final analogue signal at all times, although the contribution approaches zero quickly as you move farther from the sample's own time. But when the analogue sound is being reconstructed, that really is an approximation.

I was just thinking... in PCM, you must use not only all past samples but also all future samples to achieve true reproduction. In DSD, you only need all past samples and have no need to know the future ones. By the nature of its design, DSD's reconstructed analogue output is just the sum of signal deltas (differences) from the beginning of the recording up to the given moment. With PCM, in theory you need to put a sin(x)/x curve around every sample and, for any given point, add them all up. In practice you use some kind of filtering, but you never use all the samples from the future, just a certain number of the nearest ones. This is an approximation, and DSD does not have it. Could this cause the difference the engineers are hearing? If so, they "only" need to increase the number of samples used in reconstruction, i.e. increase the number of taps on the digital filters, to improve the approximation. God knows memory and processing power are cheap these days. Maybe they've overlooked that in the design, or screwed up, who knows.
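
As a rough sketch of that finite-tap approximation (not any real player's filter; the tone frequency and tap counts are arbitrary choices), reconstructing a tone between its samples from a truncated sin(x)/x sum gets closer to the true value as more taps are used:

    # Ideal PCM reconstruction puts a sin(x)/x (sinc) curve around every sample,
    # past and future, and sums them all. Keeping only a finite number of the
    # nearest samples ("taps") is an approximation that improves with more taps.
    import numpy as np

    fs = 48_000
    f0 = 1_000.0                                   # a tone well below Nyquist
    n = np.arange(-4096, 4096)
    samples = np.sin(2 * np.pi * f0 * n / fs)      # the stored PCM samples

    def reconstruct(t, half_taps):
        # Value at time t (in sample periods) from the 2*half_taps nearest samples.
        k0 = int(np.floor(t))
        k = np.arange(k0 - half_taps + 1, k0 + half_taps + 1)
        return np.sum(samples[k + 4096] * np.sinc(t - k))

    t_eval = np.linspace(100.25, 200.25, 50)       # points between the samples
    true_vals = np.sin(2 * np.pi * f0 * t_eval / fs)

    for half_taps in (4, 32, 256):
        approx = np.array([reconstruct(t, half_taps) for t in t_eval])
        print(2 * half_taps, "taps -> max error", np.max(np.abs(approx - true_vals)))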
 
Jul 12, 2001 at 6:13 PM Post #22 of 22
Quote:

Originally posted by aos
The Fourier transform is an approximation only if you don't let the summation go to infinity.

What I'd worry about more is whether the mathematical conditions required by the formula are actually satisfied in implemented systems. For example, every sample in a song should contribute to the final analogue signal at all times, although the contribution approaches zero quickly as you move farther from the sample's own time. But when the analogue sound is being reconstructed, that really is an approximation.

I was just thinking... in PCM, you must use not only all past samples but also all future samples to achieve true reproduction. In DSD, you only need all past samples and have no need to know the future ones. By the nature of its design, DSD's reconstructed analogue output is just the sum of signal deltas (differences) from the beginning of the recording up to the given moment. With PCM, in theory you need to put a sin(x)/x curve around every sample and, for any given point, add them all up. In practice you use some kind of filtering, but you never use all the samples from the future, just a certain number of the nearest ones. This is an approximation, and DSD does not have it. Could this cause the difference the engineers are hearing? If so, they "only" need to increase the number of samples used in reconstruction, i.e. increase the number of taps on the digital filters, to improve the approximation. God knows memory and processing power are cheap these days. Maybe they've overlooked that in the design, or screwed up, who knows.


I agree totally. That is what I meant by an approximation. In a real-life situation we cannot truly let the transform go to infinity. We can only let it approach infinity and cut it off when we think it is "close enough". Since "close enough" is somewhat arbitrary, I consider it an approximation. But I do not say that as a negative, nor do I put any value judgement on how good the fit is or whether it makes any realistic difference to what we hear.

I hadn't thought about the temporal problem of sampling from the future, but that is a very interesting point. Both processes (PCM and DSD) are approximations of the underlying analogue waveform that make different assumptions about how to represent it. Therefore, even without practical differences in implementation, I would not expect the two to reproduce the underlying waveform identically. Again, whether this makes an audible difference is where the debate really centers, and I would not expect every individual to have the same preference as to which sampling scheme sounds better.

Unfortunately, the success of either format will in the long run probably be decided not by its technical superiority so much as by the marketing skill and amount of money behind it (à la Betamax as a consumer video format).
 
