iPod Classic stomps all over 5th gen

Oct 3, 2007 at 10:16 PM Post #91 of 154
In a way you're right: the use of the term "jitter" isn't really accurate when it comes to ripping, but it's used nonetheless. From cdrfaq.org (http://www.cdrfaq.org/faq02.html#S2-15):

Quote:

Subject: [2-15] What are "jitter" and "jitter correction"?
(1998/04/06)

The first thing to know is that there are two kinds of jitter that relate to audio CDs. The usual meaning of "jitter" refers to a time-base error when digital samples are converted back to an analog signal; see the jitter article on http://www.digido.com/ for an explanation. The other form of "jitter" is used in the context of digital audio extraction from CDs. This kind of "jitter" causes extracted audio samples to be doubled-up or skipped entirely. (Some people will correctly point out that the latter usage is an abuse of the term "jitter", but we seem to be stuck with it.)

"Jitter correction", in both senses of the word, is the process of compensating for jitter and restoring the audio to its intended form. This section is concerned with the (incorrect use of) "jitter" in the context of digital audio extraction.

The problem occurs because the Philips CD specification doesn't require block-accurate addressing. While the audio data is being fed into a buffer (a FIFO whose high- and low-water marks control the spindle speed), the address information for audio blocks is pulled out of the subcode channel and fed into a different part of the controller. Because the data and address information are disconnected, the CD player is unable to identify the exact start of each block. The inaccuracy is small, but if the system doing the extraction has to stop, write data to disk, and then go back to where it left off, it won't be able to seek to the exact same position. As a result, the extraction process will restart a few samples early or late, resulting in doubled or omitted samples. These glitches often sound like tiny repeating clicks during playback.

On a CD-ROM, the blocks have a 12-byte sync pattern in the header, as well as a copy of the block's address. It's possible to identify the start of a block and get the block's address by watching the data FIFO alone. This is why it's so much easier to pull single blocks off of a CD-ROM.

With most CD-ROM drives that support digital audio extraction, you can get jitter-free audio by using a program that extracts the entire track all at once. The problem with this method is that if the hard drive being written to can't keep up, some of the samples will be dropped. (This is similar to a CD-R buffer underrun, but since the output buffer used during DAE is much smaller than a CD-R's input buffer, the problem is magnified.)

Most newer drives (as well as nearly every model Plextor ever made) are based on an architecture that enables them to accurately detect the start of a block.

An approach that has produced good results is to do jitter correction in software. This involves performing overlapping reads, and then sliding the data around to find overlaps at the edges. Most DAE programs will perform jitter correction.
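The overlapped-read technique the FAQ describes can be sketched in a few lines. The following Python toy (all names and parameters are illustrative, not from any real ripping tool) simulates a drive whose seeks land a few samples off, then realigns each chunk by matching a run of already-extracted samples:

```python
import random

def jittery_read(disc, start, length, max_jitter=5):
    """Simulate a seek that lands within +/- max_jitter samples of `start`."""
    actual = max(0, start + random.randint(-max_jitter, max_jitter))
    return disc[actual:actual + length]

def extract(disc, chunk=100, overlap=20, key_len=10):
    """Extract `disc` in overlapping chunks, realigning at each boundary.

    Assumes the very first read is accurate and that runs of `key_len`
    samples are unique enough to anchor on (true for this toy data).
    """
    out = list(disc[:chunk])
    pos = chunk
    while pos < len(disc):
        # Re-read starting `overlap` samples before where we left off.
        data = jittery_read(disc, pos - overlap, chunk + overlap)
        key = out[-key_len:]          # trusted samples from the previous read
        # Slide the new chunk until the trusted run lines up.
        for shift in range(len(data) - key_len + 1):
            if data[shift:shift + key_len] == key:
                out.extend(data[shift + key_len:])
                break
        else:
            raise RuntimeError("could not realign chunk")
        pos = len(out)
    return out

random.seed(1)
disc = list(range(1000))            # stand-in for one track's samples
assert extract(disc) == disc        # realignment undoes the seek errors
```

Real rippers work on raw sectors and use correlation rather than exact matching, but the sliding-overlap idea is the same.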



 
Oct 3, 2007 at 10:23 PM Post #92 of 154
Quote:

Originally Posted by Skylab
I thought that jitter introduced when ripping CDs, while potentially making them sound worse, was NOT actually wed to the data itself, and as such could in fact be removed by doing a new, clean extraction and rip.


Data on a CD or in a lossless digital audio file is just a series of amplitude samples. There is no timing information (clock) in the files; the timing is implied by the file's sampling rate. For 44.1kHz CD audio, the idea is that a new sample should be fed to the DAC 44,100 times per second at even intervals. Any variation in those intervals is technically jitter. This means that if, for any reason, the samples in your file no longer correctly represent one sample every 1/44100 of a second, then there is inherent jitter in the file and you would have to do a re-extraction to correct the data. Note that in practice this should never happen, as the jitter would have to exist prior to CD mastering or be introduced by your CD drive (unlikely on a modern drive). If the data in the audio file is correct and jitter-free, then you only have to worry about jitter occurring in the playback device, and that jitter is completely unrelated to the data from your CD or file.
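To put the scale of that 1/44100-second interval in perspective, here's a quick back-of-the-envelope calculation (the 1 ns jitter figure is purely illustrative, not a measured spec for any device):

```python
RATE = 44_100                     # CD audio samples per second
period_s = 1 / RATE               # ideal interval between samples
period_us = period_s * 1e6
print(f"ideal sample interval: {period_us:.3f} microseconds")    # ~22.676

illustrative_jitter_ns = 1.0      # assume 1 ns of clock jitter
fraction = (illustrative_jitter_ns * 1e-9) / period_s
print(f"1 ns of jitter is {fraction:.2e} of one sample period")  # 4.41e-05
```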
 
Oct 3, 2007 at 11:28 PM Post #94 of 154
how does the classic sound with lossless audio? and can it play lossless gaplessly?

also how does the sq of the classic compare to the touch?
 
Oct 4, 2007 at 12:11 AM Post #95 of 154
Quote:

Originally Posted by breezy_amar
how does the classic sound with lossless audio? and can it play lossless gaplessly?

also how does the sq of the classic compare to the touch?



I only listen to lossless, so I can't compare it to the Classic's performance with other codecs, but yes, the Classic can play lossless without gaps.
 
Oct 4, 2007 at 12:40 AM Post #96 of 154
Quote:

Originally Posted by Stoney
I am suspicious of broad-brush declarations from prejudiced non-professionals with no evidence. I've listened for myself, I've read for decades, I've interviewed the experts. But readers should research for themselves.


I am a professional, and I know what I'm talking about. If readers do what I did... which is spend a couple of days wading through the maple syrup of doublespeak on audiophile pages... they will see that the amount of time deviation that jitter represents is so tiny, it is totally inconsequential.

The degree of jitter in a reasonably good $150 CD player is totally inaudible. You can't hear it. I can't hear it. I doubt if a bat could hear it either.

Jitter exists for one reason... Low cost, high fidelity CD players from overseas threatened the high end audio market with extinction. Who's gonna pay ten times as much for a CD player with the exact same specs? In order to save their skin, audio snake oil salesmen picked jitter as the magical element that made their CD player worth ten to twenty times what a low-priced import with identical specs sold for.

There is a good reason why high resolution formats like DVD-A and SACD and high end CD players and DACs are such a niche market. Human beings just can't hear what they are spending all that money for.

However, there are things that make a HUGE impact on the quality of sound of stereo systems at all price points- the quality of transducers, the room acoustics, frequency response imbalances and ESPECIALLY sloppy mixing and mastering in modern digital recordings. Until audiophiles address the things that REALLY matter, they flounder around bleeding money and getting nowhere.

No one needs to say, "Sorry about your wallet" to someone who made good choices. Once you do things the right way, you don't have the urge to constantly make random upgrades and meaningless modifications to divide zeros into littler zeros.

That is the truth from someone who has experience and isn't trying to sell you anything. Feel free to get mad at me for offering you this advice. I'll just shake my head and think of the Barnum quote.

See ya
Steve
 
Oct 4, 2007 at 1:10 AM Post #97 of 154
Quote:

Originally Posted by bigshot
I am a professional, and I know what I'm talking about. If readers do what I did... which is spend a couple of days wading through the maple syrup of doublespeak on audiophile pages... they will see that the amount of time deviation that jitter represents is so tiny, it is totally inconsequential.


I largely agree with your points. Jitter is often given more importance than it deserves. I would add, though, that "time deviation" does not necessarily give a real indication of the problems you might experience due to jitter. One thing that is sometimes forgotten here is that even at a specific volume and frequency, differently shaped sound waves still sound considerably different; e.g. a 10kHz square wave sounds clearly different from a 10kHz triangle wave. It's hard to quantify exactly how much jitter really changes wave shape without looking at real waveforms, but I could imagine some extreme cases where it could be quite noticeable and could substantially change the character of the sound. Certainly at lower sampling rates (say 22kHz or 11kHz) I have no doubt it would be clear as day to anyone if the jitter was sufficiently bad.
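The square-versus-triangle point can be put in numbers: at the same frequency and peak amplitude, the two shapes carry quite different energy (RMS), one crude proxy for how differently they sound. A minimal pure-Python sketch using textbook waveform definitions:

```python
import math

def square(phase):                # phase in [0, 1)
    return 1.0 if phase < 0.5 else -1.0

def triangle(phase):              # -1 at phase 0, peak +1 at phase 0.5
    return 1 - 4 * abs(phase - 0.5)

def rms(wave, n=1000):
    """Root-mean-square over one cycle, sampled at n points."""
    return math.sqrt(sum(wave(i / n) ** 2 for i in range(n)) / n)

print(rms(square))    # 1.0 -- full amplitude all the time
print(rms(triangle))  # ~0.577 -- same peak, about 4.8 dB less energy
```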
 
Oct 4, 2007 at 1:28 AM Post #98 of 154
Quote:

Originally Posted by bigshot
I am a professional, and I know what I'm talking about. If readers do what I did... which is spend a couple of days wading through the maple syrup of doublespeak on audiophile pages... they will see that the amount of time deviation that jitter represents is so tiny, it is totally inconsequential.

The degree of jitter in a reasonably good $150 CD player is totally inaudible. You can't hear it. I can't hear it. I doubt if a bat could hear it either.

Jitter exists for one reason... Low cost, high fidelity CD players from overseas threatened the high end audio market with extinction. Who's gonna pay ten times as much for a CD player with the exact same specs? In order to save their skin, audio snake oil salesmen picked jitter as the magical element that made their CD player worth ten to twenty times what a low-priced import with identical specs sold for.

There is a good reason why high resolution formats like DVD-A and SACD and high end CD players and DACs are such a niche market. Human beings just can't hear what they are spending all that money for.

However, there are things that make a HUGE impact on the quality of sound of stereo systems at all price points- the quality of transducers, the room acoustics, frequency response imbalances and ESPECIALLY sloppy mixing and mastering in modern digital recordings. Until audiophiles address the things that REALLY matter, they flounder around bleeding money and getting nowhere.

No one needs to say, "Sorry about your wallet" to someone who made good choices. Once you do things the right way, you don't have the urge to constantly make random upgrades and meaningless modifications to divide zeros into littler zeros.

That is the truth from someone who has experience and isn't trying to sell you anything. Feel free to get mad at me for offering you this advice. I'll just shake my head and think of the Barnum quote.

See ya
Steve



Steve (bigshot), that was an excellent post, thank you.
 
Oct 4, 2007 at 4:29 AM Post #99 of 154
In the issues we've been talking about, there are few black-and-white absolutes. Many things affect the sound output from a DAP to various degrees. Based on the measurements people have taken from the iPod Classic (including my own), there definitely appears to be something amiss in a few areas. These are things I personally have a lot of trouble hearing, and they certainly don't bother me in my day-to-day listening, but you don't get graphs like those for the Classic when all is perfectly fine. I personally think burn-in and jitter are very unlikely causes, but the suggestion can't be called "wrong" without absolute knowledge of the devices. I'm surprised no one has mentioned the 22.1kHz modulation Marc identified causing intermodulation distortion as well. Not audible perhaps, but it sure makes a mess of the frequency response.

I recently took a look at the waveform of a wav file containing a pure sine wave (I don't recall the exact frequency) being played back on my Classic. On my oscilloscope the wave was reproduced reasonably well, but the trace was thick and unfocused; I expect that's due to the 22.1kHz modulation. It's hard to capture in a photo; here's a reasonably clear one I had lying around, but it's still hard to see.

[attached oscilloscope photo: osc.jpg]
 
Oct 4, 2007 at 5:09 AM Post #100 of 154
I'm afraid without knowing what frequency I'm looking at, I can't deduce much of anything from the way the waveform looks. I also don't know what "thick and unfocused" looks like, so I don't know what to look for.

One of the big problems on this board is that too many people speak either in incomprehensible EE technospeak, or incomprehensible audiophool vagaries. None of us need to impress each other with our vocabularies. We need to occasionally slow down and define our terms for those who don't understand 100%.

Most importantly, we need to build a bridge between what the charts and numbers say and what they actually *sound* like. Too many people think frequency extension beyond 20kHz is important to good sound, that they can hear a difference between lossless compression and WAV files, that a volume difference of .05dB is audible, that timing errors are a serious problem in CD players, that burning CDs at 1x makes better sounding music than burning at 24x, that silver cables sound brighter than copper ones, and that music played back in 24 bit sounds better than the same track bounced down to 16 bit. All of these misconceptions are based on a lack of comprehension of the meaning behind the numbers being thrown around.
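As a worked instance of one of those numbers, here is what a 0.05 dB level difference amounts to as an amplitude ratio (plain arithmetic, no claim about any particular listener):

```python
# Convert a 0.05 dB level difference into an amplitude ratio.
ratio = 10 ** (0.05 / 20)
print(f"0.05 dB = amplitude ratio {ratio:.5f}")   # 1.00577, about 0.6%
```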

There's got to be a focus on logic and an understanding of scale.

See ya
Steve
 
Oct 4, 2007 at 6:00 AM Post #101 of 154
mirumu: I notice you measured the sine wave through the headphone out. It's been well documented in these (and other) forums that the headphone out of the Classic is sub-par. Is there any way you could measure the same sine wave through a LOD to compare?
 
Oct 4, 2007 at 6:02 AM Post #102 of 154
Quote:

Originally Posted by bigshot
I'm afraid without knowing what frequency I'm looking at, I can't deduce much of anything from the way the waveform looks. I also don't know what "thick and unfocused" looks like, so I don't know what to look for.


Yes, it's certainly not clear; there's just not enough detail to make much out of it, so I wouldn't read too much into it. I did compare the 4G, and the results were very similar, except that the waveform line on the screen was very thin/fine and stable (in comparison with the 6G). This tells me that the 4G is better at sustaining a truly accurate waveform, but if the difference is audible at all, it should show up as intermodulation distortion. Essentially this can occur when waves of two different frequencies are mixed together, i.e. the frequency the 6G is trying to represent mixed with the 22.1kHz modulation waveform. This would have the potential to sound artificial or very "solid state", and is generally unpleasant to the ear.
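Intermodulation distortion is easy to demonstrate in a toy model: feed the sum of two tones through a nonlinear stage, and new components appear at the sum and difference frequencies. The quadratic nonlinearity below is an arbitrary stand-in for whatever the real hardware does, and the frequencies are DFT bin numbers, not Hz:

```python
import math

N = 4096
f1, f2 = 300, 440                 # two bin-aligned test tones

def tone(f):
    return [math.sin(2 * math.pi * f * n / N) for n in range(N)]

clean = [a + b for a, b in zip(tone(f1), tone(f2))]
distorted = [x + 0.1 * x * x for x in clean]   # weak quadratic nonlinearity

def magnitude(signal, f):
    """Magnitude of the DFT bin at frequency f (direct correlation)."""
    re = sum(x * math.cos(2 * math.pi * f * n / N) for n, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * f * n / N) for n, x in enumerate(signal))
    return math.hypot(re, im) / N

for f in (f2 - f1, f2 + f1):
    print(f, round(magnitude(clean, f), 4), round(magnitude(distorted, f), 4))
# The distorted signal shows new energy at bins 140 and 740,
# where the clean sum of tones has essentially none.
```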

Quote:

Originally Posted by bigshot
One of the big problems on this board is that too many people speak either in incomprehensible EE technospeak, or incomprehensible audiophool vagaries. None of us need to impress each other with our vocabularies. We need to occasionally slow down and define our terms for those who don't understand 100%.


Agreed.

Quote:

Originally Posted by bigshot
Most importantly, we need to build a bridge between what the charts and numbers say and what they actually *sound* like. Too many people think frequency extension beyond 20kHz is important to good sound, that they can hear a difference between lossless compression and WAV files, that a volume difference of .05dB is audible, that timing errors are a serious problem in CD players, that burning CDs at 1x makes better sounding music than burning at 24x, that silver cables sound brighter than copper ones, and that music played back in 24 bit sounds better than the same track bounced down to 16 bit. All of these misconceptions are based on a lack of comprehension of the meaning behind the numbers being thrown around.

There's got to be a focus on logic and an understanding of scale.



I do think that's also true, but not everything that seems implausible actually is a misconception either. If 16-bit 44.1kHz were enough, then SACD wouldn't sound better, for a start. Yes, the difference may be very small depending on your uses, but there are some who call something untrue purely because they don't understand the science involved, or have never heard the difference themselves for various reasons. As you say, an understanding of scale is important.
 
Oct 4, 2007 at 6:34 AM Post #103 of 154
Quote:

Originally Posted by druelle
mirumu: I notice you measured the sine wave through the headphone out. It's been well documented in these (and other) forums that the headphone out of the Classic is sub-par. Is there any way you could measure the same sine wave through a LOD to compare?


I did measure the line out too, and the wave looked essentially the same as the headphone output. This is to be expected, though, as my measurements on the headphone output were not taken under headphone load. Where most earlier iPods performed especially poorly was under the load of actual headphones. Essentially the headphone load draws current from the amp, and unless the amp can supply enough current to maintain the signal voltage, the waveform gets dragged down, corrupting its shape. Of course it sounds terrible when that happens. I don't know if I have the gear lying around to do a load test on the scope, but I'll have a look, and if I can I'll post some pics.
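The loading effect described here can be approximated as a simple resistive voltage divider between the amp's output impedance and the headphone. The impedance values below are illustrative only, and a purely resistive model ignores the reactive and nonlinear behaviour of real headphones:

```python
def loaded_level(v_source, r_out, r_load):
    """Voltage delivered into the load (simple resistive divider model)."""
    return v_source * r_load / (r_out + r_load)

v = 1.0                                             # 1 V open-circuit signal
print(loaded_level(v, r_out=5.0, r_load=10_000.0))  # high-Z line input: ~0.9995 V
print(loaded_level(v, r_out=5.0, r_load=16.0))      # 16-ohm IEM: ~0.76 V
```

This only models level loss; actual waveform corruption comes from the amp running out of current, or the load impedance varying with frequency.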
 
Oct 4, 2007 at 7:03 AM Post #104 of 154
Quote:

Originally Posted by mirumu
If 16-bit 44.1kHz were enough, then SACD wouldn't sound better, for a start.


In a home stereo situation, there is absolutely no difference between 16 bit 44.1 and SACD. The differences people hear are mastering differences, not the format itself. Most of the SACDs I've heard are completely different masterings than the CD equivalent, or even the redbook layer on the same disk. The only real advantage of SACD is multichannel sound.

I've worked on a 24 bit ProTools workstation and did plenty of recording and testing on it. I found that the increased resolution was all in the very low volume signals. The normal listening range was identical. For mixing, the increased resolution in the quiet passages was a godsend, because I could apply compression and bring up low level details in the mix, without bringing up noise with it.

But once the track was mixed, there was absolutely no difference between the 24 bit master and the 16 bit bounce down. The difference between 70dB and 120dB of dynamic range just made no difference at normal listening volume. (Most people listen to their stereos at about 35-50dB.)

You would have had to boost the volume to ear splitting levels to hear the difference, but it wouldn't matter, because you'd be deaf anyway. Again... it's all about understanding the scale of the thing.
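For reference, the theoretical ceiling for each bit depth works out to roughly 6.02 dB per bit; real hardware delivers less than these ideal figures, which is consistent with the lower numbers quoted above:

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of an ideal linear PCM quantizer."""
    return 20 * math.log10(2 ** bits)

print(f"16-bit: {dynamic_range_db(16):.1f} dB")   # ~96.3 dB theoretical
print(f"24-bit: {dynamic_range_db(24):.1f} dB")   # ~144.5 dB theoretical
```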

See ya
Steve
 
Oct 4, 2007 at 7:28 AM Post #105 of 154
If you're applying compression, then you're wasting the extra dynamic range 24-bit offers anyway, and 16-bit will certainly sound just as good if you do that. What you described is akin to claiming someone with a walking stick can run as fast as an Olympic sprinter, then breaking the sprinter's leg when it's time to run the race. The increased resolution in the quiet passages, and the lack of compression, is precisely why people into classical music love SACD.
 
