Head-Fi.org › Forums › Equipment Forums › Dedicated Source Components › Master Clock Talk

Master Clock Talk - Page 5

post #61 of 76
Quote:
Originally Posted by udo View Post
I like the real-world scenarios:

I have a NAD C521 CD player going into a Zhaolu D2.5C with Zapfilter.
For those of you who know the NAD: would it be worth it (i.e., audible enough) to upgrade the clock in the player?

I also have an M-Audio DIO2496 soundcard which feeds the same DAC - could/should I upgrade the clock on the card as well?
As the relevant clock for the d/a conversion would be the one in your D2.5C in both cases, I'd expect minimal to no improvement with upgraded clocks in your two digital sources. That's only a general assumption, though - I don't know how susceptible your D2.5C actually is to jitter on the S/PDIF inputs...

Greetings from Munich!

Manfred / lini
post #62 of 76
Quote:
Originally Posted by morphsci View Post
A DBT is only as good as the statistical design it is used under and furthermore is not by definition more objective than a sighted listening test. In fact it is not even necessarily less biased, since that is a function of the design and specific measurements that are made.

Wooo ... that feels better.
Are you suggesting that sighted listening tests can be as reliable as blind tests? If so, how come when sighted tests are followed by blind tests, the differences detected in the sighted tests often disappear (e.g., in Masters and Clark, amongst others)? The subjects experience a real certainty that big differences exist and describe them in great detail, but when denied visual cues they cannot detect the difference.

Sighted tests are fine, but they allow all sorts of human biases to come into play, e.g., expectations. For instance, I have an irrational liking for Rotel gear: I have a Rotel CD player (I have owned several, in fact) and a Rotel integrated amp. If I were asked to listen to a Rotel and a NAD, my predisposition would be to prefer the Rotel even if they sounded identical.

You should read Toole and Olive (1996): in it they showed how sighted listeners' perceptions of subjective quality were heavily biased by knowledge of what they were listening to (brand, physical appearance and so on), and that when stripped of these cues the results were very different under otherwise identical listening conditions.
post #63 of 76
Quote:
Originally Posted by hciman77 View Post
Are you suggesting that sighted listening tests can be as reliable as blind tests? If so, how come when sighted tests are followed by blind tests, the differences detected in the sighted tests often disappear (e.g., in Masters and Clark, amongst others)? The subjects experience a real certainty that big differences exist and describe them in great detail, but when denied visual cues they cannot detect the difference.

Sighted tests are fine, but they allow all sorts of human biases to come into play, e.g., expectations. For instance, I have an irrational liking for Rotel gear: I have a Rotel CD player (I have owned several, in fact) and a Rotel integrated amp. If I were asked to listen to a Rotel and a NAD, my predisposition would be to prefer the Rotel even if they sounded identical.

You should read Toole and Olive (1996): in it they showed how sighted listeners' perceptions of subjective quality were heavily biased by knowledge of what they were listening to (brand, physical appearance and so on), and that when stripped of these cues the results were very different under otherwise identical listening conditions.

DBTs aren't perfect. Ears are attached to humans, not robots. To get a statistically valid test, several replications are required. People have short attention spans, and there are a lot of psychological and physiological factors that influence the tests as much as the factors you are examining. You end up with tests that can take weeks to be truly valid.
post #64 of 76
Quote:
Originally Posted by hciman77 View Post
Are you suggesting that sighted listening tests can be as reliable as blind tests? If so, how come when sighted tests are followed by blind tests, the differences detected in the sighted tests often disappear (e.g., in Masters and Clark, amongst others)? The subjects experience a real certainty that big differences exist and describe them in great detail, but when denied visual cues they cannot detect the difference.

Sighted tests are fine, but they allow all sorts of human biases to come into play, e.g., expectations. For instance, I have an irrational liking for Rotel gear: I have a Rotel CD player (I have owned several, in fact) and a Rotel integrated amp. If I were asked to listen to a Rotel and a NAD, my predisposition would be to prefer the Rotel even if they sounded identical.

You should read Toole and Olive (1996): in it they showed how sighted listeners' perceptions of subjective quality were heavily biased by knowledge of what they were listening to (brand, physical appearance and so on), and that when stripped of these cues the results were very different under otherwise identical listening conditions.
What I am suggesting is that DBTs are not necessarily better than sighted tests, nor are sighted tests necessarily better. What I am saying is that both are merely testing methodologies, and the results are part of a statistical design which has a large influence on the generalizability of the results. In addition, there is no single "DBT"; there are only the specific DBTs that have been run. Each is essentially unique in its details, and those details are critical to the interpretation of the results.

My larger point is that I, like many others, do not care. I have no reason to delude myself that something is better in my system, so a DBT is less valuable to me than extended listening. As an example, if you compare my 2-channel system today to the system of 2 years ago, only the transducers (speakers and headphones) are the same; all of the electronics are different, as are the ICs, power cords and speaker cables. Some items are more expensive and some are less expensive, but all sound better to me (or are filling in until I find something that sounds better) than the items they replaced.

Anyway, I am not stating that DBTs are bad, just that in their implementation they might not really add anything, and to the layman they may seem "more scientific" and thus receive more credibility than they deserve. I did appreciate your initial few posts for adding a much-needed reality check, but the last few seemed to get a little preachy for me.

Quote:
Originally Posted by regal View Post
DBTs aren't perfect. Ears are attached to humans, not robots. To get a statistically valid test, several replications are required. People have short attention spans, and there are a lot of psychological and physiological factors that influence the tests as much as the factors you are examining. You end up with tests that can take weeks to be truly valid.
Exactly. Replication is crucial to statistical power, both in its magnitude and in its implementation (e.g., random, paired, nested, etc.). Many of the limitations in the scientific tests of components arise because they do not take account of the biological and psychological factors, since the researchers are not experts in those fields.
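[Editor's note: to make the replication point above concrete, here is a minimal sketch of the exact binomial scoring behind a standard ABX listening trial. The numbers (16 trials, 0.05 threshold) are illustrative assumptions, not from this thread.]

```python
# Sketch: exact one-sided binomial test used to score an ABX trial run.
# With 16 trials, a listener needs 12 or more correct before p drops below
# the usual 0.05 threshold - one reason short, casual sessions prove little.
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of getting >= `correct` hits purely by guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

for correct in range(8, 17):
    print(f"{correct:2d}/16 correct: p = {abx_p_value(correct, 16):.4f}")
```

Under these assumptions, 11/16 correct still gives p > 0.10, so a listener who is genuinely hearing a small difference can easily "fail" a single short session - hence the need for many replications.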
post #65 of 76
Quote:
Originally Posted by udo View Post
I like the real-world scenarios:

I have a NAD C521 CD player going into a Zhaolu D2.5C with Zapfilter.
For those of you who know the NAD: would it be worth it (i.e., audible enough) to upgrade the clock in the player?

I also have an M-Audio DIO2496 soundcard which feeds the same DAC - could/should I upgrade the clock on the card as well?

Would I need an 'expensive' clock, or could a better-specced 1 ppm clock (like the one I have in my Zhaolu 1.3, hanging from rubber bands) be enough?
ppm figures are of no use when specifying jitter - ppm describes a clock's long-term frequency accuracy, not the short-term timing variation of its edges

for an explanation you may look here

http://www.tentlabs.com/InfoSupport/...34/page34.html

best

Guido
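[Editor's note: a hypothetical numeric illustration of Guido's point that ppm and jitter measure different things. The clock frequency and ppm figure below are assumed example values.]

```python
# ppm is average frequency error; jitter is edge-to-edge timing variation
# around that average. A clock can be 100 ppm off in pitch yet have
# perfectly evenly spaced edges (zero jitter), and vice versa.
f_nominal = 11_289_600            # 256 * 44.1 kHz master clock, in Hz
ppm_error = 100                   # a mediocre-accuracy crystal (assumed)

f_actual = f_nominal * (1 + ppm_error / 1e6)
period_shift_ps = (1 / f_nominal - 1 / f_actual) * 1e12

# The 100 ppm error shortens EVERY period by the same ~8.86 ps: the pitch
# is off by 0.01%, but that constant shift is inaudible and is not jitter.
print(f"constant period shift from 100 ppm error: {period_shift_ps:.2f} ps")

# Conversely, a clock can be dead-on in ppm terms while its edges wander by
# hundreds of ps RMS around the correct average - that wandering IS jitter,
# and it is what smears the timing of the d/a conversion.
```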
post #66 of 76
Quote:
Originally Posted by lowmagnet View Post
How would a new clock inside a source help any if the receiving end of the pair has the dual disadvantage of an unshared clock and possible interface jitter? I'm just thinking $300 would be better spent elsewhere.
Hi

The receiving end (very likely a DAC) is an attenuator in terms of jitter; hence, less jitter in means less jitter out.

Try it and you'll know. Take into account that improving the source implies upgrading both the clock and the S/PDIF output (reclocking, galvanic isolation and impedance matching).

best

Guido
post #67 of 76
Quote:
Originally Posted by lini View Post
As the relevant clock for the d/a conversion would be the one in your D2.5C in both cases, I'd expect minimal to no improvement with upgraded clocks in your two digital sources. That's only a general assumption, though - I don't know how susceptible your D2.5C actually is to jitter on the S/PDIF inputs...
In the DAC?
What about the clock in the sources, then?
The D2.5C has an upgraded clock (pics somewhere in the Zhaolu mod thread) which is temperature compensated.
The whole thing (D2.5C with the 'simple' upgraded clock and Zapfilter) sounds great to my ears. I am just wondering why the DAC clock is the deciding factor according to your statement; I was thinking rather that the DAC slaves to the S/PDIF clock, which is generated by the player.
How does the S/PDIF clock interact with the DAC clock?
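[Editor's note: a simplified, assumed model of the interaction asked about here. A typical S/PDIF receiver recovers its conversion clock from the incoming stream with a PLL, which behaves like a low-pass filter on the source's jitter; the 10 kHz bandwidth below is an illustrative figure, not a measured one.]

```python
# First-order PLL jitter-transfer sketch: source jitter below the PLL
# bandwidth passes through to the conversion clock; jitter above it is
# progressively attenuated by the receiver.
import math

def jitter_transfer(f_jitter_hz: float, pll_bw_hz: float = 10_000.0) -> float:
    """First-order low-pass magnitude |H| = 1 / sqrt(1 + (f/fc)^2)."""
    return 1.0 / math.sqrt(1.0 + (f_jitter_hz / pll_bw_hz) ** 2)

for f in (100, 1_000, 10_000, 100_000, 1_000_000):
    print(f"source jitter at {f:>9} Hz: {jitter_transfer(f):6.1%} passed through")
```

Under this toy model both earlier posts are partly right: the receiver attenuates high-frequency source jitter (Guido's "attenuator"), but low-frequency jitter rides through to the conversion clock, so the source clock can still matter alongside the DAC's own clock quality.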
post #68 of 76
Quote:
Originally Posted by DarkAngel View Post
I have had both DACs and CDPs modded with the Audiocom Superclock in the past, but for a different view: I found one well-regarded mod guy at Audiogon who advises against aftermarket clock upgrades and says they add noise back into your CDP... read why:

Clock Upgrade

He also thinks tubes are a bad upgrade option for CDPs vs. carefully selected solid-state part upgrades.

Any engineer with appropriate experience will tell you that a low-jitter clock installed properly will improve the audio quality by reducing the overall jitter. Whether this is audible will depend on the listener's acuity and the overall system quality.

You can, of course, get less-than-optimum results when they are installed improperly.

Steve N.
Empirical Audio
post #69 of 76
Quote:
Originally Posted by Filburt View Post
Looking at the way these things are installed, I'm left wondering if much of the perceived change in sound is, if not placebo (which seems like a real possibility), caused by noise injected by these devices into the system, or if some of the installations are actually producing very high levels of jitter. Has anyone done any measurements to analyse the effects of these devices and the manner in which they're commonly installed?
This is 90% of the problem: poor installation. I have seen modded components from lots of other vendors, and I have yet to see one of them do it right...
post #70 of 76
Quote:
Originally Posted by Filburt View Post
Well, there are two inquiries really occurring here. One is whether the mods produce audible results, the second is why. The modifications may be producing audible results, but it isn't necessarily due to a reduction of jitter (in fact, perhaps the opposite is occurring or there is noise being produced, or something of that sort).
This may be the case in some of the anecdotes. However, I can assure you that lower jitter can make an audible improvement and can be measured. This is what most of my products are about. My latest product, the Pace-Car I2S reclocker, effectively minimizes jitter so as to be inaudible. Until you have heard this, you don't realize how much jitter is present, even in devices with properly installed Superclocks. I demonstrated this at CES in January. The improvement was immediately obvious to all listeners: exceptional clarity and razor-sharp focus.

Steve N.
post #71 of 76
Quote:
Originally Posted by hciman77 View Post
So they cannot prove that they are in fact lowering jitter? In a TNT article I read, a chap accidentally added 500 ps of jitter to the already-jittery Marantz CD67 (IIRC); he thought it sounded better. He was not aware at the time that he had added jitter, but he was aware that he had changed the circuit, so it was a sighted test.

Similarly in a white paper

http://akmedia.digidesign.com/suppor...tter_30957.pdf

which was touting a new (very-low-jitter) clock mechanism, I read that the subjective listening tests showed listeners preferred the sound of the older, more jittery clocks. I cannot remember if these were sighted or blind tests; I think they must have been sighted. Suffice to say, the authors of the paper were somewhat puzzled by this result.

This is easy to explain. Because there was so much "other" sibilance in the system, the jitter tended to "soften" the presentation by "de-focusing" the detail. In systems with very low "other" sibilance, this is not necessary and the result would likely be the opposite.

"Other" sibilance can be caused by noise and distortion in preamps, amps and sources, as well as cables that cause dispersion. Also ground-loops that cause HF noise in the system. It can even be the result of a poor recording. The quality and content of the recordings used in these tests is critical to the results.
post #72 of 76
Quote:
Originally Posted by audioengr View Post
This is easy to explain. Because there was so much "other" sibilance in the system, the jitter tended to "soften" the presentation by "de-focusing" the detail. In systems with very low "other" sibilance, this is not necessary and the result would likely be the opposite.

"Other" sibilance can be caused by noise and distortion in preamps, amps and sources, as well as cables that cause dispersion. Also ground-loops that cause HF noise in the system. It can even be the result of a poor recording. The quality and content of the recordings used in these tests is critical to the results.
So does that mean that if he had fitted a better clock, it might have made it sound even worse by sharpening the presentation?
post #73 of 76
Quote:
Originally Posted by morphsci View Post
What I am suggesting is that DBTs are not necessarily better than sighted tests, nor are sighted tests necessarily better. What I am saying is that both are merely testing methodologies, and the results are part of a statistical design which has a large influence on the generalizability of the results. In addition, there is no single "DBT"; there are only the specific DBTs that have been run. Each is essentially unique in its details, and those details are critical to the interpretation of the results.

My larger point is that I, like many others, do not care. I have no reason to delude myself that something is better in my system, so a DBT is less valuable to me than extended listening. As an example, if you compare my 2-channel system today to the system of 2 years ago, only the transducers (speakers and headphones) are the same; all of the electronics are different, as are the ICs, power cords and speaker cables. Some items are more expensive and some are less expensive, but all sound better to me (or are filling in until I find something that sounds better) than the items they replaced.

Anyway, I am not stating that DBTs are bad, just that in their implementation they might not really add anything, and to the layman they may seem "more scientific" and thus receive more credibility than they deserve.
Very well put.
post #74 of 76
Quote:
Originally Posted by hciman77 View Post
So does that mean that if he had fitted a better clock, it might have made it sound even worse by sharpening the presentation?
I'll leave it to Steve to answer your specific question, but I think many of us have experienced worse overall sound after adding a component that is actually of better quality, e.g., a more revealing source that exposes sibilance caused by flaws elsewhere in the system.
post #75 of 76
Quote:
Originally Posted by hciman77 View Post
So does that mean that if he had fitted a better clock, it might have made it sound even worse by sharpening the presentation?
In that particular system using those recordings, probably.

Steve N.