Why do USB cables make such a difference?
Oct 14, 2017 at 3:21 AM Post #331 of 1,606
You must be working with a unique and individual definition of "measurement" then. We work with standard units, volts, amps, watts, dB ratios, Hz, seconds...what about that is not meaningful?
I'm not going to guess, but anyone trying to make a scientific statement at least can reference someone's work who does have access to that equipment. (I am smiling...just a bit...about "lab grade oscilloscopes"...funny.)
ok....

Nope! That would be closer to trying to define "margin of error".

Funny that you chuckle at the oscilloscope reference but then suggest that a measurement is a measurement. Before taking any measurement you have to be sure that you're doing the right thing, which requires that you actually understand what you're doing. Like those kids who take a $40 oscilloscope and measure the THD of a $20 class D amp. They measure 0.00001% and conclude it's perfect! But their oscilloscope has a 20MHz bandwidth and the amp they're trying to measure uses 1.5MHz pulses. The minimum error of their measurement is about 2%! There are plenty of rubbish measurements posted on the internet, even by professional reviewers. In another forum, someone noticed something strange in the results for reclockers published on a popular website. When asked, the author explained what he did and it turned out to be complete nonsense. On the other hand, there are some very good people out there publishing good things. So you have to do your own due diligence before believing any results.

ps. you're probably not familiar with the concept of disproof by counterexample (reductio ad absurdum). If you want to prove that a statement is true (e.g. all blonde people are dumb) and you find ONE sound case where the statement doesn't hold (e.g. Einstein was blonde and hell yeh he was intelligent), then the statement MUST be wrong (i.e. most blonde people may be dumb, but some are intelligent).

ps2. I've only seen Einstein photos with white hair. How intelligent is Pamela Anderson? :)

@castleofargh agreed, especially about fidelity (and honestly, I've seen something about these expensive "tuning" audio cables - utter rubbish). That's something hard to tell, especially in light of the sound artifacts that I mentioned before. My comment was intended to be on the side of logic (i.e. assume that this is correct -> then what happens?); I didn't imagine that would decrease the S/N ratio in the discussion but it seems it did.
 
Oct 14, 2017 at 3:45 AM Post #332 of 1,606
Funny that you chuckle at the oscilloscope reference but then suggest that a measurement is a measurement. Before taking any measurement you have to be sure that you're doing the right thing, which requires that you actually understand what you're doing. Like those kids who take a $40 oscilloscope and measure the THD of a $20 class D amp. They measure 0.00001% and conclude it's perfect! But their oscilloscope has a 20MHz bandwidth and the amp they're trying to measure uses 1.5MHz pulses. The minimum error of their measurement is about 2%! There are plenty of rubbish measurements posted on the internet, even by professional reviewers. In another forum, someone noticed something strange in the results for reclockers published on a popular website. When asked, the author explained what he did and it turned out to be complete nonsense. On the other hand, there are some very good people out there publishing good things. So you have to do your own due diligence before believing any results.
Um....well, you're not going to measure THD at all with any oscilloscope, not to .00001%, or even 2%....wrong tool. And pulses aren't THD anyway, though they could be lumped into a THD+N measurement, we're pretty careful about separating that stuff now. THD analyzers generally have a bandwidth of 0-100kHz, so, according to your comments above, even a lab-standard THD analyzer from the likes of Tektronix, HP/Agilent or AP would be wrong. I don't think so....

But a lowly 20MHz scope would reveal any of the 1.5MHz pulses dribbling out of the output. I'm not sure what your point is, other than that even you are suggesting the wrong tools.

My chuckle was at "lab grade oscilloscopes"...like it's even possible to buy a scope so awful that it wouldn't be useful for audio. The $300 Chinese stuff at Fry's is just fine, and the $1000+ Tektronix stuff is also great (though the best deals on Tek gear are on the used market). Mine's a Tek 100MHz analog/DSO...but I couldn't possibly measure THD with it. No one could. Wrong tool.
ps. you're probably not familiar with the concept of disproof by counterexample (reductio ad absurdum). If you want to prove that a statement is true (e.g. all blonde people are dumb) and you find ONE sound case where the statement doesn't hold (e.g. Einstein was blonde and hell yeh he was intelligent), then the statement MUST be wrong (i.e. most blonde people may be dumb, but some are intelligent).
I don't think I know of any scientist who would be caught making the absurd argument to begin with. We're all pretty familiar with the danger of making sweeping absolute statements, and other such things as margin of error.

Now, marketing people...that's quite another story.
ps2. I've only seen Einstein photos with white hair. How intelligent is Pamela Anderson? :)
Poor kid. Has all her hair gone white now?
 
Oct 14, 2017 at 4:29 AM Post #333 of 1,606
There is no high fidelity, or absolute sound, in reproduced music. There may be a crude, so-called ‘high fidelity’, defined by positivists, that you can crudely and pragmatically measure by frequency response and a few other relatively easy-to-measure characteristics, but that does not truly measure the difficult-to-quantify characteristics of nuance, timing, timbre, ‘space between the notes’ and everything else that comprises the beauty of music. An $80 DAC or cheap generic USB cable may well fit in that positivistic paradigm of commercial recording of ‘high fidelity’, but most of us can readily hear the falsehood of its reproduction of so-called perfect ‘high fidelity’ sound, hence we spend our hard-earned funds to buy better and more musically enjoyable equipment.


Moreover, so-called expectation bias cuts both ways. If you believe that $80 DACs are perfect, or that $12 USB cables make no difference compared to a dedicated audiophile cable in transmitting the bits and all the parasitic noise that tries to go with them, then your expectation bias will not let you hear any difference from DACs or USB cables that do so much better in conveying nuance, timing, timbre, ‘space between the notes’ and everything else that comprises the beauty of music. However, if you are not biased, have open ears/minds, and use reasonable-quality equipment, perhaps you might hear this important difference.
 
Last edited:
Oct 14, 2017 at 5:06 AM Post #334 of 1,606
There is no high fidelity, or absolute sound, in reproduced music. There may be a crude, so-called ‘high fidelity’, defined by positivists, that you can crudely and pragmatically measure by frequency response and a few other relatively easy-to-measure characteristics, but that does not truly measure the difficult-to-quantify characteristics of nuance, timing, timbre, ‘space between the notes’ and everything else that comprises the beauty of music. An $80 DAC or cheap generic USB cable may well fit in that positivistic paradigm of commercial recording of ‘high fidelity’, but most of us can readily hear the falsehood of its reproduction of so-called perfect ‘high fidelity’ sound, hence we spend our hard-earned funds to buy better and more musically enjoyable equipment.


Moreover, so-called expectation bias cuts both ways. If you believe that $80 DACs are perfect, or that $12 USB cables make no difference compared to a dedicated audiophile cable in transmitting the bits and all the parasitic noise that tries to go with them, then your expectation bias will not let you hear any difference from DACs or USB cables that do so much better in conveying nuance, timing, timbre, ‘space between the notes’ and everything else that comprises the beauty of music. However, if you are not biased, have open ears/minds, and use reasonable-quality equipment, perhaps you might hear this important difference.
So you're choosing to ignore a few things then? Like how it's impossible not to be biased even if we think we have open minds, or that the recordings we all listen to via DACs of all sizes, shapes and costs were made with ADCs of unknown quality (some pretty cheap), in studios using unknown (but likely generic) cables in high-noise environments? You don't seem to be complaining about more than half the signal path, even though it's unknown and possibly the worst link of the chain relative to a short length of USB cable in a relatively noise-free home. Or that, aside from your statement that "There is no high fidelity, or absolute sound, in reproduced music", it can be readily shown that a trained listener cannot discern the difference between a live performance monitored on speakers and the same mix passed through an ADC and DAC of very rudimentary quality.

I wonder what you might say to a true double-blind ABX test of...well, anything (not a single-blind test without significant data) where expectation bias is in fact actually removed.

But, since,"There is in no high fidelity, or absolute sound, in reproduced music", I guess the entire argument is moot.

We can all give up now and go home. Those of us in the pro audio business can just shoot ourselves and rid the world of our pervasive blight of low fidelity sound.

(...or thus would I respond to the troll, if I were to respond to a troll....)
 
Oct 14, 2017 at 5:09 AM Post #335 of 1,606
[1] Measurements of currents, jitter, ground voltages... those things are incredibly difficult to measure, and how much do they tell you quantitatively?
[2] ... according to the scientific method, seeing no difference in something doesn't prove that no difference exists - only if you rule out all possibilities can you claim that.
[3] I have the same file in different formats, all from the same studio master (and provided by the studio man himself). I play them on one DAC, and they all sound the same. I play them on the second DAC - night and day difference. Does this mean audio formats make a difference or not? What's the answer to the question "do audio formats matter?"

Oh dear, so much audiophile myth in just a couple of posts, it's difficult to know where to start. I'll just deal with a few points which, with just a small amount of factual knowledge, should be obvious but do not seem to be, presumably due to marketing designed to take advantage of audiophile ignorance/assumption.

1. How much do they tell you quantitatively? Effectively everything or, to put it another way, if there is something we can't measure/quantify then it's irrelevant. Just think about it logically for a moment, the audio recording/reproduction process is effectively the measuring/quantifying of sound wave properties, the conversion of those quantified values into other forms (electrical, magnetic, mechanical or in the case of digital, data) and then the conversion back again for reproduction. Therefore, even if audiophiles are correct and there is something we cannot measure or quantify, it's utterly irrelevant anyway because if we can't measure or quantify it, then we cannot record it in the first place and it does not exist on the recordings to which audiophiles are listening!!

2. While your statement is true, you are unfortunately lacking a vital piece of information/knowledge which means that it is INAPPLICABLE! There is a very old and widely used test; it's widely used precisely because it satisfies the scientific method and does exactly what you suggest: it rules out all other possibilities. It's called the "Null Test" and it does not require an oscilloscope, just basic (even free) audio software. If the result of a Null test is a null, then there definitively is no difference between two audio files/signals, with NO other "possibilities"! If the result is not null, then what we're left with is the difference between the signals/files, and the magnitude of that difference can often easily be dismissed as below audibility. It's only when the magnitude of the difference (if there is one) reaches a level which *might* be audible that a listening test could be worthwhile.
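To make that concrete, here's a minimal sketch of a null test in Python (numpy/scipy, both free). The file names are placeholders, it assumes 16-bit PCM, and the two captures must be sample-aligned and at the same rate:

```python
# Minimal null-test sketch: subtract one capture from the other and
# report the residual level. Any two time-aligned, same-rate WAV files
# will do; the names below are placeholders.
import numpy as np
from scipy.io import wavfile

rate_a, a = wavfile.read("capture_a.wav")
rate_b, b = wavfile.read("capture_b.wav")
assert rate_a == rate_b, "sample rates must match"

# Normalise 16-bit PCM to floats in [-1, 1] so the result reads in dBFS.
a = a.astype(np.float64) / np.iinfo(np.int16).max
b = b.astype(np.float64) / np.iinfo(np.int16).max

n = min(len(a), len(b))
residual = a[:n] - b[:n]            # the "null": invert one and sum

rms = np.sqrt(np.mean(residual ** 2))
print(f"residual: {20 * np.log10(max(rms, 1e-12)):.1f} dBFS")
# A perfect null prints -240 dBFS (the clamp floor); anything far below
# the system noise floor means "no difference".
```

A perfect null prints the clamp floor; any real difference shows up as a residual whose level you can then judge against audibility.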

3. Your two questions are an example of a very widely used audiophile marketing tool, the correlation fallacy: "I hear an audible difference when playing different formats, therefore there must obviously be an audible difference between formats". At least in your case you are asking the question rather than making absolute statements of fact, which is a very good start (!) but still, your questions indicate a correlation fallacy and additionally contradict other statements you have made (for example the statement quoted in point #2). Why are you asking if audio formats make a difference, have you "ruled out all [the other] possibilities" that the "night and day" difference you heard has anything directly to do with different audio formats in the first place? For example, have you ruled out the possibility that simply seeing that one audio format is marketed as "high definition" is causing you to perceive a difference where there is none, or that the device you are using to play back those different formats is creating an audible difference when there is none?

The audiophile world is rife with the assertion that measurements (and science) don't tell us everything. When/if science is quoted, it's typically quoted out of context or with missing information (which invalidates it in that context), is only applied to confirm an audiophile assumption, and is conveniently ignored or dismissed when that exact same science is applied consistently and contradicts audiophile assumption.

Much of the above could imply that most audiophiles are just exceptionally stupid, gullible morons but that implication is not my intention! Sure, audiophiles are ignorant and have gaps in their factual knowledge but then so does everyone, music recording/reproduction covers a very wide range of scientific, engineering and artistic fields and no one is without some level of ignorance. The difficulty for audiophiles is that there is a great deal of money to be made from filling those "gaps in their factual knowledge" with marketing BS and little/no money to be made from filling them with the actual facts! With today's marketing techniques, one doesn't have to be a gullible moron to fall for it, it can be difficult to resist even for the highly intelligent but nevertheless, many of the audiophile beliefs, myths and assumptions break down quite quickly if one is willing to learn some actual facts and apply some logic.

G
 
Oct 14, 2017 at 5:15 AM Post #336 of 1,606
Um....well, you're not going to measure THD at all with any oscilloscope, not to .00001%, or even 2%....wrong tool. And pulses aren't THD anyway, though they could be lumped into a THD+N measurement, we're pretty careful about separating that stuff now. THD analyzers generally have a bandwidth of 0-100kHz, so, according to your comments above, even a lab-standard THD analyzer from the likes of Tektronix, HP/Agilent or AP would be wrong. I don't think so....

But a lowly 20MHz scope would reveal any of the 1.5MHz pulses dribbling out of the output. I'm not sure what your point is, other than that even you are suggesting the wrong tools.

My chuckle was at "lab grade oscilloscopes"...like it's even possible to buy a scope so awful that it wouldn't be useful for audio. The $300 Chinese stuff at Fry's is just fine, and the $1000+ Tektronix stuff is also great (though the best deals on Tek gear are on the used market). Mine's a Tek 100MHz analog/DSO...but I couldn't possibly measure THD with it. No one could. Wrong tool.
I don't think I know of any scientist who would be caught making the absurd argument to begin with. We're all pretty familiar with the danger of making sweeping absolute statements, and other such things as margin of error.

Now, marketing people...that's quite another story.
Poor kid. Has all her hair gone white now?

Very funny. So you're saying it's impossible to measure distortion with an oscilloscope and the foundations of logic are flawed. That's why this thread is going nowhere.
 
Oct 14, 2017 at 6:01 AM Post #337 of 1,606
  • Isochronous transfers are used to transfer data in real-time between host and device. When an isochronous endpoint is set up by the host, the host allocates a specific amount of bandwidth to the isochronous endpoint, and it regularly performs an IN- or OUT-transfer on that endpoint. For example, the host may OUT 1 KByte of data every 125 us to the device. Since a fixed and limited amount of bandwidth has been allocated, there is no time to resend data if anything goes wrong. The data has a CRC as normal, but if the receiving side detects an error there is no resend mechanism.
The main problem is that the sampling rate requires transmitting 1kB every 125us, but no clock is perfect. The host might send every 124.75us. A naive receiver which simply buffers and clocks out every 125us (which might really be, say, 125.10us) will eventually find itself starved with an empty buffer, and you get a skip while it pauses to let the host catch up.

This is why formats like AES/EBU embed the sample clock in the data stream, so the downstream component is effectively clocked by the upstream. (This is also required to maintain lip sync for video.) With an embedded clock, clock recovery (looking at the stream and locking a PLL to the embedded clock) is quite feasible, but how do you expect to recover a clock from USB data packets? There is no way for the receiver to know that the actual sample playback rate is slightly high. You can't lock a PLL to USB packets. Clock recovery becomes MUCH more complicated, and the only realistic method is to maintain a buffer and clock according to its fill rate. Of course, larger buffers add latency, and any algorithm used to recover the host clock based on a USB receive buffer is bound to have a bunch of pathological failure modes.

In reality, of course, timebases move around, and an hour later it could be the host sending packets every 125.10us and the device clocking them out at 124.75us while thinking it's doing so at 125.00us per 1kB. This will result in buffer overrun, unless the receiver has an infinite buffer, and overrun will invariably result in some audible artifact. Clock recovery in isochronous USB playback is simply an intractable problem.

Asynchronous playback solves this with flow control; the receiver plays at whatever actual rate it wants and simply tells the host when it reaches a low-water mark, asking it to refill to the high-water mark. The host doesn't clock anything at all. This, though, in turn makes video lip sync a challenge.
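To put numbers on the drift (a toy model, not any real driver's algorithm; the 64-packet buffer is an arbitrary assumption, the periods are the ones from the example above):

```python
# Toy model of the drift problem: the host delivers one packet per
# nominal 125 us frame on its own clock, the device drains one packet
# per frame on *its* clock. Neither clock is exactly 125 us.
HOST_PERIOD = 124.75e-6   # host runs slightly fast
DEV_PERIOD = 125.10e-6    # device runs slightly slow
BUFFER_CAP = 64           # packets the receive buffer holds (assumed)

def time_to_failure():
    t_host = t_dev = 0.0
    fill = BUFFER_CAP // 2            # start half full
    while True:
        if t_host <= t_dev:           # next event: a packet arrives
            t_host += HOST_PERIOD
            fill += 1
            if fill > BUFFER_CAP:
                return t_host, "overrun (audible artifact)"
        else:                         # next event: a packet is consumed
            t_dev += DEV_PERIOD
            fill -= 1
            if fill < 0:
                return t_dev, "underrun (audible skip)"

t, failure = time_to_failure()
print(f"{failure} after {t:.2f} s")   # ~1.5 s with these numbers
```

With a 0.35us-per-frame mismatch, even a 64-packet buffer only buys about a second and a half before something audible happens; that's the problem asynchronous flow control sidesteps.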
 
Last edited:
Oct 14, 2017 at 7:08 AM Post #338 of 1,606
The main problem is that the sampling rate requires transmitting 1kB every 125us, but no clock is perfect. The host might send every 124.75us. A naive receiver which simply buffers and clocks out every 125us (which might really be, say, 125.10us) will eventually find itself starved with an empty buffer, and you get a skip while it pauses to let the host catch up.

This is why formats like AES/EBU embed the sample clock in the data stream, so the downstream component is effectively clocked by the upstream. (This is also required to maintain lip sync for video.) With an embedded clock, clock recovery (looking at the stream and locking a PLL to the embedded clock) is quite feasible, but how do you expect to recover a clock from USB data packets? There is no way for the receiver to know that the actual sample playback rate is slightly high. You can't lock a PLL to USB packets. Clock recovery becomes MUCH more complicated, and the only realistic method is to maintain a buffer and clock according to its fill rate. Of course, larger buffers add latency, and any algorithm used to recover the host clock based on a USB receive buffer is bound to have a bunch of pathological failure modes.

In reality, of course, timebases move around, and an hour later it could be the host sending packets every 125.10us and the device clocking them out at 124.75us while thinking it's doing so at 125.00us per 1kB. This will result in buffer overrun, unless the receiver has an infinite buffer, and overrun will invariably result in some audible artifact. Clock recovery in isochronous USB playback is simply an intractable problem.

Asynchronous playback solves this with flow control; the receiver plays at whatever actual rate it wants and simply tells the host when it reaches a low-water mark, asking it to refill to the high-water mark. The host doesn't clock anything at all. This, though, in turn makes video lip sync a challenge.

Correct. Most DACs nowadays, or at least those involved in the current discussion, work in USB asynchronous transfer mode.
As for latency issues (lip sync), isn't the main contributor the multitap interpolation FIR (when used) rather than the input data buffer size?
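Whether the FIR or the buffer dominates depends entirely on the tap count and the buffer depth, so here's a quick back-of-envelope with guessed, purely illustrative numbers (no specific DAC implied):

```python
# Rough latency comparison: linear-phase FIR group delay vs USB buffer.
# Tap count and buffer depth are illustrative guesses, not real specs.
fs = 44_100                        # input sample rate, Hz
taps = 256                         # assumed interpolation FIR length
fir_delay = (taps - 1) / 2 / fs    # group delay of a linear-phase FIR

buf_frames = 16                    # assumed buffer depth, 125 us frames
buf_delay = buf_frames * 125e-6

print(f"FIR group delay: {fir_delay * 1e3:.2f} ms")   # ~2.89 ms
print(f"buffer latency : {buf_delay * 1e3:.2f} ms")   # 2.00 ms
```

With those guesses the two are the same order of magnitude, so which one dominates presumably varies from design to design; either way, both are tiny next to the tens of milliseconds where lip-sync errors start to become noticeable.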
 
Oct 14, 2017 at 7:08 AM Post #339 of 1,606
There is no high fidelity, or absolute sound, in reproduced music. There may be a crude, so-called ‘high fidelity’, defined by positivists, that you can crudely and pragmatically measure by frequency response and a few other relatively easy-to-measure characteristics, but that does not truly measure the difficult-to-quantify characteristics of nuance, timing, timbre, ‘space between the notes’ and everything else that comprises the beauty of music.

You've made this argument before; it's typical audiophile nonsense based on the confusion between quantity and quality. That confusion is forgivable in many audiophiles, especially with so much marketing material deliberately designed to propagate and benefit from that confusion, but it is INCONCEIVABLE in someone who claims to be a university professor in the field of the philosophy of science?!

Quantity is the amount or number of something; in the case of audio, frequency and amplitude, which can be very accurately measured (and quantified) using universally accepted units designed specifically for the task: hertz (Hz, cycles per second) and decibels (dB), for example. Quality on the other hand is not a physical property, it is a purely human perception/judgement; it cannot be measured or quantified, there is no unit of measurement for it, and the very best which can be achieved is a very vague, general consensus of human judgement/opinion. As I explained to @Leo-, we can only record what we can measure/quantify; if we can't measure or quantify it then we cannot record it! Nuance and "everything else that comprises the beauty of music" are qualities; they cannot be measured/quantified and therefore CANNOT be recorded! The only thing we can record is frequency and amplitude, but of course we can choose which frequencies and amplitudes we record and, as we are human beings, we can base our choice of frequencies and amplitudes on what we perceive (judge to have quality, nuance and beauty) or, in the case of professionals, on what we believe other human beings will hopefully perceive.

In short, your descriptions of nuance, beauty and even the term "music" itself are quality judgements of human perception, not physical properties of sound, and therefore not properties of recordings or the reproduction of them! Nowhere is this confusion between quantity and quality more evident in the audiophile world than in the term "fidelity". Fidelity is the faithfulness/accuracy with which a piece or pieces of equipment reproduce an input signal (frequencies and amplitudes), effectively a comparison of input frequencies and amplitudes vs output frequencies and amplitudes, which can of course be measured and quantified. However, many in the audiophile world seem to think that "fidelity" is a quality rather than a quantity: that if some bit of kit is perceived as sounding good, or better than an equivalent bit of kit, then it has higher fidelity. There is no direct correlation between quantity and quality, and this is why we often see the exact same (or even lower) fidelity being perceived by audiophiles as better and then erroneously described as higher fidelity. Vinyl vs digital being just one of numerous such examples.

The opening sentence that I've quoted above is therefore not just nonsense, it's the exact opposite of reality. There is high and low fidelity (different degrees of accuracy), and absolute sound is the ONLY thing in recorded and reproduced music! How you've arrived at your conclusion and stated it so absolutely and confidently is, I suppose, testament to the effectiveness of audiophile marketing!

G
 
Oct 14, 2017 at 7:55 AM Post #340 of 1,606
Very funny. So you're saying it's impossible to measure distortion with an oscilloscope and the foundations of logic are flawed. That's why this thread is going nowhere.
It IS impossible to measure distortion with an oscilloscope! If you think otherwise, then state your method!

I didn't say anything about the foundations of logic in general, I said YOUR logic is flawed.
 
Oct 14, 2017 at 9:33 AM Post #341 of 1,606
This is why formats like AES/EBU embed the sample clock in the data stream, so the downstream component is effectively clocked by the upstream. (This is also required to maintain lip sync for video.)

While I agree with most of your post, it's not related to video/audio sync. A/V sync can be quite a complex subject but, to massively simplify: there is no AES/EBU sample clock in video and therefore no direct way to sync audio to video. The most common/accurate method of achieving it in the first place is to have an external master clock generating a video speed reference (black-burst or tri-level sync). This clock signal is fed into the video card, which regulates its clock accordingly, and into a sync unit which, using PLL or similar schemes, regulates a sample clock for use by the ADCs/DACs; we now have picture and sound regulated to exactly the same speed by use of exactly the same clock source. In addition, we have SMPTE time-code, which provides the positional reference, and the end result is hopefully what is called "frame-edge sync-lock". Once sync-locked during creation, something fairly serious has to go wrong during consumer playback in order to lose lip-sync. Clocking inaccuracies in any vaguely competent DAC would not be enough to account for the loss of lip-sync.

G
 
Oct 14, 2017 at 10:15 AM Post #342 of 1,606
It IS impossible to measure distortion with an oscilloscope! If you think otherwise, then state your method!

I didn't say anything about the foundations of logic in general, I said YOUR logic is flawed.

Sure, that's basic stuff. THD is simply the relative power of the harmonics of the reference signal. Inject the signal at the input, add up the power of the harmonics, measure the RMS power of the signal itself, and calculate the ratio of the equivalent RMS voltage of the harmonics to the RMS voltage of the signal itself. This way the influence of noise is also eliminated. I found a neat old-school manual illustrating how to do that with an old-school oscilloscope (the cover is a piece of art in itself :) ):

http://lcweb2.loc.gov/master/mbrs/recording_preservation/manuals/Tektronix Cookbook of Standard Audio Tests.pdf
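For illustration, here's the same harmonic-ratio idea in a few lines of numpy. The "captured" signal is synthesised with a known 0.1% second harmonic rather than taken from real hardware, and the tone sits exactly on an FFT bin to avoid leakage:

```python
# THD from an FFT: ratio of harmonic energy to fundamental energy.
# The capture is synthesised so the expected answer (0.1%) is known.
import numpy as np

fs, f0, n = 96_000, 1_000, 96_000         # 1 s of data -> 1 Hz bin spacing
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t) + 1e-3 * np.sin(2 * np.pi * 2 * f0 * t)

spec = np.abs(np.fft.rfft(x)) / n          # bin k sits at exactly k Hz
fund = spec[f0]
harmonics = [spec[k * f0] for k in range(2, 10)]

thd = np.sqrt(sum(h ** 2 for h in harmonics)) / fund
print(f"THD = {100 * thd:.4f} %")          # prints 0.1000 %
```

Any scope with an FFT mode gives you the spectrum needed for this.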

Now why is my blonde logic flawed, aside from the fact that Einstein may not have been blonde after all (maybe all white-haired people are intelligent by definition)?
 
Oct 14, 2017 at 11:28 AM Post #343 of 1,606
Sure, that's basic stuff. THD is simply the relative power of the harmonics of the reference signal. Inject the signal at the input, add up the power of the harmonics, measure the RMS power of the signal itself, and calculate the ratio of the equivalent RMS voltage of the harmonics to the RMS voltage of the signal itself. This way the influence of noise is also eliminated. I found a neat old-school manual illustrating how to do that with an old-school oscilloscope (the cover is a piece of art in itself :) ):

http://lcweb2.loc.gov/master/mbrs/recording_preservation/manuals/Tektronix Cookbook of Standard Audio Tests.pdf
Yes, I understand what THD is. That's never been the question. Now tell me how to measure it with an oscilloscope.
 
Oct 14, 2017 at 12:05 PM Post #344 of 1,606
I just did?

ps. I hope you're not nitpicking about the semantic difference between oscilloscopes vs. spectrum analysers. I've never used an old-style 'oscilloscope' that doesn't have an FFT mode, but then I don't know how old you are...
 
Last edited:
Oct 14, 2017 at 12:21 PM Post #345 of 1,606
I just did?
Ah. I see the problem now. You don't understand what an oscilloscope is and what it does. It presents a graphical representation of voltage vs time, but that's all. It will display the complete waveform, voltage on the Y axis, time on the X axis. Unless you have some superhuman ability to perform a visual FFT on that waveform there is no way you can determine the relative power of the harmonics, or the RMS value of the waveform.
The manual you linked to came with the Tek 5L4N, a low-frequency spectrum analyzer plugin. I know the device quite well; I used it extensively in its heyday. It's NOT an oscilloscope at all, it just uses one to present its results. You can't actually buy that device in any form today. It used a rather complex means to sweep a narrow-bandwidth filter across portions or all of the audio band, and presented a logged dB display.

The part of the scope that was essential was its analog storage capability. The spectrum analyzer took many seconds to sweep the entire band, presenting its results as frequency on the X axis and signal amplitude on the Y axis. But since the process took time (if you learn the theory of swept spectrum analysis, bandwidth vs time response, this becomes clear), the scope only presents a slowly moving dot on the front of the CRT, which wouldn't end up showing you anything unless the scope had storage capability. That scope used a very old analog method to do that. The resulting display was OK, but lacked contrast, and you had to take a Polaroid photo of the screen with one of the Tek scope cameras (yes, had that too) to maintain a record of your results.

It had issues too: the internal log amp drifted, the extreme low-frequency resolution was poor, and 20Hz calibration drifted with temperature. It was still useful, but for THD it was surpassed by Tek's AA501 THD analyzer (look that one up and learn some more), which let us input a sine wave and get an actual THD+N figure in a second or so, also providing weighting and band-limiting filters and an output that you could monitor to see if the reading was made up of harmonics or contained noise.
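The sweep-time point is easy to quantify: a swept analyzer has to dwell roughly 1/RBW at each resolution bandwidth, which gives a minimum sweep time on the order of span/RBW². Rule-of-thumb numbers below, not 5L4N specs:

```python
# Why a swept audio-band analysis "took many seconds": minimum sweep
# time scales as span / RBW**2 (standard swept-analyzer rule of thumb).
span = 20_000                    # audio band, Hz
for rbw in (300, 100, 30, 10):   # resolution bandwidths, Hz
    t_sweep = span / rbw ** 2
    print(f"RBW {rbw:>3} Hz -> sweep time >= {t_sweep:6.1f} s")
# RBW 300 Hz -> sweep time >=    0.2 s
# RBW 100 Hz -> sweep time >=    2.0 s
# RBW  30 Hz -> sweep time >=   22.2 s
# RBW  10 Hz -> sweep time >=  200.0 s
```

Narrow enough resolution to separate low-order harmonics of a low-frequency tone means sweeps measured in minutes, which is why a storage display (or a camera) was essential.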

The 5L4N is a Low Frequency Spectrum Analyzer, not an oscilloscope! Study up, know the difference.

Now, I think perhaps someone needs to stop making sweeping technical statements about things of which he lacks any clear understanding, like audio measurement, for example.

edit: nice edit you made while I was typing this. It doesn't matter; if you actually knew the difference you'd never have claimed you can measure THD with a scope. So you read your link, huh? The difference between a scope and a spectrum analyzer is hardly semantic.
 
Last edited: