Discussion in 'Sound Science' started by EnsisTheSlayer, Aug 30, 2017.
At your own risk.
Ok never mind
Alright all that criticism on Rob Watts. You saw nothing, nobody saw anything. We would never do such a thing.
Head-Fi is a little old school when it comes to personal attacks (MOT or not), but you can trash a product all you like; of course, it's better if your criticisms are founded on something real. As far as I know, Watts makes good products. I facepalm a little when I read stuff about the soundstage getting audibly better by switching from a billion taps to a gazillion taps, but it's not like I've tested it. So while I'm absolutely convinced that the direct correlation isn't the one he implies, because it makes no sense whatsoever, that doesn't exclude the possibility that, for reasons I don't know about, the gazillion-tap design works better and perhaps really is audibly different.
Then, in this subsection, you have an extra layer of protection from how the admins want to have nothing to do with us. If someone trolls a MOT in the Sound Science forest and there is no one around to read it, did it happen?
Jesus flying Christ, I'm listening to the Spotify version right now; I didn't know that properly recorded orchestras even existed. All I had listened to up to now was lifeless and dull. Can't wait to get home and give it a try on the Klipsches.
Concerning Chord, I enjoy the proprietary design thinking, but after A/Bing the CH1 and CH2 there are some subtle differences, and I actually liked the CH1 sound. I own the TT, which for some reason got the opposite marketing effect and was undersold by the community as a lateral move from the CH1, which to my listening is far from reality. Then I have read that the Blu Mk2, at 8,900 pounds, was not as impressive as other renderers/servers that cost considerably less, although far from cheap. Then there is Poly this and Davina that; I wonder if spreading the butter too thin to strive for that million-taps-FPGA this and that is a move from the Apple textbook: lock 'em in and load 'em up. But capitalism is a good thing regardless.
They must have made an exception in my case.
It's even more amazing when you consider that Fiedler's Gaîté Parisienne was recorded in 1954, when the LP record was still new and stereo hadn't even been introduced commercially yet. I think this was the second or third stereo recording made.
Just listened to Fiedler's Gaîté Parisienne, Living Stereo CD. It is really good!
That was 4 years before the stereo record, but stereo recordings were already being made then, actually long before. Blumlein and Fletcher were both experimenting with stereo recording in 1932, and Fletcher/Bell Labs demonstrated 3-channel stereo in 1933. (Remember, Fletcher at Bell determined then that the minimum acceptable channel count for "stereo" was 3.) There were many others. There had been a number of stereo film soundtracks too, the first commercial stereo release being Fantasia (1940), continuing through the early widescreen days of the 1950s, when movies were responding to the loss of audience to TV and wanted to make the movie experience into something bigger. Emory Cook was releasing two-channel stereo records in 1953 that were recorded with two separate mono grooves. Stereo tape was a German development; they recorded lots of stereo music in the early 1940s.
What you might be thinking of is RCA's stereo recordings, the earliest of those was 1954, so the Fiedler recording could easily have been one of the first of those. They were releasing on stereo tape a year later. No single-groove stereo records until 1958.
Yes, one of RCA's first... and RCA was the first to record in stereo for LP release. The first one RCA did I think was Stokowski but I don't think it was ever released. There's an interesting accidental stereo recording of Duke Ellington's band in the 30s. They cut two lacquers at the same time and for some reason, each one had a different mike connected to it and the mikes were standing side by side.
I went to a party at the house of the guy who discovered the accidental stereo. He said he was listening to alternate takes and realized that the balance was slightly different between the two masters, yet the performance sounded the same. He synchronized the two records by aligning each drumbeat individually. It took him a couple of months, but the results are stunning.
Firstly, many thanks to @JaeYoon for introducing me to this thread - and to @gregorio for the spirited, if somewhat vitriolic, take-down of Rob Watts' PowerPoint slides. Secondly, I hope nobody gets banned from Head-Fi here. Anybody remember what happened to NwAvGuy? Well, exactly. His family don't either. Don't mess with Head-Fi sponsors.
Thirdly, let me point out a couple of things here before I get to point four. I'm hugely appreciative of the efforts of this community to debunk audio BS. We all know there's too much of it out there, and frankly the Chord forums are as dull as crap, with sycophants all patting themselves (and Rob) on the back over their latest Chord purchases. That being said, it's just too easy to win an argument when you're the only voice in the room. I'd love to see Rob jump in here. Not because I want to witness a food fight, but because arguing is usually the best way to establish the truth.
Four. Ok, here we go. gregorio, most of the rest of this is for you - I agree that some of Rob's points are a little hazy and/or need further explanation/elaboration - and there may even be typos in there. However, I think some of your rebuttals are a little OTT. I'll try not to go through every point, because a lot of this is really just subjective, i.e., ranking on a scale of 0 to 100 how much we think science knows about how the brain processes audio. I'm sure we can all understand that 87.25% of such statistics are just made up. (A bit of very rough estimation perhaps, but hardly a disingenuous plug for Chord's products?) However, I tend to lean towards Rob's point that current neuroscience and our understanding of intelligence and audio processing is in its infancy. Since Rob isn't on this thread to respond himself, let me do my usual thing of playing Devil's advocate and chime in with some of my thoughts on specific points raised in response to Rob's slides. (Jeez - I can't believe I'm actually defending Chord!)
Slide 3. Each sound might be separated out by the brain - if you're concentrating on it. You and Rob seem to be in agreement here. How precise our spatial location abilities are depends on your expectations. 1 or 2 degrees azimuthally seems very impressive to me, but ok, you have higher standards. Thankfully, I'm not your wife.
Slide 4. Points 1 to 3 seem entirely reasonable. Perhaps, to satisfy the pedantic among us, Rob could have used the words "little understanding", rather than "no understanding" for point 1. But we're arguing semantics here. Let's look at point 2. It seems perfectly reasonable to require some margin of safety. Otherwise, you'd better have a 100% reliable way of knowing that the level you dismiss as inaudible is going to be inaudible in all circumstances. Consider artifacts a, b, c, ...z, all of which are inaudible when tested individually by a large sample of people with excellent hearing. Now what happens if you chain a+b+c+...+z? There's an excellent chance your null hypothesis is now wrong. You and Rob seem to agree on point 3. I'm with you on point 4.
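To put rough numbers on that a+b+c+...+z point: uncorrelated artifacts add on a power basis, so 26 individually "inaudible" artifacts at the same level stack up about 14 dB higher than any one of them. A quick sketch (the -100 dBFS figure is purely an illustrative assumption, not anything from the slides):

```python
import math

def combined_level_db(levels_db):
    """Power-sum of uncorrelated artifact levels (each in dB)."""
    total_power = sum(10 ** (lv / 10) for lv in levels_db)
    return 10 * math.log10(total_power)

# 26 artifacts (a..z), each individually at -100 dBFS:
levels = [-100.0] * 26
print(round(combined_level_db(levels), 1))  # -85.9 -- about 14 dB above any single artifact
```

Still probably inaudible in this toy case, but it shows why an engineering safety margin below the single-artifact threshold isn't a crazy ask.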
Slide 5. You claim "The importance of transients varies, from little/no importance to somewhat important." Do you have examples (other than that of an infinite periodic signal) where transients are of little to no importance? In all music (except perhaps electronic music), the ADSR (attack, decay, sustain, release) is exactly what helps us distinguish one instrument from another. This is how synthesizers trick the brain into thinking a keyboard is actually a trumpet. I don't deny there are overtones that affect overall timbre, but these aren't that relevant for short, staccato hemidemisemiquavers where there is very little time for the brain to process them. I also don't know exactly how quickly the auditory nerves fire in response to an acoustic impulse. I sent you a paper about this some time ago that Bob Stuart gave me - and I also didn't find it very compelling, but let's go with 4 microseconds, just to see how this plays out...
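For what it's worth, here's a minimal sketch of the piecewise-linear ADSR envelope a synthesizer applies to mimic an instrument's attack. All the parameter values (and the parameter names themselves) are just illustrative assumptions:

```python
def adsr(t, attack=0.01, decay=0.1, sustain=0.7, release=0.3, note_off=0.5):
    """Piecewise-linear ADSR amplitude at time t (seconds).
    'sustain' is a level (0..1); the others are durations/instants."""
    if t < 0:
        return 0.0
    if t < attack:                      # Attack: ramp 0 -> 1
        return t / attack
    if t < attack + decay:              # Decay: ramp 1 -> sustain level
        return 1.0 - (1.0 - sustain) * (t - attack) / decay
    if t < note_off:                    # Sustain: hold the level
        return sustain
    if t < note_off + release:          # Release: ramp sustain -> 0
        return sustain * (1.0 - (t - note_off) / release)
    return 0.0

# A trumpet-like patch uses a fast attack; strings use a slow one.
print(adsr(0.005))  # 0.5 -- halfway up a 10 ms attack ramp
```

The point being: shorten or stretch that first ramp and the same harmonic content reads as a completely different instrument.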
You next say "The sampling interval does NOT define timing accuracy. This is an old, debunked audiophile myth." You and Rob may be talking about different things here. (Rob doesn't even mention timing accuracy on this particular slide.) But ok, I think I understand what you're saying. I think the problem is not knowing what Rob is actually doing to recover "better" timing accuracy. I presume he must be talking about sharpening the reconstructed transient. (Because of the 4 microsecond thing?)
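The "sampling interval does not define timing accuracy" point can actually be demonstrated numerically: a band-limited pulse can sit at any fractional-sample position, and ideal (sinc) reconstruction recovers that position, so timing is not quantized to the sample grid. A rough pure-Python sketch with a truncated sinc sum (all numbers illustrative):

```python
import math

def sampled_pulse(n, delay):
    """Samples of a band-limited impulse whose peak sits 'delay'
    samples (possibly fractional) after sample 0."""
    x = n - delay
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, t):
    """Ideal (sinc) reconstruction at continuous time t, in sample units."""
    total = 0.0
    for n, s in enumerate(samples):
        x = t - n
        total += s * (1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x))
    return total

delay = 32.25  # a quarter-sample offset: about 5.7 microseconds at 44.1 kHz
samples = [sampled_pulse(n, delay) for n in range(64)]

# Locate the reconstructed peak on a fine 1/100-sample grid:
grid = [i / 100.0 for i in range(3100, 3300)]
peak_t = max(grid, key=lambda t: reconstruct(samples, t))
print(peak_t)  # 32.25 -- the peak lands between samples, exactly where it was put
```

So sub-sample (hence sub-23-microsecond) timing is already encoded in ordinary 44.1 kHz samples; the open question is only what Rob's filter does beyond standard reconstruction.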
Slide 6. I'm not entirely sure what Rob's driving at here either, but let me take a guess. Imagine I'm on a chromatic run with (for whatever reason) no transients at all in the reproduction of my notes - i.e., I'm slowly fuzzing from one note to the next. (Not because I'm not an outstanding bass player, but because of a poor digital reproduction.) At any point in time between me playing an E and a G, the pitch of the reproduced note is anywhere between 82.4 Hz and ~98 Hz. Are you hearing the start of the G, or an 87.3 Hz F, or a 92.5 Hz F#? I can see what he might be alluding to. Without a clear start/stop of each note, all you have is a continuous shift in pitch. It's a bit harsh to say Rob's lying. Also, I'm sure he's aware that one octave represents a doubling of the frequency. (Maybe I missed your point, but I'm not sure how that was relevant?)
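For reference, those pitches fall straight out of 12-tone equal temperament: frequency doubles per octave, so each semitone is a factor of 2^(1/12):

```python
A4 = 440.0  # concert pitch reference

def note_freq(semitones_from_A4):
    """Equal-temperament frequency, offset in semitones from A4."""
    return A4 * 2 ** (semitones_from_A4 / 12)

# E2 is 29 semitones below A4; G2 is 26 below:
print(round(note_freq(-29), 1), round(note_freq(-26), 1))  # 82.4 98.0
```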
Slide 7. Rob is absolutely right on this one. ADSR.
Slide 8. First four points I agree with completely. Pitch and timbre we've covered already. Starting and stopping is obvious (you're not really calling that a lie, are you?!). Yep - he's right about soundstage too - the timing differences reaching the left and right ears are a crucial part of what allows us to place sounds in 3D space.
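For a sense of scale on those left/right timing differences: the classic Woodworth spherical-head approximation (assuming a ~8.75 cm head radius and 343 m/s speed of sound, both just textbook ballpark values) puts even a 2-degree azimuth shift in the tens-of-microseconds range:

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth spherical-head estimate of the interaural time
    difference for a distant source at the given azimuth."""
    theta = math.radians(azimuth_deg)
    return head_radius_m * (theta + math.sin(theta)) / c

# A source 2 degrees off-center (the localization acuity mentioned above):
print(round(itd_seconds(2.0) * 1e6, 1), "microseconds")  # 17.8 microseconds
```

Which is at least consistent with the idea that the auditory system resolves interaural timing far finer than one 44.1 kHz sample period (~23 microseconds) - though, crucially, that is a cross-channel comparison, not a limit imposed by the sample rate.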
As for the last 4 points - well, I'm not a neuroscientist, but we agreed to roll with the 4 microseconds, right? "Very small" and "very big" can be subjective, so this isn't necessarily wrong. Timing is reconstructed by the DAC's interpolation filter - I don't know how we can dispute that. If we're free to interpolate within the sample points, we're free to change the timing - at least within that interval. By which, I again presume he means the slope or rate of rise. Maybe he's adding some kind of artificial compression via the filter to generate an ultra-sharp transient?
You then say "Timing accuracy between channels is perfect at 44.1, there is zero relative shift!" If we're dealing with sound that doesn't reach the left and right ears at the exact same time, then we'd better have a relative shift, or we'll have unphysically altered the soundstage. I believe what Rob is shooting for is a consistent reproduction of the timing (to L and R channels) from a given source (but the details he gives are hazy - see my later comment on this).
Rob's "everybody says pre-ringing is bad - everybody is wrong" comment. Yeah, I didn't like the arrogance of this statement either. But we seem to be talking about different things again. You're saying not to worry, because in most (all?) cases it will be inaudible. Rob is using the pre-ringing over-sampled signal reconstruction as input to a filter which is magically (I don't know the details of the magic) going to smooth out the Gibbs phenomena and recreate the nice, sharp transient. I'm sure we all appreciate Dirac delta functions don't usually exist in music. But what if I were to record a percussive instrument like a snare - or maybe even a blast wave on my next album? To all intents and purposes, these are like the initial spike of a Dirac delta to a 44.1 kHz sampling rate. I would absolutely want a DAC that could take care of these extreme cases, even if I only spent most of my time listening to something with a slower attack, like piano.
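On pre-ringing itself, the effect is easy to reproduce: any linear-phase (symmetric) FIR necessarily puts out energy before its main peak, because its impulse response is mirror-symmetric about the center tap. A small sketch with a windowed-sinc lowpass (parameters are illustrative, not Chord's):

```python
import math

def linear_phase_lowpass(n_taps, cutoff):
    """Symmetric (linear-phase) windowed-sinc lowpass. Feed it a unit
    impulse and the output IS this tap list. cutoff is a fraction of
    the sample rate (0..0.5)."""
    mid = (n_taps - 1) / 2.0
    taps = []
    for i in range(n_taps):
        x = i - mid
        h = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        # Hann window to tame truncation ripple
        taps.append(h * (0.5 - 0.5 * math.cos(2 * math.pi * i / (n_taps - 1))))
    return taps

taps = linear_phase_lowpass(63, 0.1)
peak = max(range(63), key=lambda i: taps[i])   # main lobe at the center tap
pre = sum(abs(t) for t in taps[:peak - 3])     # energy arriving BEFORE the peak
print(peak, pre > 0)  # 31 True -- the ringing precedes the impulse
```

Whether that pre-ringing is audible (your point) and whether a cleverer filter can trade it away without other costs (Rob's claim) are separate questions; the sketch only shows where it comes from.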
I don't think Rob is claiming to have invented the FIR filter. He's simply come up with an efficient (so they claim) way of implementing one on an FPGA, with (apparently) some kind of optimized set of coefficients. There seem to be some misunderstandings here: "The number of taps effectively defines the amount of attenuation of the stop-band signal." I would beg to differ. It totally depends on what the coefficients in the individual taps are. Same here: "In the first place, adding more taps can do more harm than good". That would only be true if your coefficients were screwed up. If your coefficients were all zero, you could add as many taps as you liked and it would have no effect. Generally, more taps = better, because it allows you to increase the order of accuracy of the filter.
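The coefficients-versus-tap-count point can be demonstrated directly: two lowpass filters with the exact same 101 taps, differing only in the window applied to the coefficients (plain truncation versus a Blackman window, both standard textbook choices), end up with wildly different stopband floors. A rough pure-Python sketch:

```python
import math

def lowpass_taps(n_taps, cutoff, window):
    """Windowed-sinc lowpass FIR. cutoff is a fraction of the sample
    rate (0..0.5); window(i, n) gives the window value at tap i of n."""
    mid = (n_taps - 1) / 2.0
    taps = []
    for i in range(n_taps):
        x = i - mid
        h = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        taps.append(h * window(i, n_taps))
    dc = sum(taps)  # normalize the gain at DC to 0 dB
    return [t / dc for t in taps]

def gain_db(taps, freq):
    """Magnitude response in dB at 'freq' (fraction of the sample rate)."""
    re = sum(t * math.cos(2 * math.pi * freq * i) for i, t in enumerate(taps))
    im = sum(t * math.sin(2 * math.pi * freq * i) for i, t in enumerate(taps))
    return 20 * math.log10(math.hypot(re, im))

rect = lambda i, n: 1.0  # plain truncation (rectangular window)
blackman = lambda i, n: (0.42 - 0.5 * math.cos(2 * math.pi * i / (n - 1))
                              + 0.08 * math.cos(4 * math.pi * i / (n - 1)))

# Worst-case stopband leakage over 0.2..0.5, SAME tap count (101):
stop = [0.2 + 0.002 * k for k in range(150)]
worst_rect = max(gain_db(lowpass_taps(101, 0.1, rect), f) for f in stop)
worst_bman = max(gain_db(lowpass_taps(101, 0.1, blackman), f) for f in stop)
print(worst_bman < worst_rect - 20)  # True: the coefficients, not the count, set the floor
```

More taps then buy you a narrower transition band and/or a deeper stopband for a given window, which is the sense in which more taps = better.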
I don't think the "Mysteries" slide relates to anything but a research project, so even somebody as cynical as me has a hard time thinking this is intentionally disingenuous. Reverb is whatever it is. I'd want my DAC to handle music recorded in a tiny room, and music recorded in a massive cathedral. I haven't heard these demos Rob talks about, so I can't confirm the comments about the effects on the soundstage depth. But... to reiterate a comment I left on another thread - if Rob is telling porky pies here, and nobody else but Rob can ever hear these improvements, he'll be shooting himself in the foot, because sooner or later the truth about this will come out.
As for the future - well, if this M-scale device improves the soundstage further as a result of improved transients, why would that make it flawed? A counter-argument might be that the digital artifacts we all live with now are what's really creating a fuzzy, imprecise soundstage, and that's what's flawed.
One other thing I'll add that spurred my defense of Chord here... Over the years I've done multiple listening tests comparing 44/16 rbf to various hi-res formats (96/24 and 192/24 PCM, and DSD64/128/256) from the same master, and I've never reliably heard a difference. However, A/Bing my various DAPs and DACs against a Chord Dave was an experience I recommend you all try at least once in your lifetime. The overwhelming difference with the Dave was its apparent ability to place instruments in space. From my other devices, the music still sounded clear, and I could still hear that the vocalist was in front of me somewhere, but the exact position was usually a bit fuzzy. Arguably, this effect isn't worth $13,500, but with the Dave, it was like the vocalist was right there in front of me - to the point where I could have reached out and touched exactly where her head was. It was spookily realistic. I don't know whether this is achieved through engineering brilliance with the timing of transients or some DSP trickery. But either way, it works.
I have no affiliation with Chord and I don't necessarily recommend their products - they're all fairly pricey. I would agree completely that Rob doesn't explain himself properly. There are two main issues I have with his presentation: 1) Can we really discern 4 microsecond transients? If not, everything else (including MQA) falls apart. But if it's true: 2) There's no mention of what his WTA filters actually do to recover these super-accurate transients. (The details are likely proprietary, which makes the technology difficult to assess with anything but the usual subjective nightmare of listening tests.) But to suggest that every line in his presentation is a lie and/or nonsense could come across as more disingenuous than Rob's original PowerPoint and does us no favors. IMHO
P.S. gregorio - I completely agree with you about Trump. I know it's not politically correct to talk about politics. It wasn't PC to talk about politics during the rise of the Nazis in Germany. And that's exactly why we had the rise of the Nazis in Germany. So I make no apologies for pointing out the obvious here - that our current president is a dangerous, ignorant, narcissistic, self-serving conman with no ethics or morals, who isn't fit to serve even at McDonald's. Even if we get lucky and the Russians take him out for failing to keep his end of the bargain, it'll take decades to undo the damage he's doing to this country and the planet. Trump is easily the worst example of a human being I've witnessed in my lifetime. I've seen more worthy single-cell lifeforms in stagnant pond water. IMHO, it's a terribly low blow to liken Rob Watts to Trump.
Too long! But I'll answer one point by asking another question... If current neuroscience and our understanding of intelligence and audio processing is in its infancy, don't you think that it's a lot more likely that we will learn about how the brain processes sound that we can hear, rather than sound that we can't even detect through carefully controlled listening tests? It seems to me that sound we can hear is what matters.
I'm with bigshot on this one. If a DAC is really making an obvious difference with the sound, I'm returning it. Though, I would do my best to try and blind test it first, to see if that obvious difference might vanish.
What's your reference point for the most perfect-sounding DAC, against which anything else (even if it seemingly sounds better) must be an aberration?
The half-dozen or more modern DACs I have listened to all sound the same to me. The measurements I've seen show that a reasonably well designed DAC should not have any sonic characteristics that differ from one unit to the next. I have not seen any evidence to suggest anyone else is hearing a difference between these DACs. For me, that is more than convincing enough. I'm here, so I am always interested in new information that might change my attitude, but your post is not doing it for me. Nothing new has been introduced that hasn't already been claimed several times before.
Well, my KSE1500 DAC definitely sounds different from my external DAC - and that's very easy to do an SPL-matched A/B with, because you can just switch from the line-in to a flat EQ, which flips on the internal DAC. But I hear you - I'll admit the differences are small. With the Dave, the differences I heard were more significant, but it's the whole package here because Dave is not just a DAC - at least I didn't test it with the KSE1500's line-in. My tests were done with Focal Utopias, which also used the Dave's amp.
I don't own a Dave. I'm not planning on buying a Dave. I'm not trying to push Chord products. I'm simply suggesting listening to the Dave - and A/Bing against your existing DAC(s). If you hear no difference, you'll have just saved yourself $13,500 + tax + shipping.