So we all have reviewers we follow – people we trust not to steer us wrong – people who have similar tastes, and even like the same types of music.
But reviewers are human, they are fallible, and they can fall victim to bad judgement, over-enthusiasm, personal bias, and placebo just like anyone else. I've been meaning to write this one for a while, and recent events with a pair of earphones have prompted me to get off my butt and finally finish it. It'll cover things like ego, like sighted bias, like plain old lack of experience – but it'll also be about how to improve.
And yes – the subject here is me.
Stepping back in time - an early lesson
So let's skip back to my early reviewing on Head-Fi – before people would send me stuff, before I ever had a front page review, when I was virtually unknown. I honestly can't remember what I was reviewing, or even who the person was who pulled me up – it was a long time ago. But I happened to comment that female vocals were most prominent in the 2-3 kHz area. I was so sure of myself, and stated it big and bold. And then I got pulled up on it, and set straight. I was also told bluntly that I didn't have a clue what I was talking about. But how could that be? Changing EQ in that area drastically changes the tonality. Yes dummy (me) – that's presence, harmonics if you like – it's not the fundamentals.

After being sufficiently put in my place, I proceeded to spend the next few months using the interactive frequency chart and other sites trying to understand where I'd gone so wrong. I'd spend hours at my PC with my HD600, playing tracks I really knew well and altering frequency response through EQ – first single bands, then multiple ones – finding out what changes, where instruments "play" in terms of frequency response, and how different changes can alter your perception. It was one of the most brutal lessons I learnt – but it's also one of the most valuable – especially now. And it was about this time that I started learning how to read a frequency chart, and really learning what my preferences were.
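For anyone who wants to see the lesson in numbers: female vocal fundamentals sit well below 2 kHz, but their harmonics (integer multiples of the fundamental) land squarely in that presence region – which is why EQ up there changes the tonality so dramatically. Here's a minimal sketch of that idea; the 250 Hz fundamental is just an illustrative example of a note in a typical female vocal range, not a measurement of anything.

```python
# Fundamentals of a typical female vocal sit roughly in the low hundreds of Hz,
# but the harmonics (integer multiples) extend well into the 2-3 kHz "presence"
# region. This lists which harmonics of a given fundamental land in an EQ band.

def harmonics_in_band(fundamental_hz, band_lo_hz, band_hi_hz, max_harmonic=40):
    """Return the harmonic frequencies of fundamental_hz that fall inside the band."""
    return [n * fundamental_hz
            for n in range(1, max_harmonic + 1)
            if band_lo_hz <= n * fundamental_hz <= band_hi_hz]

# An illustrative 250 Hz fundamental: the fundamental itself is nowhere near
# 2-3 kHz, but five of its harmonics are.
print(harmonics_in_band(250, 2000, 3000))  # [2000, 2250, 2500, 2750, 3000]
```

So a 2-3 kHz boost never touches the note itself – it reshapes the overtones, which is exactly the presence/harmonics point I'd missed.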
Not the most rapid learner though!
Fast forward a couple of years, and I'm starting to be noticed. I have companies approaching me to review their products. I'm being involved with tours. People are following what I write.
Let me repeat that – people are following what I write.
That is a big responsibility that I recognise now – but going back a couple of years, I was probably too consumed with making a name for myself, and with simple ego. It seems the lessons of the past aren't always learnt the first time. I do think I'd improved as a reviewer by then. I understood my own biases a lot better, I was trying to be more objective about what I was doing, and I was relying on measurements rather than on ear alone. This is all good. So where is it leading?
Well, as Chris's (HawaiiBadBoy) video reviews so eloquently put it – it was about this time that Brooko knew he'd [expletive] up. Only I didn't know it – not when I wrote it. I do now.
Noble was kind enough to tour a Savant, and I got the chance to spend just under 2 weeks with them. I actually wrote a pretty honest review – and I was dead set sure I'd covered all the angles. I loved the IEMs – but I wrote something which then got parroted quite often. I said they had a problem with the sub-bass.
So I need to paint a picture before I continue, so that you know now what I didn't realise then.

- I had been listening to a lot of triple hybrids for a while, and while I thought I knew what quality bass sounded like, I know now I was heavily skewed toward the IEMs I'd had experience with. And quite a few of them had enhanced sub-bass.
- I measured the Savant (I spent hours doing it) – but my measurement rig at the time consisted of an SPL meter, some tubing, test tones, and a spreadsheet. I measured the response at different frequencies and used those readings to build a graph, using C weighting and a conversion table provided by Head-Fiers twj321 and DJScope to show what I'd found.
- And now the biggie – I created the graph before I did the critical listening.

So what happened? This beautiful sounding IEM was critiqued by me because of something I saw on a graph before I ever noticed it aurally – placebo anyone? After graphing it (and there is no way my graphs were 100% accurate, although I didn't know it at the time), I compared it with other IEMs with enhanced sub-bass, and incorrectly drew the conclusion that the Savants were sub-bass light.
Here is my initial graph.

Here is my comparison graph.

It wasn't until many months later (with my current measurement rig) that I got to measure them properly. And this time here is the correct measurement.

Big difference huh?
Note that even this one isn't correct above 4-5 kHz – my coupler isn't 100% calibrated – but I'm pretty confident in everything below that. Also be aware that this is raw, uncalibrated data.
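For those wondering what "uncalibrated" means in practice: raw coupler data includes the coupler's own response, and compensating for it is just a per-frequency subtraction of a known calibration curve. A minimal sketch follows; every number in it is invented purely for illustration, not taken from my rig.

```python
# Compensating a raw coupler measurement: subtract the coupler's own response
# (from a calibration curve) at each frequency. All values are hypothetical.

def compensate(raw_db, coupler_db):
    """Return raw measurements with the coupler response subtracted, per frequency.
    Frequencies missing from the calibration curve are left uncorrected."""
    return {f: raw_db[f] - coupler_db.get(f, 0.0) for f in raw_db}

raw = {100: 94.0, 1000: 96.5, 5000: 101.0}   # hypothetical raw SPL readings (dB)
coupler = {100: 0.0, 1000: 0.5, 5000: 4.0}   # hypothetical coupler response (dB)
print(compensate(raw, coupler))  # {100: 94.0, 1000: 96.0, 5000: 97.0}
```

Without a trustworthy calibration curve for the upper treble, the honest thing is to do what's done here for the missing bands: leave them raw and say so – hence my caveat above 4-5 kHz.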
I made a very bad call, I did it publicly, and I was wrong – very wrong. I've since updated the review with the new graph, and if I ever get the chance with the Savant again, I'll rewrite the review completely – this time with far wiser eyes/ears and a more open mind.
I take this opportunity now to apologise unreservedly to Noble, to the people who may have been misled by the review, and particularly to Dr John Moulton.
And a special note of thanks to Jude who (when we were having a chat about measurements) was gracious enough to give me some advice about what I did wrong.
Reviewing is a learning game – none of us is perfect – and especially not me.
So what about now?
So have I improved? I'd like to say yes – but I still make mistakes along the way. I'm a lot more aware of them nowadays though – and more importantly, I'm very open to being corrected, and to going back to correct mistakes. Take my Brainwavz S3 review – have a look for the corrections in red (I think I updated those 3-4 months ago). None of us is bulletproof – we all have faults, biases and ego. The measure of a reviewer (in my eyes anyway) is how open we are to correcting those errors and learning from them.
Fast forward to the present time, and if anyone has seen the QT5 thread, they'll notice some real discrepancies. I only got involved because someone mentioned the new Fidue Sirius as being overpriced (funnily enough they hadn't heard them, yet still tendered that opinion) – when there are other 5 driver hybrids around at a fraction of the price. The QT5 was mentioned, so I investigated. I found someone in NZ who had a pair, arranged to swap them for my 64Audio Adel U6 for a week, and proceeded to review them. They are among the worst IEMs I've come across in my entire time as a reviewer (not the worst – but getting there). The review is here – if anyone is interested.
The point is that they were touted (by more than one source, and on review sites other than Head-Fi) as being basically 5 star earphones. Since I reviewed them I've had a lot of PMs thanking me for posting the review and expressing the wish that I'd had the chance to review them earlier (before they bought them). The stories have all been the same: they were expecting something amazing and got something disappointing. I'm only one data point – but a consensus seems to be emerging that either ZhiYin's QC and consistency is all over the place, or reviewing standards need to be lifted. I'm not pointing fingers – it could genuinely just be the QC (I do have my doubts though, given the picture that seems to be emerging).
What can we do?
So for those prospective or current reviewers out there – my advice is just to be aware that the advice we give in our reviews leads to people spending real money. We owe it to them, and to ourselves to question (continually) everything we write. We need to be more objective (and that means all of us – and especially me). And if we make mistakes – we need to own them, and we need to correct them.
We can all lift the standard – but to do that we also need to recognise that we all have room to improve.
Thanks for reading.
Paul