Disclaimer
This is an opinion piece – purely my opinion. Feel free to discuss, to agree, to disagree. But please keep it polite, and keep the emotion to a reasonable level. I am calling out the system here – not necessarily the people.
Intro
Warning – this is a “rant”. It's been building for a while, and I can't hold it back any more. It is my opinion that the SNR (signal-to-noise ratio) among a small group of regular reviewers is getting so bad lately that something has to be said, or we may as well just admit that Head-Fi is purely an advertising site for the poorly informed.
Brooko – calm down. Surely you're one of these people too, and are you saying all reviewers are like this?
No, I'm not saying that, although I will state up front that I am not perfect (none of us are) – reviewing, after all, is at its heart subjective.
So let's go back a bit to the heart of the matter.
History on my journey
When I first joined Head-Fi almost 7 years ago, I did it for the audio discussion, and because I wanted to learn about what was out there so I could make informed decisions about my audio chain. I was very green, very naive, and pretty much used people's post counts as a measure of whom I could trust. I would also read multiple reviews and look for consensus.
Back then, there were few reviewers getting review samples, and few tours. Companies like FiiO were just getting started. The explosion of gear out of China into the west hadn't really started. I started reviewing my own gear and becoming comfortable with the community I was in. Soon after that I was approached by a company to see if I wanted to review some of their cheaper IEMs. Great – good chance to hear more gear, and it meant I didn't have to pay (growing family meant I couldn't afford to).
First reality check: I reviewed something, was talking about the mid-range and where vocals sat, and got pulled up by someone who basically told me I was spouting a bunch of nonsense. We PM'd – he told me where I was going wrong, where I had knowledge gaps, and basically asked how I could review gear if I didn't understand audio and what I was writing. He was right. I look back on some of my earlier reviews and I cringe. So I started changing my style, asking questions and learning. Along the way I started recognising many of my own personal biases, and noting them in my reviews. I stopped trusting solely what I thought I was hearing, and started using measurements to check, and also to help me explain.
I also recognised that what I was writing was potentially influencing people's buying decisions. This was slammed home to me when someone made a purchase based on one of my early reviews, and then PM'd me to state that I shouldn't be reviewing, that my review was inaccurate, and that it was their worst buy ever. That PM ended with the accusation that I was shilling. I'll never forget that. From that time my attitude to reviewing changed. I started writing what I thought a buyer would want to know – or at least what would be important to me if I was buying. Strangely enough, from that point, I started getting people following me (or following my reviews).
A matter of trust
As a reviewer, the greatest words you can hear (usually via PM) are pretty much the following:
- “I trust your opinion”
- “You tell it like it is”
- “I trust you – what do you recommend?”
Trust is huge. I read a lot of reviews, but I gloss over (or miss entirely) the reviewers I no longer have any trust in. And why don't I trust them? Because they are inaccurate. If I can't trust what they are saying, then it is pointless reading what they say.
So the system should weed out those who are inaccurate, right? Those who don't earn trust should be read less, and the ones who do should rise above. But that is where the system is broken. I mean, what do you do when a group of reviewers are all saying the same thing, and they are all getting a lot of exposure? They must be right if they all concur, and if they all keep making the front page, right? So let's explore that thought a little further. Let's talk about SNR (the signal-to-noise ratio).
The problem
This is a generalisation – not how I do it – but a summary of how a reviewer can feel, and where the pitfalls are.
I am a reviewer. I want to be able to write about my experiences with gear, but I can't afford it. So I start writing about the gear I can, I start joining tours, and I start soliciting samples. However – if I write anything negative in the reviews, the manufacturers may drop me, and the likelihood is that Head-Fi won't feature the reviews. If I can't get the samples, and I can't get the exposure, then my budding “career” is over.
So how do I solve it? When I review I mainly talk about the good points, I “gloss over” the bad ones, and I rate a lot more leniently than I should. I also chuck in a lot of subjective “glowy” descriptions, and inadvertently make a lot of subjective claims about the gear regardless of whether they are true – because I have to fill the review with something, right? In short, I have written the review for the manufacturer, and to meet Head-Fi's standards for making the front page (good pictures, very positive, high score).
But what have I sacrificed? The review is now not accurate (the noise has got to a point where it masks the true signal), and my description is partly a work of fiction.
But hang on Brooko – that's BS. No-one could get away with this. X is also a reviewer and I trust his/her opinion; surely you're talking about a small sub-set? No – I'm talking about a mind-set, and it's getting worse. If we don't stop it now, it will continue to grow. Let's look at some examples.
Some examples
This is not meant to name and shame – just to bring it to people's attention.
The ZhiYin QT5 saga
What happened – a group of regular reviewers had the IEM, it was being hyped up massively, and my interest was piqued when someone claimed it was better than the Fidue A91 Sirius (both 5-driver hybrids). Luckily a fellow kiwi had a pair, so I arranged to swap it for my U6 for a week, and I reviewed it. It is easily the worst IEM I have ever heard. At the time, all the reviews and comments were describing a 4-star IEM. I got comments afterward from a variety of people about how bad it was – but no-one was willing to speak up.
The Kinera H3
This isn't a really bad IEM; in fact it has a lot of potential. I'm pretty sure a couple of reviews have already made the front page. Mine hasn't so far – and likely won't. Why? Because although I believe the review is accurate, it is not hugely positive. The IEM sounds really good if you extensively EQ it, but it has a hard drop in the fundamental and mid-range, an early and sharp rise at 1-2 kHz, and far too much lower treble. There have been 13 reviews so far: 5 are 5 star, 6 are 4 star, 1 is 3.5 star, and mine is 2.5 star. So mine is the outlier, right? Almost all of the other reviews were of free samples (mine was too), and it is my sincere belief that most of the reviews and scores were written with a natural bias toward the manufacturer. Why does this one really resonate with me? Penon were the ones who approached me about the Kinera H3. They gave me my choice of IEMs to review, and I chose the H3 based on the reviews and feedback. If I were an actual buyer I'd be thoroughly peeved by now. Interestingly enough, I've already had feedback from a couple of buyers confirming some of the issues I listed in the review.
Other instances
Just recently I got into a debate when someone on an external review site claimed that the balanced output of a particular DAP, with a particular IEM, changed the frequency response. It doesn't (and I can prove it – I have the same gear). The response was that he relies on his ears and doesn't believe in measurements. ***. It's not about a competition – it's about accuracy.
And the latest one was a debate about an X7ii review. The claim: with the AM3A amp, it is U-shaped. I refuted it, pointed to the measurements, and suggested the need for accuracy. Once again I got back the usual “well that's what I heard”. The issue: he's listening with a U-shaped IEM. I pointed out that shouldn't make any difference when describing the DAP. He next brought up Fletcher-Munson (rolls eyes) and then claimed that the output impedance of the DAP is influencing things. I pointed out that his IEM has a flat impedance curve and a dynamic driver, so the output impedance would have no influence either. The point I'm trying to make is that many of the “newer crop” of reviewers are either making things up, or they are so invested in their own opinions that they simply don't know how to review with a shred of objectivity.
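For anyone who wants to check that argument for themselves, the underlying maths is just a voltage divider: the driver sees V_load = V_source × Z_load / (Z_load + Z_out). If the load impedance is flat across frequency, the loss is identical at every frequency – the level drops a touch, but the frequency response shape doesn't change. Below is a minimal sketch of that calculation; the impedance figures are invented for illustration, not measurements of any real IEM or DAP.

```python
import numpy as np

def attenuation_db(z_load, z_out):
    """Voltage-divider loss (dB) of a source output impedance into a load."""
    return 20 * np.log10(z_load / (z_load + z_out))

freqs = np.array([20, 100, 1000, 5000, 10000])       # Hz (illustrative points)
flat_dd = np.full(len(freqs), 16.0)                  # flat 16 ohm dynamic driver
multi_ba = np.array([40.0, 35.0, 16.0, 8.0, 25.0])   # invented swinging BA curve

for z_out in (1.0, 5.0):                             # two hypothetical DAP outputs
    dd_spread = np.ptp(attenuation_db(flat_dd, z_out))   # ptp = max minus min
    ba_spread = np.ptp(attenuation_db(multi_ba, z_out))
    # A 0 dB spread means every frequency loses the same amount: FR unchanged.
    print(f"Zout {z_out:.0f} ohm -> flat DD spread {dd_spread:.2f} dB, "
          f"swinging BA spread {ba_spread:.2f} dB")
```

With the flat load the spread is 0 dB no matter what the output impedance is – which is exactly why a flat-impedance dynamic driver can't have its frequency response tilted by the DAP. A swinging multi-BA load does show a frequency-dependent loss, and that's the only case where “it sounds different from this output” has a real electrical cause.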
The dilemma
I don't know if you remember, but a little while ago Lachlan (Lachlan likes a thing) raised many of the same points about manufacturer bias, and what we are all doing wrong as reviewers. I debated him at the time – and I still would. His idea was to ask people to crowd-fund him. That way he would be writing for them (more accurate), and we could get away from reviewers being compensated only in review samples. Of course his idea (to me anyway) is patently ridiculous. You'd end up with only one or two reviewers, and really all he's doing is feathering his own nest.
The dilemma really is that people want reviews that are accurate, but people like me can't afford to buy everything. Not only that – people want comparisons, which means you virtually have to keep a lot of the gear so you can do side-by-side, volume-matched tests. Unless you're rich, you have to rely on manufacturers.
Tours are good, but I'm amazed at how many of the bigger reviewers don't do them – I know at least a couple who will not do a tour unless they get to keep the sample (I won't name names – but what I am stating is fact). That is their prerogative, but I think it is telling as to their reasons for reviewing. I do tours because otherwise I won't get to hear the product. Campfire is a good example, and I genuinely thank them for the opportunity. The issue with tours, of course, is that when I later get asked to compare products, I can't – the tour item is gone.
Some regard keeping the sample as a deserved reward for the time spent doing a review. What? If you're reviewing for the gear, you shouldn't be doing it. It's the same as shilling: you are writing a review for a reward (keeping the gear).
So how do you get accurate reviews of the gear without radically changing the rules?
The “catch-22” is that you're expected to be overly positive if you want to review and can't afford to buy, and you are penalised if you write a more accurate review. In both cases it's ultimately the manufacturer who is complicit in deciding how things are handled, and who should learn that accurate feedback will ultimately help them improve. But Head-Fi admins could (and IMO should) also change the landscape, and start recognising that positive and not-so-positive reviews are both important to highlight. I doubt that will happen – and it'll be the subject of another blog post at some stage soon (to do with my future here).
So what do I do (personally)?
Here is simply an example of how I do things. It's not perfect, but some of the ideas may help.
- I don't approach manufacturers. I let them come to me. This will not be acceptable to a lot of reviewers – especially those starting out, and I can understand that. But to me, your rep should speak for itself. I can influence buying decisions with what I write – but that depends on how truthful and transparent I am. So the way I do it is to create distance from the manufacturer. I've written less-than-positive reviews of FiiO, Dunu and a number of other manufacturers. The good ones come back – simply because they recognise that you are actually helping them develop better products, and that you're not afraid to tell them your subjective truth.
- I recognise (as much as possible) and state my biases. I then work to eliminate them so that I am as neutral as possible when I review. If you ignore your natural biases, you'll never be accurate.
- I treat the review samples as just that – samples. They are not mine, and I don't do this for the gear. To this day, the gear I use mostly for my own pleasure is the gear I have purchased (or won in competitions) – the stuff that is legally mine: HD800S, HD600, K553, MS Pro, iDSD, Curve, q-Jays, Dipper, LSR305s. If I like something I've reviewed so much that I want it for pleasure, I buy it. The only exception to this (that I am aware of) is the FiiO DAPs. FiiO won't allow me to buy anything from them any more (they won't accept payment from me). They use me for software testing, reviewing, and critical feedback. I've spent hundreds of dollars on their gear in the past, and they expect me to be completely honest – I can live with this.
I tell every company that I will return the gear at their cost whenever they desire. But they also know that leaving it with me means more mileage (good or bad) through further product comparisons in the threads and subsequent reviews.
- I always volume match when reviewing (see the sketch after this list). If you don't (with accuracy), then (to me anyway) your review is a technically flawed opinion piece. I can't trust anything you say about relative comparisons, and I'll simply read your review for the features.
- I use my ears to describe what I hear, but I use measurements for consistency, to check accuracy, and to understand what I am hearing. It is very cheap to get a reliable set-up, and I'm prepared to help anyone who wants to calibrate for a bit more accuracy. The sketch after this list shows the kind of basic check I mean. I personally think that if you want to be a good reviewer you simply must use all the tools at your disposal. If you don't have them – get them.
- I state where and how I got the product.
- I take the time to get to know the product, and I also re-calibrate with more neutral gear while I'm reviewing. I know I can get used to anything over time (that marvellous “brain filter”), so it's important to recognise this and correct for it.
- I take critique and try to improve. I also recognise that my subjective opinion is unique to me. So a certain measure of objectivity is essential.
- I try to state the good with the bad, and generally be gentle but accurate with the critique. You shouldn't take pleasure in pointing out errors – merely be factual. Everyone makes mistakes (manufacturers included). Help them improve, but also remember that your readers are your target. They are the ones you have to be truthful to.
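And since I promised it above – here is a sketch showing the kind of volume matching and measurement sanity-check I mean. It assumes you've captured two IEMs playing the same test track on the same coupler and saved mono WAV files; the file names are placeholders, and the whole thing is an illustration of the method, not my exact rig or scripts.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def rms_db(x):
    """Relative RMS level in dB (uncalibrated - fine for level matching)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

# Hypothetical coupler captures of the same pink-noise track on two IEMs.
sr_a, a = wavfile.read("iem_a.wav")   # placeholder file names
sr_b, b = wavfile.read("iem_b.wav")
a, b = a.astype(np.float64), b.astype(np.float64)

# 1. Volume match: the gain needed to bring B to A's level before any A/B.
gain_db = rms_db(a) - rms_db(b)
print(f"Apply {gain_db:+.2f} dB to IEM B before comparing.")

# 2. Measurement check: smoothed spectra of both captures on the same scale.
fa, pa = welch(a, fs=sr_a, nperseg=8192)
_, pb = welch(b * 10 ** (gain_db / 20), fs=sr_b, nperseg=8192)  # assumes sr_a == sr_b
diff_db = 10 * np.log10(pa + 1e-20) - 10 * np.log10(pb + 1e-20)
for f_target in (100, 500, 1000, 3000, 8000):        # spot-check frequencies
    i = int(np.argmin(np.abs(fa - f_target)))
    print(f"{fa[i]:7.0f} Hz: A minus B = {diff_db[i]:+5.2f} dB")
```

If your ears and the difference curve disagree by several dB, it's usually the ears (or the rig) that need re-checking – which is the whole point of using measurements to keep yourself honest.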
I don't want to get people's backs up with this. I'd rather have a dialogue, explore the issues, and work out how we solve them. I'm analytical – that's simply how I work. There will be people out there who regard this as an attack, so I will simply state this: if you're a reviewer, have a look at your profile and how many people are following you. That number is an indicator. The more people following you, the more who are interested in what you say. Yes, the following will be built over time – but mostly it will be built on trust. It should be one of the most important statistics on Head-Fi if you are a reviewer. I'm not going to quote numbers – look for yourself, and compare your own to other reviewers'. Make a personal choice. What you do with your reviewing will influence that number.
So how do we fix this?
I guess that’s the whole point of this blog post.
How do we do it? Let's get some dialogue going. I take this stuff very seriously. It's one of the reasons I gave up my Moderator status recently (I was accused of using my position to put down others when discussing gear – there are other reasons too, but that was one of them). To this day I think the critique was unfair and inaccurate, but I had to eliminate it as a possibility, and my reputation as a reviewer is too important to me.
Every critique I've had (including accusations of shilling from one or two people) has caused me to look at what I'm doing and how I'm approaching things.
Two things would help the most. First, manufacturers should look at what they actually want from a review. If they regard Head-Fi as merely advertising (and this may be the truth of it), then the current system will continue.
And second, Head-Fi administration needs to decide whether their current review system is to continue. If they reward a low SNR with front-page exposure, then they are setting the rules of engagement, and the website will continue to get the types of reviews which are becoming more and more prominent, and less and less useful to buyers.
Ultimately if things continue in their current vein, I will follow through on my current decision to fade out early next year. I've already talked to a couple of manufacturers about this – interestingly they've asked me where my new home will be as they'd like to continue the relationship. Again – that little word trust. When you have it between readers and manufacturers, you know you are in a good space.
So let's discuss. How do we improve things (reviewing) before it all goes so far it's irrecoverable?