All reviews are subjective, of course :) (Personally, I think that they wouldn't be fun to read otherwise!)
Nonetheless, I tried to limit that subjectivity in two ways:
First, I awarded only component scores, not overall scores. The overall scores (including the category scores) are just a mathematical average of the individual component scores. I found that my biases were greatly lessened when I was grading IEMs in minute categories like 'Mids Detail', since even with one's favourite signature one can sometimes find specific things to dislike. So I graded the IEMs again and again across different criteria, and had no idea what the final scores would be until I'd gone through every single criterion. A good indication of the effect this had: when friends initially asked me to rank the IEMs in order of general preference, I found myself ranking them in an order that ended up being very different from the final results I'm posting in Fit for a Bat.
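For the curious, the averaging described above can be sketched in a few lines of Python. The category and component names here are hypothetical stand-ins, not the actual rubric, and the exact weighting (whether the overall is averaged over components or over categories) is my assumption about the simplest version:

```python
from statistics import mean

# Hypothetical rubric: component scores grouped by category.
scores = {
    "Mids": {"Mids Detail": 8.5, "Mids Timbre": 7.0},
    "Treble": {"Treble Extension": 6.5, "Treble Smoothness": 9.0},
}

# Each category score is just the plain average of its component scores...
category_scores = {cat: mean(comps.values()) for cat, comps in scores.items()}

# ...and the overall score is the average of every individual component
# score (one could equally average the category scores instead).
overall = mean(s for comps in scores.values() for s in comps.values())

print(category_scores)   # e.g. {'Mids': 7.75, 'Treble': 7.75}
print(round(overall, 2)) # e.g. 7.75
```

The point of the structure is that no number is ever entered as an "overall" opinion; the top-level figure only falls out of the component grades at the very end.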
Second, I scored them relatively. Obviously this was only possible because I had a wide range of great IEMs to set up comparative benchmarks with, and for that I must say I'm really lucky. Oftentimes differences only revealed themselves in direct A/B comparisons between IEMs. So I would listen to one and go, 'hmm, IEM x has pretty detailed mids'. Then I'd listen to IEM y and go, 'well, this is just slightly better, ain't it?'. This matters because it means I couldn't just listen to one IEM, decide I liked it a lot, and then proceed to give it really good scores for every component. Nosir. Each score was awarded, one component at a time, only after comparing against every other IEM in the shootout.
Hope this clarifies! I always wondered if my first post glossed over this a bit too much, but ultimately I decided nobody wants to wade through that much fine print :)
Whatever your method, thank you for throwing piles of money at people so we could have a good read! You are quite the gentleman.