o0genesis0o
Headphoneus Supremus
Yeah, your closing words, "it's not bad if you don't have a good $50 IEM," are basically a one-sentence summary of the Cadenza. Definitely not the "sub-$50 king" it was hyped as. This is my final conclusion on the Cadenza after tip rolling and 100 hours of burn-in.
Calling attention to the ongoing trend of the "frequent collab reviewer."
The Cadenza hit HBB's top sub-$50 ranking list based on:
▶︎Its tonal tuning fits his preferred curve (warm bass, inoffensive treble, neutral mids)
▶︎Possible business incentive from a potential future collaboration with Kiwi Ears. At least the Cadenza isn't going to cheapen his HBB brand; it's not a bad IEM, so there is a plausible monetary motivation. (I'm not saying he is being bribed by Kiwi Ears. It's most likely voluntary hype.)
The same business incentive applies to Crinacle's latest ranking list. I see some "unusual" positively or negatively biased scores that could be rooted in his own collaboration motivations. Not many, but a countable few.
From a professional-ethics standpoint, these conflicts of interest are not good ingredients for their reviews. As a human being, it is hard to completely separate personal interest from unbiased reviewing. In the business world, a firm providing "business consulting" is supposed to be separated from the firm conducting objective audits.
That's what I see in HBB's and Crinacle's output: once purely personal reviews have turned into "95% of it sounds fair, but some questions remain" reviews. That remaining 5% is a bit phony, because these are not review amateurs who would mistakenly misjudge a score by more than a 10% margin of error against actual performance. The positive/negative bias is beyond margin of error, which makes me wonder whether any intentional bias is involved.
I'm not implying
"Don't believe the scores of reviewers who do constant collaborations/consultations."
Just take them with a grain of salt.
Note that there may be monetary or intangible motivations and incentives behind a self-proclaimed unbiased review.
Then apply subjective adjustments for the potential positive/negative biases in their review product, to estimate the realistic result. Think of it as a "stress test" in technical analysis: apply stress to a potentially biased dataset, then salvage a recap of the result.
Back to the Cadenza: it works as a daily IEM for watching TV shows on a tablet, with good, dynamic, theater-like sound reproduction. Technically it is just average, though, with somewhat rough-textured detail articulation and mushy bass.
Recap of Cadenza:
Not bad. Not a game changer. If you already own a good $50-class single DD, it's not worth buying. If not, it's a fair offer, but there are better choices, like Tripowin's Lea, which can be found for as low as $23 vs. the Cadenza's $35.
I have just seen this one. Boy, that's serious.
No comment on the bias thing, especially since some people make a living writing these reviews.
I think this race to the bottom is getting out of hand. We get it, you (chi-fi manufacturers) nailed the tuning. No need to brag about good tuning on a budget; it's not 2021 anymore.
Give us good technical performance and good build quality. Perhaps those things are not as cool as "look how nicely we tune" or making the statement "you don't need to pay for tuning" (personally, who cares? If you have poor technical performance, you still create a poor listening experience even if you hit Harman 100%. If you have outstanding technical performance, you can get away with slight tuning mistakes).