neilvg
Headphoneus Supremus
- Joined: Jun 10, 2004
- Posts: 1,585
- Likes: 41
IMHO there is something being left out of this argument that we need to consider. I, and I'm sure many others on this forum, have some experience with studio recordings and mastering houses. I've been lucky enough to use some of the tools commonly found on today's records for mixing, mastering, and remastering. And here is what I have to say:
When doing some sort of enhancement (spatial, tonal, etc.), the electronic processing is generally applied on a very content-specific basis. Especially when remastering a project, most engineers will do quite a bit of research into the 'genetic makeup', so to speak, of the actual recording, and then apply complementary but modern processing that lets it retain its original signature while restoring the qualities that were lost or diminished in translation.
Let me give you one clear example of why this is important:
Let's say you have a recording with a vocal panned 30 degrees to the left, and a copy of that vocal delayed by 1.2 ms and panned 30 degrees to the right. The subjective experience is a slightly fatter sound emanating from the middle of the sound field, but without a specific pinpoint. Where the two copies overlap in the sound field (remember, hard left and hard right are essentially two completely different outputs; our ears do the combining when we hear them together), there is a certain amount of phase cancellation that in many cases actually serves the song or the vocal. It can thicken the sound up, or mask a particular trait (a nasal vocalist, etc.).
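To put a number on that, here's a quick back-of-the-envelope Python sketch (nothing from any particular plugin, just the textbook sum of two equal sines offset by a delay) showing where a 1.2 ms offset cancels and where it reinforces:

```python
import math

DELAY = 0.0012  # the 1.2 ms offset between the two panned vocal copies

def summed_amplitude(freq):
    """Peak amplitude of sin(2*pi*f*t) + sin(2*pi*f*(t - DELAY)),
    i.e. what happens where the two copies sum. Two equal sines offset
    by phase 2*pi*f*DELAY sum to amplitude 2*|cos(pi*f*DELAY)|."""
    return 2 * abs(math.cos(math.pi * freq * DELAY))

null_freq = 1 / (2 * DELAY)                    # first comb-filter null
print(round(null_freq))                        # ~417 Hz
print(round(summed_amplitude(null_freq), 3))   # 0.0 -> full cancellation
print(round(summed_amplitude(1 / DELAY), 3))   # 2.0 -> full reinforcement
```

The nulls then repeat every 1/DELAY (roughly 833 Hz) up the spectrum, which is exactly the kind of comb filtering an engineer is, or isn't, exploiting on purpose.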
Now let's say we apply a generalized crossfeed circuit that blindly takes the left data and puts it "in the right speaker" and the right data "in the left speaker". This will certainly create a large amount of undesirable phase cancellation. What was a specific enhancement for the vocal is now generalized and, in a sense, duplicated on the other side of the field, effectively blurring the whole effect, and who knows what else in specific situations. If listening pleasure is what you're after, some performances can sound 'better', or at least 'different'. But many of us enjoy clarity and accuracy, along with good old wholesome fun, and for them the circuit may not be the right tool.
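Here's a toy illustration of the "blind" part. This is deliberately the crudest possible crossfeed — no interaural delay, no low-pass on the crossfed signal, which any real circuit would have — just to show how mixing the channels turns the intentional L/R delay trick from the example above into cancellation inside a single channel:

```python
import math

FS = 44100
DELAY_SAMPS = 53                       # ~1.2 ms at 44.1 kHz
NULL_FREQ = FS / (2 * DELAY_SAMPS)     # frequency whose half-period equals the delay

n = 2000
left  = [math.sin(2 * math.pi * NULL_FREQ * i / FS) for i in range(n)]
right = [0.0] * DELAY_SAMPS + left[:-DELAY_SAMPS]   # same vocal, 1.2 ms late

def naive_crossfeed(l_ch, r_ch, gain=0.5):
    # blindly sum an attenuated copy of the opposite channel into each side
    out_l = [l + gain * r for l, r in zip(l_ch, r_ch)]
    out_r = [r + gain * l for l, r in zip(l_ch, r_ch)]
    return out_l, out_r

xl, xr = naive_crossfeed(left, right)
peak_before = max(abs(s) for s in left[DELAY_SAMPS:])  # ~1.0
peak_after  = max(abs(s) for s in xl[DELAY_SAMPS:])    # ~0.5: in-channel cancellation
```

Before the circuit, each channel carried one clean copy of the vocal; after it, each channel carries both copies, so the comb filtering that used to happen only acoustically, between your two ears, is now baked into the signal itself.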
Actually, you can perform the test yourself if you're so inclined. Just take the output before and after the circuit and run a PAZ analysis on the waveform with your favorite spectrum analyzer, using a C-weighted curve.
Subjectively, I tend to hear more bass, but it tends to be a little "sloppy", and I hear a diminished sense of clarity and imaging; in turn I get a sort of warmth that lets me jack the volume up a bit. On some recordings, such as old Beatles records, the circuit is awesome and really helps; on other records (most records), it's simply makeup for poor equipment.
...my two cents on this one
Neil