Why do so many great albums sound so bad?
Jan 22, 2012 at 3:46 AM Post #106 of 156


Quote:
You can pretty much disconnect the fans if you are not continuously drawing 400 W (yes, that was the answer from the guy at Yamaha), which pretty much never happens in living rooms.
 
Did you measure in quasi-anechoic conditions? Could it be a room coupling issue?
 
 

 
I'll freely admit that I don't like the Yamaha's look at all - that's reason enough for me not to want one. I'm sure it sounds fine, though. Is it stable into 2-ohm loads?

It was in-room response. Although I have not done near-field measurements, I've gotten the same result in other rooms (including when I first auditioned them a year and a half ago), and in the current room both before and after treatment with bass traps. It is a broadband -6 dB drop of the woofer relative to the mid-bass/midrange/tweeter, with the amps level-matched by ear.

The chart below shows the in-room (treated) response with the Eico HF-12 amps level-matched to the GFA-555 by ear to balance the ~50 Hz mode with the ~67 Hz and ~110 Hz nulls (which, yes, are much deeper/higher without smoothing), and then the response with just the Adcom GFA-555 (matching levels at 1 kHz). The crossover point from the woofer to the mid-bass coupler is at 200 Hz, and I left the Eico's tone controls at neutral for this test (normally I like about -1 on the Eico's treble tone control, which brings everything above 3 kHz or so down approximately 2 dB). The in-room linearity of these speakers in the midrange is impressive. I'm not sure exactly how much of the top-octave treble drop-off is the mic (a new-model RS digital SPL meter) and how much is the room. The tweeter and midrange are the same as in these speakers, which have no trouble maintaining flat response in anechoic conditions. The woofer's roll-off is approximately what is expected of these speakers (specified as -3 dB at 27 Hz), although obviously the room response dominates from about 40 Hz to 125 Hz or so.
 

 
Here is my room for reference; dimensions are approximately (off the top of my head) 23' x 12' x 8', with the speakers approximately 2' off the rear wall and the left speaker approximately 2' off the side wall. The 23' dimension continues to the right of the photo, so the speakers are off-center in the room, although there is a wall dividing up part of what you can't see. Another bass trap tower (they're made of bales of cellulose blow-in insulation covered in felt) is located in the corner just to the left of the photo, with more bass traps along the bottom wall there. The bass traps helped tame the nulls in the bass considerably as well as reducing decay time, resulting in a much tighter and less fatiguing sound. Notice the improvement in the room over this.

 

 
Yes, I should do near-field measurements at the very least. I haven't had the time to set up a repeatable procedure for doing so yet. However, I am absolutely confident that it's not a problem specific to this room. The peaks and nulls in the bass response are specific to the room, of course, but not the broadband -6 dB drop.
 
I suspected it was a crossover issue, but after tearing one of them apart and finding the NPEs on the bass crossover measuring so close to nominal (there are some in-line), I'm not so sure. You can see the crossover schematics here (click "Renaissance 90" on the right), although the actual values in my crossover differ slightly.
 
Here's a shot of the bass crossover after I pulled it out, but before I started unsoldering the caps to check them (I had planned on replacing them, but the Erse NPEs I got were even more out of tolerance than the 20-year-old TI ones!).
 

 
 
 
Jan 22, 2012 at 4:22 AM Post #107 of 156
I'm halfway tempted to say that the Eico has a higher output impedance than the Adcom and thus causes a voltage drop in the upper frequencies, resulting in stronger bass overall compared to the Adcom alone.
But I suppose you have already excluded this conjecture, so really, I have no idea.
 
Jan 22, 2012 at 4:36 AM Post #108 of 156


Quote:
I'm halfway tempted to say that the Eico has a higher output impedance than the Adcom and thus causes a voltage drop in the upper frequencies, resulting in stronger bass overall compared to the Adcom alone.
But I suppose you have already excluded this conjecture, so really, I have no idea.


I level match the amps by hand using the gain controls on each.  Did the same thing with one of my Carver TFM-15CB amps before I got the Eicos.
 
It does have a higher output impedance - should be just under 1 ohm or so with the 8 ohm speaker taps.  Relative to the Adcom (ignoring the speaker impedance curve), the higher output impedance just means I have to turn the gain up a tiny bit higher to get the same level.
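For anyone following the output-impedance discussion, the interaction can be sanity-checked with the voltage-divider formula V = Vamp · Zsp / (Zsp + Zout). The impedance figures in the sketch below are illustrative round numbers, not measurements of these particular amps or speakers:

```python
import math

def divider_loss_db(z_speaker, z_out):
    """Level drop in dB at the speaker terminals relative to an ideal
    (zero-output-impedance) amp, from the voltage divider
    V = Vamp * Zsp / (Zsp + Zout)."""
    return 20 * math.log10(z_speaker / (z_speaker + z_out))

# Illustrative round numbers: a tube amp's 8-ohm tap around 1 ohm vs. a
# solid-state amp around 0.05 ohm, into a speaker whose impedance swings
# between 4 and 16 ohms across the band.
for z_sp in (4.0, 8.0, 16.0):
    print(f"Zsp = {z_sp:4.1f} ohm: "
          f"tube {divider_loss_db(z_sp, 1.0):6.2f} dB, "
          f"solid-state {divider_loss_db(z_sp, 0.05):6.2f} dB")
```

With a ~1 ohm source and a 4-16 ohm impedance swing, the level varies by only about 1.4 dB across the band (and by well under 0.1 dB for the low-impedance source), which fits the point above: a higher output impedance can tilt the response by a dB or two where the impedance curve swings, but can't by itself explain a broadband 6 dB drop.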
 
Jan 25, 2012 at 7:15 AM Post #112 of 156
It flies in the face of all logic to think that it would be able to remove distortion.
 
Jan 25, 2012 at 8:28 AM Post #113 of 156
Mathematically, it's quite possible to treat the clipped points as missing data and interpolate what happens between them. You'd thus have removed the distortion and replaced it with something else; the key point is how that something else sounds.
 
(Yes, it is still distortion, but it could sound better)
 
Jan 25, 2012 at 12:04 PM Post #114 of 156
I haven't tried it yet but the theory is fairly sound.
 
It obviously won't be able to reconstruct the exact data that's been clipped, because that is indeed gone forever, but there's no reason it can't do a decent job of estimating what should have been there. It won't be perfect, but there's no reason it can't be better than leaving the clipping in place. Of course, I have no idea how well this is actually implemented, but I do know it's not impossible to do what it claims to do.
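To make the interpolation idea concrete, here is a minimal sketch of the principle: treat each run of samples stuck at full scale as missing data and estimate it from the good samples on either side. This is only an illustration (a cubic Hermite fit matching level and slope at each end of the clipped run); commercial de-clippers use far more sophisticated estimators.

```python
def declip(samples, threshold=0.99):
    """Replace each run of clipped samples (|x| >= threshold) with a cubic
    Hermite estimate anchored to the last good sample before the run and
    the first good sample after it, matching value and slope at both ends."""
    out = list(samples)
    n = len(out)
    i = 0
    while i < n:
        if abs(out[i]) < threshold:
            i += 1
            continue
        j = i
        while j < n and abs(out[j]) >= threshold:
            j += 1
        if i == 0 or j >= n:
            i = j  # clipped at an edge: nothing to anchor to, skip
            continue
        p0, p1 = out[i - 1], out[j]                        # endpoint values
        m0 = out[i - 1] - out[i - 2] if i >= 2 else 0.0    # slope going in
        m1 = out[j + 1] - out[j] if j + 1 < n else 0.0     # slope coming out
        span = j - (i - 1)                                 # steps from i-1 to j
        for k in range(i, j):
            t = (k - (i - 1)) / span
            h00 = 2 * t**3 - 3 * t**2 + 1                  # Hermite basis
            h10 = t**3 - 2 * t**2 + t
            h01 = -2 * t**3 + 3 * t**2
            h11 = t**3 - t**2
            out[k] = h00 * p0 + h10 * m0 * span + h01 * p1 + h11 * m1 * span
        i = j
    return out
```

Note that with these anchors the estimate merely rounds off a flat top; a real de-clipper would typically also let the reconstructed peak exceed full scale and then reduce overall gain to fit.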
 
A lot of people have trouble understanding this sort of thing. Audio is already hard to describe because it's transient, and when trying to combine it with the math that underlies digital audio and DSPs, a lot of people just get lost, not because they're dumb, but because without some training or self-study they lack the vocabulary to either accurately describe their thoughts or understand a jargon-filled technical explanation. It's an issue of communication, not intelligence. I've started using visual analogies lately to describe this sort of thing. A picture is easier to talk about and describe. All of us who are sighted have a much larger area of our brain devoted to visual processing than auditory processing, which makes us inherently better at it without any special training or experience. When you add the fact that pictures are static and can very easily be studied in detail, it's no wonder that the visual is easier to describe than the sonic. You can literally just point to something and say, "See? It's there."
 
Just think of a DSP as some sort of fancy Photoshop filter or effect. Say you have a picture of yourself on vacation in front of some famous landmark. Your camera takes a picture, and after that you are never going to derive any more data about the actual event from just the file on the memory card. You can analyze it closely and discover non-obvious information, but without the addition of any more data about the same event you cannot, in a strict sense, make the picture any more accurate. Since you were actually there and have memories of what actually happened, as well as some basic knowledge of the errors a camera can commonly add, you can use that information to improve its accuracy. You could adjust the white balance to match your memory, and everyone knows that red-eye is an artifact of the camera, so removing it would make the picture closer to what actually occurred.
 
You can get a lot more advanced than that, though. What if you add another person to the picture? When done with enough skill, it will be impossible to detect by simply looking at that one picture. To detect the addition, a person would have to have some sort of additional knowledge, which could be anything from knowing where the second person was when you went on that trip to devising mathematical tests that detect the use of Photoshop's algorithms in the final product. Furthermore, without that knowledge no one would be able to tell which one was the original. Was the second person there in reality and removed via Photoshop, or was he not there and added with Photoshop? Someone can tell you that one of the pictures is real, but how would you know which one it is with only the two pictures as evidence?
 
This de-clipping DSP can work in the same way. The actual data was lost at the point of clipping, but a good approximation can be made. With a well-implemented DSP of this type, I could take a track with no clipping, increase the level to make it clip a little, and then process it with the de-clipping DSP. If I gave both the original and the clipped-then-de-clipped tracks to someone who had never heard them before, and simply told them that one was original and one was digitally processed, it is likely that they would not be able to tell which is which. They might be able to tell that they are different, but they won't know which one is correct without more information. It's just like the picture scenario above.
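For anyone wanting to try the experiment described above, the "increase the level to make it clip a little" step is just digital hard clipping, i.e. saturation at full scale. A minimal sketch:

```python
def hard_clip(samples, ceiling=1.0):
    """Digital hard clipping: saturate everything past +/- ceiling."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

def clipped_fraction(samples, ceiling=1.0):
    """Fraction of samples stuck at the ceiling; a crude severity measure."""
    return sum(1 for s in samples if abs(s) >= ceiling) / len(samples)
```

Scale a clean track up, run it through `hard_clip`, then through the de-clipper, and compare the result against the untouched original in a blind comparison.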
 
Things get messy in actual practice because we already have expectations or knowledge about what things should sound like, which can vary a lot according to what the music is. If the music is strictly acoustic and unamplified, then anything that sounds "processed" will stand out far more than in something like industrial rock or metal, where pretty much everything already has an effect on it anyway; so there is a spectrum of how severe the clipping can be before it can no longer be masked, and it varies by genre. Fortunately, the music that tends to be the most clipped also tends to be the easiest for small distortions to hide in. In general, for small and infrequent clipping, such a DSP could either be indistinguishable from the "lost" un-clipped version (if the differences between the interpolation and the original are too small to be audible) or noticeably different but good enough that you couldn't tell which one was which without prior knowledge (if the differences are large enough to be audible but not large enough for the interpolations to sound unnatural and out of place). More frequent and severe clipping would of course make the processed version more easily identifiable, but since you're stuck with the mastering that gets released for a particular album, the interpolation just has to sound better than the hard clipping to be useful, and that shouldn't be too hard.
 
When I get more free time I'm going to try this out and see how well it's actually implemented.
 
Also, I know the analogy isn't perfect for a lot of reasons, but the examples are more about the similarities of manipulating digital data, whether it's audio or video, and not about directly comparing Photoshopping another person into a picture to what happens when an audio signal clips. A direct comparison of clipping to photography would be overexposure.
 
Jan 25, 2012 at 12:18 PM Post #115 of 156
One listen to the album I love to hate, Bruce Springsteen's "Magic," makes me wonder: how do you remove all that distortion?
 
Maybe I'll try the de-clipper, but this mastering seems to have more problems than just plain old clipping.
 
Jan 25, 2012 at 2:47 PM Post #116 of 156
Well, it's never going to make a terrible track into a perfect one, but it can make it better.
 
Jan 27, 2012 at 12:18 AM Post #118 of 156


Quote:
An article that mentions some recent well-mastered recordings: http://dynamicrangeday.co.uk/award/


I really dug this album, might pull it out for another listen. 
Just curious: this article seems to be based solely on dynamics. Is that all that's involved in the mastering process, or is equalization also manipulated? Or does this vary depending on how badly the record company has been affected by the Loudness War?
 
EDIT: I mean, do the sound engineers focus more on dynamics than on equalization?
 
Jan 27, 2012 at 1:34 AM Post #119 of 156
Well, a mastering engineer is normally involved with setting the final EQ and dynamics of the mix as a whole. Another person might be responsible for mixing, and yet another for mic placement and hooking everything up. The producer might have input on the band's "sound". There are no hard rules. All these factors obviously contribute to the final sound, including musicianship, the quality of the band's gear... I could go on.
 
Jan 27, 2012 at 12:38 PM Post #120 of 156


Quote:
I really dug this album, might pull it out for another listen. 
Just curious: this article seems to be based solely on dynamics. Is that all that's involved in the mastering process, or is equalization also manipulated? Or does this vary depending on how badly the record company has been affected by the Loudness War?
 
EDIT: I mean, do the sound engineers focus more on dynamics than on equalization?



Mastering has a lot to do with making all the songs on an album sound similar.
Often a record is recorded over a period of a few days or weeks.
The finished product the producer hands over to the mastering engineer may have problems such as:
 - different songs mixed slightly differently, making the album sound a bit incoherent: more reverb, less reverb, more bass, less bass from song to song;
 - different songs mixed at different levels, so the mastering engineer will adjust levels from song to song so they all play at the same volume;
 - different songs compressed more or less than other songs.
The record company (or producer or artist) may also direct the mastering engineer to compress the whole album to make it as loud as possible.
The mastering engineer usually has the advantage of always mastering in the same room, i.e. same acoustics, same speakers.
The producer may work in different studios for different artists (or even different songs), so the producer may not know the acoustics of the studio he has mixed in the way the mastering engineer understands the sound of the room he always masters in.
 
Really the mastering engineer is a final set of ears listening to the album before it is transferred to CD (or other format).
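As a toy illustration of the level-matching task mentioned above: compute each song's average level and scale it to a common target. Real mastering uses perceptual loudness measures (the ITU-R BS.1770/LUFS family) rather than plain RMS, and involves far more than a single gain change; RMS is used here only to keep the sketch self-contained.

```python
import math

def rms(samples):
    """Root-mean-square level of a track (a crude stand-in for loudness)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def match_levels(tracks, target_rms=0.1):
    """Scale each track so its RMS hits target_rms, so the album
    plays at a consistent level from song to song."""
    return [[s * (target_rms / rms(t)) for s in t] for t in tracks]
```

A quieter song gets a larger gain and a hotter song a smaller one, while each track's waveform shape (and thus its own dynamics) is preserved.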
 
 
 
