Ambience recovery system for headphone listening based on K1000
Oct 14, 2005 at 4:13 AM Post #16 of 29
While I was looking into topics related to ambience, I borrowed a few books from our university library and was surprised to learn how strongly room acoustics shape our basic perception of sound and music. I also realized that ambience reproduction is one of the most challenging aspects of music playback. It is also shocking that the most basic concepts in textbooks about room acoustics and spatial perception are rarely mentioned in the audiophile press and on discussion boards. So I think it is worth sharing a few important things I have learned from these books.

The most basic thing to know about room acoustics is probably this: in an ordinary room the acoustic energy reaching our ears is more from reflected and diffracted sound than from direct sound, if the source is more than a few feet away. Here is a simple figure from a book that illustrates this phenomenon:

[Image: reflected sound.gif]


Figure A is a hypothetical source playing several sustained notes of fixed amplitude. Figure B shows the amplitude detected by a recording microphone in a room with moderate reverberation. Figure C is the amplitude measured in a more reverberant room. (Olson, Music, Physics and Engineering, 2nd Ed., Dover, New York, 1967)

It is obvious that a simple on-off sound actually shows a pattern of attack, sustain and decay in a real room. At the loudest moment, the energy reaching the ear has more contributions from reflected or diffracted sounds than from direct sound. We know there is more to a sound than just amplitude--there is a time domain as well. Different frequencies interact differently with the room, so a complex sound comprising multiple frequencies looks very different in its waveform after just a few reflections, and soon becomes unrecognizable. On top of that, the position of the ear (or microphone) relative to the source has a huge effect on what is detected. Even moving the microphone by a few inches can have drastic effects on frequency response and waveform shape. When a sound is made in a room and detected by a microphone a few feet away, the only predictable signal is the direct sound that travels at 1100 feet per second to the mike, and the initial waveform being recorded will closely resemble the original sound. Right after that, reflected sound starts to dominate and the waveform becomes very unpredictable.

It would appear that inside any room other than an anechoic chamber, the ear should hear nothing but chaos. However, we don't hear chaos in a real room. We can clearly identify people's voices and speech, and we can differentiate the sounds of different instruments, in basically any room. In fact, we rarely notice the acoustical properties of a room directly, unless there are obvious echoes (in a cave, for instance). How does the ear-brain system cope with the acoustic complexity caused by room reflections?

As discussed earlier, direct sound reaches the ear before the reflected sound, and the waveform of the direct sound is basically unadulterated. Early reflections (a few bounces) have waveforms which are distorted but still correlated with the direct sound, and the brain will automatically merge the direct sound and correlated early reflections into a single sound if the time difference is less than ~60 ms. This is called the precedence effect, or Haas effect. In fact, even if an echo within 60 ms is a few dB louder than the direct sound, the listener still hears a single sound, the direct sound only. If the reflected sound reaches the ear after 60 ms, it is perceived as an echo. When one shouts at a big, flat wall in an open space, one hears an echo only if one is far enough from the wall--we all know this from experience. The precedence effect says this distance has to be larger than ~30 feet for an echo to be heard. In a living room, there are still reflected sounds reaching the ear after 60 ms, but these sounds have been reflected and diffracted enough times that they become uncorrelated with the original sound. Their waveforms have become so distorted that they are no longer recognized by the brain as part of the original sound, and instead are perceived as diffuse reverberation. In a cave, however, the stones have flat, hard surfaces that reflect very efficiently with little distortion, so the ear keeps receiving correlated waveforms after 60 ms and interprets them as echoes.
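
The ~30-foot rule of thumb follows directly from the ~60 ms window and the speed of sound. A quick back-of-envelope sketch, using the numbers given in the text:

```python
# Minimum wall distance for an audible echo. Assumes the shout and
# the listener's ears are at roughly the same spot, so the reflected
# path is simply out to the wall and back.

SPEED_OF_SOUND_FT_S = 1100.0   # speed of sound figure used in the text
HAAS_WINDOW_S = 0.060          # ~60 ms precedence (Haas) window

def min_echo_distance_ft():
    # An echo is heard once the round-trip delay exceeds the window:
    # 2*d / c > t  =>  d > c*t / 2
    return SPEED_OF_SOUND_FT_S * HAAS_WINDOW_S / 2.0

print(min_echo_distance_ft())  # 33.0 ft, close to the ~30 ft quoted
```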

The term ambience really covers two kinds of reflected sound: early reflections (correlated) and reverberation (uncorrelated). Reverberation is late reflection that has become uncorrelated with the original sound. We all know that reverberation is important for music, since good orchestral halls all have reverberation times of around 2.2 seconds (reverberation time is defined as the time it takes reflected sound to decay by 60 dB, i.e. to 1/1000 of the amplitude of the direct sound). Early reflections turn out to be even more important for sound quality than reverberation. When the ear first hears the direct sound, the brain can determine its location by two major mechanisms: interaural delay and the head-related transfer function. Explanations of these two phenomena are easily found via Google, so I will skip them. However, the brain also relies on early reflections to further confirm the source's location and its spatial relation to room boundaries and objects. This readily explains why some close-miked studio recordings lack spatiality. When a monaural source recorded without ambience is placed to the left or right by simple panpotting, there is no early reflected sound to further convince the brain it is really there. The brain expects early reflections but finds none, which causes spatial confusion. It is important to realize that although early reflected sound within 60 ms can't be heard separately, it actually changes the perceived quality of the direct sound. When early reflected sound is heard, the direct sound is interpreted as being louder, clearer, warmer and more three-dimensional (all qualities desired by audiophiles). Bob Katz has an excellent chapter about ambience and audio quality in his book Mastering Audio, and I learned a lot from it.
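
The -60 dB figure in that definition is just the 1000-fold amplitude drop expressed on the decibel scale. A quick sketch to check the arithmetic (the decay rate below is an illustrative number, not taken from the text):

```python
import math

# -60 dB corresponds exactly to a 1000-fold drop in amplitude:
# 20 * log10(1000) = 60
print(20 * math.log10(1000))  # 60.0

# For a tail decaying at a roughly constant rate, RT60 is simply
# 60 dB divided by that rate. A hall decaying at ~27 dB/s would
# have roughly the 2.2 s reverberation time quoted above.
def rt60_seconds(decay_db_per_s):
    return 60.0 / decay_db_per_s

print(round(rt60_seconds(27.3), 1))
```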

I hope the preceding discussion is enough to convince everyone that it is important to capture the natural ambience in a recording (of acoustic music, at least). Ambience conveys clarity, warmth, dimensionality and a sense of realism. Looking back at the figure shown earlier, we can see that if ambience is missing, the attack, sustain and decay of instrumental sounds will change significantly. It is not possible to faithfully reproduce the timbre of an instrument if too much ambience is missing. In the next post I will try to explain why stereophonic recordings cannot capture sufficient ambience to simulate an actual concert experience. There are some inherent physical limitations associated with stereophonic playback, and I will also try to discuss why using headphones makes things better in some cases but worse in most cases. Stay tuned….
 
Oct 14, 2005 at 5:40 AM Post #17 of 29
Quote:

Originally Posted by Ferbose
(Olson, Music, Physics and Engineering, 2nd Ed., Dover, New York, 1967)


Anything by Harry F. Olson, longtime eminence grise of RCA, is worth a look, or a read, or a listen.
 
Oct 14, 2005 at 6:48 AM Post #18 of 29
Quote:

Originally Posted by wualta
Anything by Harry F. Olson, longtime eminence grise of RCA, is worth a look, or a read, or a listen.


He he...
I did not realize Mr. Olson is a true audio engineer guru.

I found this link about him:
http://www.ieee-virtual-museum.org/c...=1234715&lid=1

Wow, this guy has some credentials!
 
Oct 16, 2005 at 6:34 AM Post #19 of 29
Quote:

Originally Posted by Ferbose
I did not realize Mr. Olson is a true audio engineer guru.

Wow, this guy has some credentials!



He was an engineer's engineer. That little article doesn't even skim the surface.
 
Oct 16, 2005 at 6:53 AM Post #20 of 29
Should the type of speakers matter? Say, mediocre computer speakers?

Do you think you could try it out?


Thanks.. this thread has gotten me intrigued by the K1000s yet again.
 
Oct 17, 2005 at 7:06 PM Post #22 of 29
Quote:

Originally Posted by akwok
Should the type of speakers matter? Say, mediocre computer speakers?

Do you think you could try it out?


Thanks.. this thread has gotten me intrigued by the K1000s yet again.



I think it does.
Since I only have one pair of speakers to experiment with, I can't compare different speakers.
When I roll tubes for my speaker amp, I notice some tubes make the integration better than others. So I think timbre matching between the speakers and the K1000 matters.
 
Oct 29, 2005 at 6:10 AM Post #23 of 29
In this part I will discuss why the stereo recording technique can't capture sufficient ambience information, and some remedies for this.

Monophonic recording

Before discussing stereo recordings, let us first examine their older sibling, the mono recording. We all know that mono recordings do not sound very natural, but why? Some may think it is because they can't create the left-right separation of stereo recordings. Actually, that's not the most fundamental reason. Think of a real classical concert. From a balcony seat in a large hall, a string quartet simply sounds like a mono source, with no L/R separation between the instruments. Does it sound bad? Not at all. Therefore the lack of L/R separation is not that objectionable. In fact, even from a fifth-row seat in front of an orchestra, we never hear pinpoint L/R separation of instruments. The precise L/R localization we hear on a good stereo system is actually an artifact, albeit a desirable one in the context of stereo playback. In a real concert, each instrument sounds much more diffuse, instead of occupying a precise position in space, because of hall ambience. The reason stereo recordings portray instruments with pinpoint precision is, I believe, the lack of ambience, and the reason mono recordings sound unnatural is also the lack of ambience. To understand why, we need to briefly mention a well-known psychoacoustic principle: the masking effect.
The masking effect says that a louder sound will cover up a softer sound in the same frequency range. The masking is even stronger when the two sounds come from the same direction. Sounds totally intuitive, doesn't it? In part II, I already mentioned that in a real concert hall we hear something like 70% ambience and 30% direct sound. Why doesn't the direct sound get masked? Well, the direct sound reaches the ear first, and the brain uses the Haas effect to integrate early reflections with it. In addition, ambience comes from many different directions while direct sound comes from distinct locations. In monophonic playback, ambience and direct sound come from the same direction, and the masking effect becomes very severe. To avoid masking the direct sound, engineers are forced to capture less ambience in the first place. The lack of ambience in mono recordings explains why they sound dry and unnatural.


Stereophonic Recording

Progressing from mono to stereo, ambience can be handled much better. If the sound source is in the right channel, its reflections and reverberation can be recorded in the left channel. At a later point in time, the reflected sound can appear in the right channel again, due to side-wall reflections. The masking effect becomes much less severe than in mono recordings, and this is why stereo sounds much more natural and replaced mono as soon as stereo technology reached the mass market. However, ambience in stereo playback obviously comes from only two directions, instead of a 360-degree sphere around the listener as in a real concert. Stereo recordings contain something like 70% direct sound and 30% ambience, which means they still contain much less ambience information than what a human ear would receive at a real concert. It is generally thought that minimalist recording techniques using coincident stereo microphones capture dimensionality best. However, even with minimal miking, the microphones have to be very close to the stage. If the microphone were placed where the audience sits, it would pick up too much reverberation and sound blurred. From the preceding discussion it is obvious that stereo recording is not perfect, nor was it ever meant to be. How can we do better?


Beyond two channels

The fact that stereo recording cannot reproduce the spatiality of an actual acoustic performance has long been known. However, there is no simple solution, and hence the market is still dominated by stereo recordings. One apparent solution is to add more channels to recording and playback. The biggest problem with multi-channel music is the uncertainty in the playback environment. The cost and complexity of setting up a top-notch multi-channel system at home are formidable. Multiple speakers have to be placed very precisely in the room to attain optimal surround effects, which is not very practical for most people. In any case, no multichannel music format has won the heart and soul of the average audiophile, not to mention the average consumer.

Since stereo recordings have dominated the market for decades, and most people's music collections are largely in stereo, it would be great if some of the lost ambience could be recovered during stereo playback. This is actually possible, as seen in products such as Dynaquad or Dolby Pro Logic. In these cases, ambience can be either recovered or simulated. For example, if the right channel signal is subtracted from the left channel, or vice versa, direct sound tends to cancel out and ambience remains. This ambience signal can be played through rear channels to enhance ambience during stereo playback. To simulate ambience, a right-channel sound can be added to the left channel with a delay, then mixed back to the right with a second delay, and so forth, which simulates a sound bouncing between walls. Of course, what is done in, for instance, Dolby Pro Logic II is more complicated than what I described, but the basic idea is there. When done properly, ambience enhancement can work extremely well; Dolby Labs calls it magic surround. Most audiophiles probably object to such approaches, thinking them too artificial, and indeed Dynaquad and Pro Logic never became very popular. However, few audiophiles realize that most of their records have gone through ambience enhancement during mastering. Too often CDs are recorded in acoustically dead studios, and the missing reverberation is later generated artificially using reverb processors. The advantage of the Dynaquad idea is that the front channels are 100% unaltered and the signals sent to the rear channels are generated passively. Nothing is lost from the stereo signal; ambience is only added. Unfortunately, it is no longer available in this age of digital processing (Pro Logic) and multichannel music (DTS, SACD, DVD-A). I just got my Dynaco Quadaptor from Ebay and am pleased by the improvement it brings.
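
A minimal sketch of the two ideas just described: difference-signal ambience extraction and delay-based ambience simulation. Plain Python lists stand in for audio buffers, and the function names and parameters are illustrative, not taken from any actual Dynaquad or Pro Logic design:

```python
def extract_ambience(left, right):
    # L - R: sound panned identically to both channels (direct,
    # centered) cancels; decorrelated ambience survives and can
    # be sent to the rear speakers.
    return [l - r for l, r in zip(left, right)]

def bounce_sim(left, right, delay, gain):
    # One round of the "bouncing between walls" idea: the right
    # channel reappears in the left after one delay, then back in
    # the right after a second delay, a little quieter each time.
    out_l, out_r = list(left), list(right)
    for i in range(len(left)):
        if i >= delay:
            out_l[i] += gain * right[i - delay]
        if i >= 2 * delay:
            out_r[i] += gain * gain * right[i - 2 * delay]
    return out_l, out_r
```

Feeding `extract_ambience` a signal that is identical in both channels returns silence, while anything present in only one channel passes through, which is the essence of the passive Dynaquad trick.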


Headphones

If stereophonic recordings have lackluster ambience performance, what happens when they are played through headphones? They get even worse. Humans determine the direction of a sound by two major mechanisms: interaural delay and the head-related transfer function (HRTF). Interaural delay means a sound from the right reaches the right ear first and the left ear a bit later. HRTF means a sound interacts with, and gets modified by, the shoulders, the head, the torso and the outer ear before reaching the eardrum. Interaction with the outer ear is the biggest factor in HRTF. People have very different ear shapes (look at those around you), and hence very different HRTFs. When we hear actual music, we use these two mechanisms for spatial localization. Stereophonic recordings are meant to be played through stereo speakers, and when we listen to speakers these two mechanisms also contribute to spatial hearing. Stereophonic recordings played through headphones cannot convey proper spatiality because these two mechanisms no longer operate. The right-ear signal does not reach the left ear a bit later, or vice versa, so there is no interaural delay. Also, the sound goes directly from the driver to the eardrum, with little interaction with the outer ear and the body, so the natural HRTF is abolished.

To overcome the shortcomings of listening to stereo recordings on headphones, manufacturers have implemented various tricks. To simulate interaural delay, some headphone amps have crossfeed circuits. Most headphones are also equalized according to some kind of averaged HRTF to make them sound more natural with stereophonic recordings (inevitably intended for speakers). No good headphone has a flat frequency response above 2 kHz, because its frequency response must compensate for the HRTF factor. Since people have very different HRTFs, it is no surprise that headphone preferences vary greatly from person to person. Some headphones, like the R10, even use angled drivers to increase outer-ear interaction. The K1000 is the only headphone that allows the aforementioned mechanisms to operate normally on the head. It has natural crossfeed and outer-ear interaction, and indeed conveys spatiality better than any other headphone I have heard. This is not to say that the K1000 is the best headphone out there, because there are other important things in audio besides spatiality.
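
The basic crossfeed idea can be sketched in a few lines: each channel receives a delayed, attenuated copy of the opposite one. This is a toy digital version with assumed numbers (the delay and gain are illustrative; real crossfeed circuits also filter the crossfed signal rather than mixing it in full-band):

```python
def crossfeed(left, right, delay=13, gain=0.5):
    # At 44.1 kHz, 13 samples is roughly 0.3 ms, on the order of
    # real interaural delays (which reach ~0.6-0.7 ms for sounds
    # fully to one side); gain crudely models head shadowing.
    out_l, out_r = list(left), list(right)
    for i in range(delay, len(left)):
        out_l[i] += gain * right[i - delay]
        out_r[i] += gain * left[i - delay]
    return out_l, out_r
```

An impulse in one channel then shows up in the other ear slightly later and quieter, restoring a crude interaural delay that hard-panned stereo material lacks on headphones.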

Binaural Recording

Let us face it: outside the realm of head-fi, not many audiophiles consider headphone listening a worthy route to audio nirvana. Headphone-related equipment is treated as accessories in hi-fi magazines. The idea of a headphone amplifier is foreign to the average consumer. For most people, speakers are the natural way to listen to music, not headphones. This is totally understandable, because stereophonic recordings are produced to sound natural on speakers. It is in fact possible to create two-channel recordings specially suited for headphones: binaural recordings. Binaural recording has been around for a long time, and you can read about it in many books on musical acoustics. Unfortunately, it is basically nonexistent on the market, which is truly a sad thing for head-fi enthusiasts. In its purest form, a binaural recording is made using a dummy head with omnidirectional microphones in its fake ears. On playback, it should be heard with in-ear monitors with a flat frequency response. While stereophonic recordings sound unnatural through headphones, binaural recordings sound unnatural through stereo speakers, with the latter case being more severe. This explains why binaural recording never met any commercial success. Modern binaural recordings are made to be compatible with speakers and real-world headphones (no one has in-ear monitors with a flat frequency response).

Binaural recordings have a very spacious and enveloping feel to them, although the instruments still sound very close to the head. In my limited experience, binaural recordings create the surround-sound effect even better than multiple speakers. Every headphone enthusiast should buy at least one binaural recording as a treat for his ears. I wish someday there will be a larger demand for binaural recordings. Binaural recording completely eliminates the in-your-head feeling of headphone listening and gives instruments back their natural timbre. It is a known fact that binaural recording is the only technique that can properly capture the actual ambience of a room. Acousticians have used binaural recordings to record and evaluate the acoustics of different concert halls. In addition, for reasons I don't understand, bass is also much stronger in binaural recordings. However, it is very difficult to create binaural recordings from studio-recorded and edited material, which explains why so few binaural recordings are commercially available.

Conclusion:

The take-home message is: stereophonic systems have inherent flaws in ambience reproduction, but the flaws are much less disruptive than in monophonic systems. There is no simple remedy, because going multichannel increases complexity and cost, and standardization becomes difficult. These flaws are exacerbated in headphone playback, and again there is no simple remedy. Binaural recordings, on the other hand, sound unbelievable on headphones, creating very natural ambience, but not as good on speakers. Only binaural recording, not stereophonic recording, can capture the actual ambience of a room.
 
Oct 29, 2005 at 4:11 PM Post #24 of 29
Thanks for the tip, Ferbose. I tried it and found that my experience mostly agrees with yours. In my setup I use a set of active nearfield studio monitors (the JBL LSR-25P), a matching LSR12P subwoofer, and my K-1000 driven by a B&K amp through two Rane TF-4 step-up transformers. The monitors sit on my desk on each side of my computer monitor, about 3 ft away from me.
My "preamp" is a Mackie Big Knob- http://www.mackie.com//products/bigk...OOMED_main.jpg
It's a project studio controller, with the ability to switch three different speaker sets on and off. I have my K-1000 on "C", my LSR25P on "B" and my sub on "A". I can also trim the output levels right on the Big Knob. Here are my impressions:
With the monitors pointed directly at me I get smearing of the upper midrange and treble and some "flanging" (a comb-filtering effect) in the mid-midrange. When I trim the monitors down in level until the effect disappears, the additional ambience is also mostly gone. When I cover up the tweeters (2300 Hz crossover point) and bring the level back up, things improve greatly. The detail is not as smeared and the ambience gets much bigger and more realistic, though on the dark side. My optimal setting is with the monitors pointed outwards so I don't hear any direct signal from them (they are turned maybe 140 degrees apart, with their side walls facing me). In this position the ambience is balanced and unobtrusive, with no weird interactions between the direct signal from the monitors and the K-1000. The low mids and upper bass are just gorgeous and full.
The big bonus is that the transition from the K-1000 to the sub is very smooth, which is not the case when the monitors are off. Without the monitors the sub always seems detached. I have it dialed in to blend smoothly with the monitors, and when I use K-1000 with the sub I always want to increase its level. Now it just sounds right.

I really like the reproduction this way. The big advantage of listening without disturbing others is gone, so the use is limited, but there are still plenty of times when using the ambiance speakers is perfectly OK.
Thanks for the tip! For me this is the final piece of the puzzle of how to incorporate my K-1000 into music production and mixing/mastering situations.
When I tried it before, I tended to end up with mixes that were too "thick" in the low mids and had too much ambience. This setup should change that.
 
Oct 31, 2005 at 7:57 AM Post #25 of 29
Quote:

Originally Posted by Thuneau
The big bonus is that the transition from the K-1000 to the sub is very smooth, which is not the case when the monitors are off. Without the monitors the sub always seems detached. I have it dialed in to blend smoothly with the monitors, and when I use K-1000 with the sub I always want to increase its level. Now it just sounds right.


That describes my experience as well.

As for the integration between the monitors and the K1000, I think several factors are at play. First is monitor distance, which directly affects the length of the delay. In my case it is about 5-6 ft, or 5 ms. Timbre matching between the K1000 and the speakers is also important, but it is hard to predict what kind of speakers would match the K1000 (probably not computer speakers). Using tube amps allows fine-tuning the timbre by tube-rolling, which I took big advantage of. I like the K1000 with tubes better anyway.
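
The distance-to-delay arithmetic behind the "5-6 ft, or 5 ms" figure is simple, using the 1100 ft/s speed of sound quoted earlier in the thread:

```python
SPEED_OF_SOUND_FT_S = 1100.0

def delay_ms(distance_ft):
    # Time for the speaker's sound to cover the extra distance
    # to the ear, relative to the K1000 driver at the head.
    return 1000.0 * distance_ft / SPEED_OF_SOUND_FT_S

print(delay_ms(5.5))  # 5.0 ms for a monitor 5.5 ft away
```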

I have just added two rear channels to the system to further enhance ambience. The rear speakers play the ambience signal derived by a Dynaco QD-1 series II. I must say I am pleasantly surprised by what this $20 passive device (off Ebay) can do. I recently got a binaural recording (Pasadena Symphony under Jorge Meister), probably the only commercial binaural CD of a major orchestra.

Without the Dynaquad, I still don't get as much ambience as with binaural. With the Dynaquad (K1000/two front speakers/two surround speakers) I seem to be getting more ambience than binaural! I need to do some more tuning and careful comparison here.

I will post more details about incorporating Dynaquad and my binaural experience ASAP. Stay tuned.....
 
Nov 6, 2005 at 7:52 AM Post #26 of 29
Here is my current system configuration:
Transport: Sony DVP-NS900V SACD/DVD player
DAC: Benchmark DAC1
On head: DAC1 XLR out->RCA adapter->Cayin HA-1A->K1000
Front: DAC1 RCA out->Aria MiniPL (basically EL84 version of Sophia Electric Baby) ->a pair of Athena S2/P2 (monitor/active subwoofer)
Rear: DAC1 headphone out->Jolida JD301A->Dynaco QD-1 series II-> a pair of Athena S.5
DAC1 is used in pre-amp mode to adjust the volume of all channels simultaneously.

The Cayin HA-1A is a single-ended tube headphone amp using a 12AX7, a 12AU7 and EL84*2. The Aria tube amp is a push-pull integrated amp using 2C51*2 and EL84*4. The Jolida JD301A is a hybrid integrated amp using 12AX7*2 and LM1875 ICs. The Dynaco QD-1 series II was the last version of the "quadaptor" on the market. It is a passive device that extracts ambience information from two-channel music. The extracted ambience is played through rear speakers pointed toward the side walls. The front speakers play ~5 dB lower than the K1000, and the rear speakers ~3 dB lower than the front speakers.
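
For reference, those level offsets in linear amplitude terms (a standard dB-to-ratio conversion, nothing specific to this gear):

```python
def db_to_gain(db):
    # Amplitude ratio corresponding to a level change in dB.
    return 10.0 ** (db / 20.0)

front = db_to_gain(-5.0)           # fronts ~5 dB below the K1000
rear = front * db_to_gain(-3.0)    # rears another ~3 dB down, -8 dB total

print(round(front, 3), round(rear, 3))  # 0.562 0.398
```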

Let us first examine what happens when front speakers are added.


At first glance, this configuration makes no sense. I thought the front speakers would surely contaminate and muddy the K1000's clean output, and integration would be a problem. To my surprise, adding the front speakers actually makes the K1000 sound clearer, and the integration is seamless after some tube-rolling. Several people who listened to it all think the front speakers completely disappear behind the K1000. This is a good demonstration of the Haas (precedence) effect mentioned in post #16. The speaker sound arrives about 5 ms later than the K1000 sound, due to the distance, and the brain seems to integrate them automatically. I tend to believe the front speaker outputs simulate early reflections. It is known that early reflections change the perceived quality of the direct sound, adding clarity, warmth and dimensionality--this is exactly what happens. In addition, the attack, sustain and decay of instruments are more natural. The timbre of acoustic instruments is more faithful than on any other audio system I have heard--headphone or speaker. People who have heard the K1000 probably notice that it sounds kind of hollow for pop music (too much stereo separation). After adding the front speakers the sound is more enveloping and fills in the hollows. The subwoofers also nicely compensate for the K1000's slight lack of bass. Without the front monitors playing, the integration of the sub and the K1000 is quite poor.

Now add the rear ambience speakers.

I have to say that I am very impressed by the Dynaquad, a $20 black box off Ebay. For ordinary stereo speaker listening it nicely brings back some of the missing ambience of stereophonic recordings (discussed in post #23). I think its efficacy lies in the fact that the front channels are totally unaltered. There is no limitation on the two front channels, so I can use all-tube amps. It extracts ambience passively, without adding any digital artifacts. It is a shame that it has long been discontinued.

In the K1000/speakers setup, when the rear speakers play the ambience signal extracted by the QD-1, the quality of instrumental sounds is little affected. The more obvious improvement is in spatiality. The sound is more enveloping, and there seems to be a good amount of sound coming from the back and the sides, something always missing in two-channel speaker or headphone setups, except with binaural recordings. As a result, the decay of instruments is more believable and the timbre even more natural. I should emphasize that to get surround effects, one can't simply send the two-channel signal to the rear speakers without some surround processing. When front and rear speakers play the same signals, there will be phase cancellation problems. I point the rear speakers toward the side walls to increase delay and reflections. I also removed the bass driver grilles but kept the tweeter grilles on the rear speakers. This attenuates the treble, in the hope of simulating the attenuation of treble as sound travels farther through air. It is well known to mastering engineers that attenuating extreme treble by EQ imparts a feeling of distance.

After hearing the K1000 plus front and rear speakers, going back to the K1000 alone is quite tough. The natural ambience is largely gone with the K1000 alone, especially from the back. My musician friend offered an analogy: it is as if someone tore out the entire rear half of the concert hall and placed the remaining half against a cliff. Where is the ambience from the back? With front but no rear speakers accompanying the K1000, the situation is not as bad--as if there were a huge, absorbent curtain behind the seat. Of course, the K1000 alone is by no means poor audio reproduction. At a recent big meet, I did not find any setup better than my Cayin+K1000 for classical music (the R10 and Omega II systems are in the same league). Spatiality is actually the strongest merit of the K1000 against other headphones. But adding both front and rear speakers just makes it that much better, and not just for classical music. For example, studio pop recordings that sound too dead now get a pleasant dose of reverberation.


Does it really sound that natural?

Well, anyone can easily imagine how natural his own system sounds after some tweaking. I therefore decided to put my new K1000 surround system through some grueling tests. How about a side-by-side comparison with a $15,000 violin or a $6,000 viola? In the next part I will describe some of the tests I conducted and the satisfactory results I got.
 
Dec 26, 2005 at 12:28 AM Post #27 of 29
To see how good the K1000+speakers system really is, I have decided to put it through a series of challenges.


Challenge 1: against an actual concert experience

This challenge was performed when my system consisted of the K1000 and front speakers, but no rear speakers. The Emerson Quartet came to my school to perform Mozart's "Dissonance" quartet (K465), a piece I happened to own on a DG recording.

[Image: 5076109.JPG]


It represented a good chance to compare my system to a real concert, and my focus was the sound of the Stradivari violin played by Eugene Drucker, the first violinist.

Before the concert, I listened to the quartet on my system, and at the concert I first sat in the first balcony row reserved for free student tickets. I quickly realized that the sound on the balcony was pretty bad. Before I had the K1000+speaker system, whenever I went to live concerts, no matter where I sat, I always felt that the live concert sounded better than my stereo, but not any more. The balcony sound was blurred, overly soft and lacking in detail, although the Strad had a sweet tone. In the second half, I sneaked down to an empty seat in the central fourth row, an optimal sonic spot. At the fourth row, the sound was much better, much more intimate, and I rushed home ASAP after the concert to listen to my system.

At home, listening to the K1000 plus front speakers, I felt that the Strad sounded fuller, and the viola sounded lighter and airier. The cello had less weight and vibrancy than the real thing, though. At home, I could hear more details, but the sound was not totally transparent, probably due to the tubes adding some warmth. At home, instruments sounded more airy but less vibrant. So overall it was a tie between my home system and the fourth-row seat, which was nothing short of a shock to me. The balcony seat was significantly worse than either. I also listened to the K1000 without speakers, and the sound degraded in many ways. It was too thin, lacking timbral richness as well as the natural attack and decay. Ambience and reverberation were diminished, resulting in a flat, 2D sound and a reduced feeling of presence.

If I were to purchase a Strad, I would actually prefer the one I heard at home over the one I heard in concert. Of course it was the same Strad, but it just sounded too thin in the hall. In this particular challenge, I felt my system honestly sounded as good as the real thing. How is this possible? Well, the concert was held in an auditorium at our school with suboptimal acoustics. The hall is fan-shaped, which reduces early sidewall reflections reaching the audience, and the ceiling is decorated with hanging metal pieces that reflect sound, another recipe for poor acoustics. When I went to other concerts on campus in a smaller auditorium with excellent acoustics, my home system was no match for its rich and full sound. My friend Andrew and I both felt the smaller auditorium has better acoustics than several major halls we have been to, including Jones Hall (Houston), Dorothy Chandler Pavilion (LA), and CKS Memorial Hall (Taipei).

In conclusion, the K1000 plus front-speaker system sounds as good as a good seat in a concert venue with suboptimal acoustics, which is not bad at all.


Challenge 2: vis-à-vis with real instruments

Since I claim that my system sounds really natural and accurate, what better test is there than to play some real instruments right next to it? I have to thank my friend Andrew for bringing his $15,000 violin and $6,000 viola to my apartment. His violin is a Stradivari replica made in the 1920s by a semi-famous maker. Here is a picture of him playing the violin while listening to "K1000+2 speakers" at the same time.

K1000&violin.jpg


Do the recorded violin and the real violin sound the same? No. First of all, a violin played in a small apartment room is going to sound very different from one played in a much larger concert hall or studio. The actual instrument heard at 3 feet has total transparency and tons of detail; its lower register is more vibrant and its treble brighter, due to room acoustics and distance. Nevertheless, with the right CD playing, there can be good timbre matching between the recorded and the real violin. By timbre matching I mean the two blend into one and become indistinguishable. For example, in an orchestra, different violins playing simultaneously can sound like a single whole instead of individual instruments, because the timbres of violins resemble one another; a viola playing the same note would not blend in. If I turn off the front speakers and leave the K1000 playing alone, it can no longer timbre-match the real violin, which indicates how much the speakers contribute to the naturalness of the sound.

When I added rear speakers connected through the Dynaquad matrix, my friend brought his viola, made in the 1970s. Although the rear speakers made instruments sound even more natural, the result was basically the same: the recorded viola and the real viola didn’t sound identical, but it was possible to get timbre matching.
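The Dynaquad hookup is, at its heart, a passive difference matrix: the rear speakers are wired across the two stereo hot terminals, so they reproduce roughly the (L − R) difference signal, which is dominated by out-of-phase hall ambience rather than the centered direct sound. Here is a minimal numerical sketch of that idea; the function name and the `gain` factor are my own illustration, not actual Dynaquad circuit values:

```python
import numpy as np

def dynaquad_rears(left: np.ndarray, right: np.ndarray, gain: float = 0.5):
    """Derive rear 'ambience' feeds from a stereo pair.

    Models the passive Dynaquad/Hafler-style hookup, where the rear
    speakers see the difference between the two channels. Anything
    common to both channels (centered direct sound) cancels; the
    decorrelated reverberant content survives.
    """
    diff = left - right
    rear_left = gain * diff       # rear left gets  (L - R)
    rear_right = gain * -diff     # rear right gets (R - L)
    return rear_left, rear_right

# A fully correlated (mono) source cancels in the rears entirely.
mono = np.array([1.0, 0.5, 0.2])
rl, rr = dynaquad_rears(mono, mono)
```

Because the difference signal vanishes for identical channels, a dry studio mix with a hard-panned center produces almost nothing in the rears, while a concert recording with lots of uncorrelated reverberation lights them up, which is exactly the ambience-recovery behavior described above.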


Challenge 3: against binaural recording

As I explained earlier, binaural recording played on headphones can render an uncanny sense of space. Unfortunately, there are few commercial binaural recordings. I was told the only commercially available binaural recording of a major orchestra is this CD:

B00003Q02G.01._SCMZZZZZZZ_.jpg


It was released by Newport Classics (NCAU-10010), with the Pasadena Symphony playing Strauss’s Also sprach Zarathustra and Saint-Saëns’ Symphony No. 3. I have to say that it is probably not the greatest binaural recording, but the expansive soundstage is nothing like what you can find in a stereophonic recording. For comparison, I have Saint-Saëns’ symphony recorded under Ernest Ansermet (Decca 443-658-2), a typical recording from Decca’s golden age. When I listen to both recordings on the K501, the binaural recording sounds much more realistic: the sound is more spacious and enveloping, and the bass is several times stronger. In a binaural recording, instruments sound as if there is a lot of acoustic space around them, which is much more realistic. Some would argue that IEMs are the correct way to listen to binaural recordings, but since this Newport CD was mastered on the HD580, the K501 should be just fine. When I play the stereo recording on the K1000 plus front and rear speakers, the sense of surround is just as tangible as binaural. Although the K1000 has better bass than the K501, and I have subwoofers supporting the K1000, the binaural recording still sounds a lot more bassy on the K501. Nevertheless, my K1000+speakers system can recreate a sense of surround comparable to a binaural recording from ordinary stereo material. In effect, it turns my stereo CD collection into binaural program material, which is not bad at all.

My friend Andrew is an amateur composer who knows the sound of the orchestra extremely well, and he has exquisite hearing that is very sensitive to ambience in music playback. In fact, he is so sensitive that on most headphones, including the K501, most of the orchestral instruments sound as if they are behind him. My K1000 impresses him a bit more: in a violin concerto he hears the orchestra in front and the soloist in the back, as if he were at the conductor’s podium. Despite the massive soundstage of the K1000, he feels there is a lack of ambience from the back, as if the back half of the hall were chopped off at the edge of a cliff. When I turn on the front speakers with the K1000, he says the violin moves onto his shoulder and he feels like the soloist, but the ambience is still missing in the back, as if there were a big curtain behind him. When I activate the rear speakers, forming six-channel headphone surround, he says the soloist moves behind him again, but the hall ambience is complete and feels natural. Going from the K501 to “K1000+4 speakers,” hall ambience and soundstage go from totally awkward to very natural for him, which is a huge improvement. Since I know how good the K501’s soundstage already is relative to other expensive headphones, I am convinced that “K1000+4 speakers” is really, really special.
 
Jan 12, 2006 at 1:59 AM Post #28 of 29
Ferbose, I applaud your continuing research. It's stuff like this that makes these forums pop. Shopping is fun, but actually getting the toys out, moving them around, and trying this and that is where some learning gets done.

You'll probably be interested to read about the early stereo/binaural work of the British engineer Alan Blumlein. This article makes a passing reference to the IMAX Solido system for 3D movies, which uses (or used) a binaural feed to a K1000-like arrangement (with far cheaper headphones, I'm sure) augmented by the output of speakers dotted all around the auditorium.

These two topics might give you something to chew on.
 
Jan 26, 2006 at 5:48 AM Post #29 of 29
Just a little update.

Now I use a PreSonus Central Station for master volume control, feeding three integrated amplifiers that drive the K1000, the front speakers (with active subs) and the rear speakers. The Central Station has a super-clean passive preamp section, and I now run the Benchmark's balanced output into it using Mogami XLR-to-TRS cables. This sounds better than using the Benchmark in preamp mode, as I did previously. It also allows me to listen to SACD in six-channel K1000 surround. With good SACDs, the sound is very natural and effortless, with a bit more breathing room in the treble than CDs can match.
 
