Angled Drivers in Closed vs Open headphones and effects on Soundstage?

Discussion in 'Sound Science' started by WyldeGooseChase, Oct 25, 2017.

  1. 71 dB
    You can't have spatial distortion on speakers. Acoustic cross-feed always takes care of that. Spatial distortion manifests itself on headphones without cross-feed.

    Yes. It is a very naive take on stereophonic sound, trying to exaggerate the effect of two audio channels.

    On real ping pong stereo nothing occupies the center. Half of the instruments are on the left and the other half on the right. On semi ping pong recordings some of the stuff is mixed to the center as mono.

    Yes, even mono sound can have depth if it contains proper spatial cues. Mono sound works well on headphones and gives a steady, centered sound image. I want to listen to people talking on YouTube, for example (say on an unboxing video), in mono as it should be. Often it isn't. You get weird phase shifts and whatnot. Very unstable sound. Well, I have a mono switch on my headphone adapter. YouTube content creators are better with the picture. Even the better ones make amateurish mistakes with the sound. So, sometimes it's beneficial to make the sound monophonic for headphones.
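    For what it's worth, what the mono switch does is trivial to mimic in software. A minimal sketch, assuming the audio is a float NumPy array of shape (samples, 2); the function name and layout are my own, not part of any adapter design:

    ```python
    import numpy as np

    def to_mono(stereo):
        """Collapse a stereo buffer to dual mono: both channels become (L + R) / 2.

        Assumes `stereo` is a float array of shape (num_samples, 2).
        """
        mid = stereo.mean(axis=1, keepdims=True)   # per-sample (L + R) / 2
        return np.repeat(mid, 2, axis=1)           # feed the same mid signal to both ears
    ```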

    Yes, and loudspeakers have long been the way to listen to music outside live performances. That has changed over the last few decades, and headphones have become very popular.

    What I have is much better than "just a boom box."

    I think demonstrations fail by having only one sound going around your head. You need a world of sounds to "anchor" things. Hearing compares colorations to figure out spatial aspects. If you have only one sound, what can you compare it to? Sounds coming from behind you have more filtered treble, because the pinna blocks high frequencies, whereas for sounds coming from in front of you the pinna amplifies them. However, the filtering effects are very complex and really hard to get right. Everyone has their own HRTF, so my opinion is not to try too hard on this. Make things sound pleasant and natural rather than ultrareal.

    It isn't a trick for me to watch miniature people on a TV screen. These are the compromises we have. I'd rather take them than not listen to headphones at all. I love headphone listening thanks to cross-feed.

    I have to admit I have never thought of scale as having relevance. On my TV screen Harrison Ford is only one foot tall, but I can still enjoy his performance. When I go to the movies, the actors are giants on the silver screen. Again, scale doesn't bother me. Why would it matter for sound, when I can control the loudness level? The band might be miniature, but they play loud!

    It enhances crappy ping pong recordings, but messes with the fidelity of well recorded music. I don't want the acoustics of my small living room to be convolved with the acoustics of a church on a great recording of organ music. With headphones I avoid that, and I can blast out the music without bothering other people with it, even in the middle of the night.

    It's not that simple. It depends on the recording. Rock music is produced very differently from classical music. Techno is so different from jazz. You have your preferences. I have mine. I also think multichannel is great (when done well).

    I think headphones are more capable than that.
     
    Whitigir and ev13wt like this.
  2. bigshot
    Sorry! I'm not going to read all of your reply if you tear things up into partial sentences to reply to. If you have a point to make and can make it clearly in a structured paragraph, I'm happy to take the time to read and reply, but I don't have time to sort through scraps of thoughts. I'll answer a couple of your replies that caught my eye, though.

    Yes, headphones have spatial distortion... because recorded music is designed to be listened to on speakers. Stereo soundstage involves not only placement of sound to the left, right and center, but also the distance of the sound from the listener. That is part and parcel of the way it is supposed to be presented. If you must use headphones, you can synthesize an element of that using cross feed, but that is pretty primitive. I've been told that the Smyth Realizer does a much better job. But even that is a synthesized ambience, and assuming it is capable of perfectly reproducing a natural acoustic, it still doesn't reproduce the kinesthetic feel of the bass you get from a good speaker system. Synthetic reality takes tweaking to get past the artificiality. The nice thing about room acoustics is that they are 100% real and natural from the get-go. If your system and room acoustics are good, you can close your eyes and hear the band in front of you as if they were in the room with you. That's the ultimate goal of sound reproduction. If you can't get all the way there, using synthesized ambiences is better than nothing. I do see the value of synthesized ambiences, though. If your speaker system and room acoustics are good, you can use synthesized ambiences in the form of DSPs to take the sound to the next level beyond just stereo soundstage. But that's a different subject.

    More on visualizing the band in front of you...

    Scale in filmmaking could possibly be compared to scale in a Pink Floyd album. A film will cut from a long shot to a closeup repeatedly. Sometimes a vocal on Dark Side of the Moon sounds like it's half a block away, sometimes it's whispering in your ear. Creating a natural scale for your soundstage doesn't matter as much for this sort of music.

    But with live concerts, jazz, classical and opera, a specific real acoustic of a concert hall is being presented. When I watch an opera on Blu-ray, I'm seeing it on a ten-foot screen from about 14 feet away. In a wide shot of the entire stage, the visual perspective perfectly matches the scale of how I would see the stage from the best seat in the hall. The soundstage presented by my speaker system fills the entire screen from top to bottom with a little bit extra to the right and left. It perfectly matches the scale of the image. In a shot of a full stage, a character can enter from the right and cross to the left, and my 5.1 system will track the voice perfectly, matching it to the exact position of the character on the screen. It's the same with concert videos, orchestral performances, and small jazz combos on video. Most live music is mixed to match the wide shot of the stage. Generally, if the camera cuts to a closeup on a singer or soloist, the perspective of the audio stays wide so the soundstage isn't popping all around with the cuts. This anchors the reality of the soundstage, because it's easier to get disoriented by shifting sound perspective than it is by rapidly changing visual perspective.

    It's important to note that regardless of what is on the screen, when I close my eyes, I can clearly visualize the aural perspective as if I were sitting in the best seat with the stage in natural scale in front of me. It's an added level of visceral, immersive reality, and achieving that isn't possible with a small TV and headphones. You can certainly get a lot of enjoyment from music that way, but it's not going to have the same sort of natural presence as with a large screen that matches the natural scale of a live concert. So going back to the analogy to Pink Floyd... since there is no natural anchored perspective being presented, scale and soundstage are much less important. It's good to have an anchored soundstage just as a baseline, I suppose, but it isn't mandatory. That's why they call Pink Floyd "headphone music".

    Hope this clears some things up for you.
     
    Last edited: Oct 27, 2017
  3. 71 dB
    Sorry. My point was simply that the soundstage you get with headphones isn't inside your head (unless the recording is bad and/or not cross-fed), nor is it what you get with loudspeakers. It is something in between. To me it is a kind of cloud around me. Cross-feed makes the sounds in this cloud more tangible, one of the benefits of cross-feed. Proper cross-feed also shapes the cloud to be more round, as it is merely left-right without it. That's how you get a sensation of depth. Good recordings give support for more depth. The more natural, "makes sense" spatial information there is in the recording, the better the soundstage you can get. With old King Crimson there is no hope; all I can do is remove the spatial distortion with cross-feed and enjoy the music. With well recorded organ music by Nikolaus Bruhns on a multichannel SACD, downmixed to Lt/Rt matrixed stereo and cross-fed, the result is pretty stunning. The thing is, I don't mind the difference. It means I have two very different experiences to choose from.

    Thanks! Interesting theories about scale. I know now that you are happy with the performance of your AV-system. That's good.
     
  4. WoodyLuvr
    @71 dB Very curious. Any link to your "passive DIY headphone adapters with cross-feed" design? Respects.
     
  5. 71 dB
    I have built three headphone adapters (plus many other kinds of crossfeeders). The first one was in 2012, when I discovered crossfeed. It was a simple Linkwitz-Cmoy crossfeeder with only one crossfeed level (-8.4 dB). It had just an on-off switch. Then I modified it to have two more crossfeed levels (-1.1 dB and -6.0 dB) by adding one switch and a few resistors. The beauty of the Linkwitz-Cmoy design is its flexibility with impedance levels, so it can be made to work at the low impedances between speaker terminals and headphones as well as at line level. That flexibility also means it's easy to have many crossfeed levels: the crossfed signal is always the same and the level of the direct signal is varied. Linkwitz-Cmoy has a treble boost section, and the amount of treble boost varies very nicely with the crossfeed level.

    The first headphone adapter took me into a whole new world of crossfed headphone listening. It taught me a lot, so I decided to design and build another one based on what I had learned. Headphone adapter number 2 has six crossfeed levels from -9.6 dB to -1.1 dB plus other features such as mono, blurred mono (almost mono), and treble crossfeed to soften the treble on harsh recordings. I can make even the craziest ping pong stereo recordings sound OK with this headphone adapter. I gave my first headphone adapter away to a friend.

    The need to find the optimal crossfeed level to get the best result is a "burden" with a crossfeeder, so I started to rethink Linkwitz-Cmoy. Linkwitz designed his crossfeeder back in the '70s to simulate loudspeaker listening at the typical listening angle of ±30 degrees. That gives you the ~250 µs delay between ipsilateral and contralateral sounds and also the 700-800 Hz cut-off frequency. The maximum ITD is about 640 µs, and I designed a "wide" crossfeeder working on that philosophy. Its cut-off frequency is about 300 Hz. This crossfeeder produces headphone-like wide sound, but without spatial distortion. It works surprisingly well at a constant -3 dB crossfeed level, which corresponds to the HRTF for sounds coming from the left or right at a 90° angle. The sound lacks "depth" compared to normal crossfeed, but it is somehow very natural and almost anything just works with it. I use both of these crossfeeders depending on my mood and the situation. The "wide" crossfeeder is easy: you can just forget it even exists and simply enjoy spatial-distortion-free music on headphones. The six-level Linkwitz-Cmoy, on the other hand, is for tweaking the sound, although one quickly learns to find the optimal level of crossfeed.
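    For a rough feel of the basic operation in software, here is a minimal digital sketch: an attenuated, low-pass-filtered copy of each channel bled into the other, where the filter's low-frequency group delay (≈ 1/(2π·fc), about 230 µs at 700 Hz) supplies the interaural time difference. The adapters above are passive analog circuits, so this is only an approximation; the treble boost section is simplified away, and the function name, defaults, and array layout are assumptions, not part of the hardware design.

    ```python
    import numpy as np
    from scipy.signal import butter, lfilter

    def crossfeed(stereo, fs, level_db=-8.4, cutoff_hz=700.0):
        """Mix a low-passed, attenuated copy of each channel into the opposite one.

        A digital approximation of the idea behind a passive Linkwitz/Cmoy-style
        crossfeeder, not that circuit itself. `stereo` is assumed to be a float
        array of shape (num_samples, 2); `level_db` is the crossfeed level.
        """
        b, a = butter(1, cutoff_hz, btype="low", fs=fs)   # 1st-order low-pass for the crossfed path
        gain = 10.0 ** (level_db / 20.0)                  # -8.4 dB -> ~0.38 linear
        left, right = stereo[:, 0], stereo[:, 1]
        out = np.empty_like(stereo)
        out[:, 0] = left + gain * lfilter(b, a, right)    # right bleeds into the left ear
        out[:, 1] = right + gain * lfilter(b, a, left)    # left bleeds into the right ear
        return out / (1.0 + gain)                         # crude overall level compensation
    ```

    Lowering cutoff_hz toward 300 Hz and fixing level_db at -3 dB gives something in the spirit of the "wide" crossfeeder described above.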

    https://www.head-fi.org/threads/to-...t-is-the-question.518925/page-8#post-13740318
     
    WoodyLuvr likes this.
  6. PETEBULL
    I think angled drivers contribute to imaging rather than soundstage. Soundstage depends more on treble quantity.
     
  7. 71 dB
    Treble quantity? Treble definitely has its effect on the perceived soundstage, but it's much more complex than just quantity. Above about 1600 Hz our hearing is not based on ITD anymore (the wavelengths are too short for that), but on ILD and the way the pinna shapes the sound according to the direction it comes from. Level difference is more important than absolute level. The shape of the spectrum has its effect on soundstage. For example, having a dip in the 2-6 kHz range gives an impression of a more distant sound.
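    As a small illustration of that last point, such a dip can be sketched as an ordinary peaking-EQ cut (the RBJ Audio EQ Cookbook biquad). The centre frequency, depth, and Q below are my own illustrative guesses, not values anyone quoted:

    ```python
    import numpy as np
    from scipy.signal import lfilter

    def presence_dip(x, fs, center_hz=3500.0, gain_db=-6.0, q=0.9):
        """Peaking-EQ cut (RBJ cookbook biquad) centred in the 2-6 kHz region.

        Pulling this region down tends to make a source sound more distant.
        Works on mono (1-D) or stereo (samples x 2) float arrays.
        """
        a_lin = 10.0 ** (gain_db / 40.0)
        w0 = 2.0 * np.pi * center_hz / fs
        alpha = np.sin(w0) / (2.0 * q)
        b = np.array([1.0 + alpha * a_lin, -2.0 * np.cos(w0), 1.0 - alpha * a_lin])
        a = np.array([1.0 + alpha / a_lin, -2.0 * np.cos(w0), 1.0 - alpha / a_lin])
        return lfilter(b / a[0], a / a[0], x, axis=0)
    ```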
     
    ev13wt likes this.
  8. Whitigir
    @71 dB thanks for the beautiful logic and science explanations about soundstage
     
  9. jgazal
    I have found the following logical and scientific explanations more beautiful:

     
    Last edited: Nov 22, 2017 at 5:16 AM
  10. ev13wt
    So both offer some possibility of recreating the perception of depth and width.

    With both I can enjoy music.

    - a lot of modern music is mixed for headphones.
     
  11. bigshot
    Modern pop music is probably mixed for earbuds.
     
