Head-Fi.org › Forums › Equipment Forums › Sound Science › Speaker vs. Headphone Soundstage / Positional cues / Imaging

Speaker vs. Headphone Soundstage / Positional cues / Imaging - Page 3

post #31 of 41
Quote:

Originally Posted by Robbo1802 View Post

 

I am not so sure about how sensitive our ears are to discrete changes in timing.  Yes, I could hear changes in the sound when I moved my speakers, although certainly not down to fractions of an inch.  However, I have mostly considered this more a response to changes in the reverberant field (most of my speakers were dipole or bipole radiators), as I could also perceive change when the angles were changed without any change in the path distances.

 

Great additions to the discussion, Bob. 

 

In your example I agree it would be difficult to determine whether the heard differences were due to timing, room reflections, changes in treble dispersion, or something else.

 

Your telephone ringer example is a superb illustration of how we rely on the subtleties of complex sound for our aural understanding of the world.

 

I feel confident that in an acoustically treated studio you could discern minute changes in speaker distance.  Consider, for example, stereo recordings made with a pair of omni mics placed a foot apart.  There is no appreciable frequency or sound-pressure difference between the mic signals, yet we hear a stereo image.  This is due to tiny timing differences.
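To give a feel for the scale of those timing differences, here is a minimal sketch; the mic spacing matches the one-foot example above, but the source position and function names are illustrative assumptions, not taken from any actual recording setup:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def arrival_time_difference(source, mic_a, mic_b):
    """Difference in arrival time (seconds) at mic_a vs mic_b.

    All arguments are 2-D (x, y) points in metres."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return (dist(source, mic_a) - dist(source, mic_b)) / SPEED_OF_SOUND

# Two omni mics 1 ft (0.3048 m) apart; source 2 m out and 1 m to the left.
mic_left, mic_right = (-0.1524, 0.0), (0.1524, 0.0)
source = (-1.0, 2.0)
dt = arrival_time_difference(source, mic_right, mic_left)
print(f"right mic lags by {dt * 1e6:.0f} microseconds")
```

With this geometry the lag works out to a few hundred microseconds, comfortably within the range the ear exploits for lateralization.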

 

Interestingly, piano is often recorded much, much closer than you describe.  Often stereo mics are placed at the "waist" of a grand, roughly 2/3 of the distance back from the keyboard, each within a couple of inches of the instrument.  The thinking is that the lows and highs are captured equally.  Much closer is also common: placing the mics under the lid, often inches from the strings, with the mounts attached to the piano's harp.

 

 

Quote:
Is it possible to get a convincing sound stage from an electronically produced sound source such as a game?  Is there sufficient complex information available to be convinced of what you perceive?

 

Absolutely.  Consider the sound stage of a well-engineered pop or rock studio recording.  The vocals and instruments are often recorded on individual tracks in heavily damped rooms.  The perceived soundstage (left-to-right imaging, room ambience, depth) is wholly artificial; it never existed in the real world.
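As a concrete illustration of how that artificial left-to-right image is built, here is a minimal sketch of a constant-power pan law, the standard mixing-desk tool for placing a dry mono track anywhere between the speakers (the function name and pan-range convention are my own, not from any particular console):

```python
import math

def constant_power_pan(pan):
    """pan in [-1.0, +1.0]: -1 is hard left, 0 is centre, +1 is hard right.

    Returns (left_gain, right_gain) with left^2 + right^2 == 1, so the
    perceived loudness stays constant as the source moves across the image."""
    theta = (pan + 1.0) * math.pi / 4.0  # map [-1, +1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)

left, right = constant_power_pan(0.0)  # centred source
# At centre both gains are cos(pi/4), about 0.707, and total power is 1.
```

Applying these two gains to a dry mono track, then layering an artificial reverb, is enough to conjure a "room" that was never there.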

post #32 of 41

Hi,


 

Quote:
Originally Posted by Wapiti View Post

Interestingly, piano is often recorded much, much closer than you describe.  Often stereo mics are placed at the "waist" of a grand, roughly 2/3 of the distance back from the keyboard, each within a couple of inches of the instrument.  The thinking is that the lows and highs are captured equally.  Much closer is also common: placing the mics under the lid, often inches from the strings, with the mounts attached to the piano's harp.

 

Yes, I was aware of the practice.  Worse, sometimes I swear they have just about closed the lid, much more closed than half stick.  I remember when I first heard the Keith Jarrett album The Köln Concert: I could hardly credit that it was a grand piano; it sounded halfway to a harpsichord (and we all know how Thomas Beecham described harpsichords), although I think the piano had been voiced to give that effect. 

 

The problem with grand pianos is that they too are dipole radiators.  I have spent quite some time listening to them while lying on the floor directly beneath the soundboard (for reasons I will not go into here), and it is quite clear that you will never get the right sound if you only mic the under-lid area. 

 

One of the most astonishing experiences I had with pianos was about 30 years ago.  I had gone to listen to a Keyless red Welte Upright (see the top of this page http://www.pianola.org/reproducing/reproducing_welte.cfm).  Nothing had prepared me for the astonishing impact that piano had, yet, as you can see from the drawing, it is a typical upright, with most of the sound originating from the back of the soundboard.  I might add that these cabinet players were intended to be used in large, often public, spaces.  Nevertheless, ever since I have had a great deal of respect for the Feurich pianos that formed the basis of the early Welte players.

 

Anyhow, this has gone off topic. 

 

I think it would be very interesting to consider a comparison between Stax Lambdas (say, 404s) and Sigmas based on the 404 drive units.  There is the obvious difference that the Sigmas' sound is altered by the pinna, but there is the secondary effect that the ears are more in the far field, so path differences between the outer and inner areas of the diaphragm will have considerably less effect.  Also, the possible interactions between high-order diaphragm modes and the pinna would be eliminated.  This could help produce a more stable and convincing soundstage. 

 

With the game comment I was concentrating more on whether synthesised sounds could be complex enough to produce a convincing soundstage effect.

 

Now for another bit of trivia: I had a friend who worked for Pioneer for a time, and he managed to get a Bodysonic chair.  Now THERE was something that could really rock your world: total immersion, extraordinary involvement in the sound, so involving that it just about wore you out (especially with Acca Dacca).  Okay, the sound wasn't very accurate or even that good, but the experience was something else.

 

Regards,

Bob

 

Hmmm, a much better DIY version of the Bodysonic... there's a thought.

 

 

post #33 of 41
Quote:
Originally Posted by Robbo1802 View Post

Yes, I was aware of the practice.  Worse, sometimes I swear they have just about closed the lid, much more closed than half stick.  I remember when I first heard the Keith Jarrett album The Köln Concert: I could hardly credit that it was a grand piano; it sounded halfway to a harpsichord (and we all know how Thomas Beecham described harpsichords), although I think the piano had been voiced to give that effect.

 


Indeed.  I have the original LP.

 

Even worse is the practice of placing mics on the harp, closing the lid, and then piling heavy mover's blankets on top. 

 

Quote:

 

The problem with grand pianos is that they too are dipole radiators.  I have spent quite some time listening to them while lying on the floor directly beneath the soundboard (for reasons I will not go into here), and it is quite clear that you will never get the right sound if you only mic the under-lid area. 

It is always fun to learn that others engage in such things.  Many of us fascinated by sound can relate.

 

Quote:

 

One of the most astonishing experiences I had with pianos was about 30 years ago.  I had gone to listen to a Keyless red Welte Upright (see the top of this page http://www.pianola.org/reproducing/reproducing_welte.cfm).  Nothing had prepared me for the astonishing impact that piano had, yet, as you can see from the drawing, it is a typical upright, with most of the sound originating from the back of the soundboard.  I might add that these cabinet players were intended to be used in large, often public, spaces.  Nevertheless, ever since I have had a great deal of respect for the Feurich pianos that formed the basis of the early Welte players.

Very cool!  I would love to hear one.

 

 

post #34 of 41

Human perception of a sound's location has a lot to do with sensory integration of visual cues, as our interpretation of the world is visually dominated. This is why when you watch the news on TV, without thinking about it you believe the sound to be coming from the reporter's mouth when really it is coming from your speakers.

If you can find a video on YouTube of a concert filmed from the front perspective without any cutting to close-ups and different angles, you may find that with your eyes closed the audio sounds like typical headphone soundstage, but with them open you'll perceive a greater sense of depth.

post #35 of 41

Very true.

 

It fascinates me how easy it is to separate the sound of a flute playing in the midst of a full orchestra tutti forte simply by looking at the flute.

post #36 of 41


Hi,

Quote:
Originally Posted by thrillhaus View Post

Human perception of a sound's location has a lot to do with sensory integration of visual cues, as our interpretation of the world is visually dominated. This is why when you watch the news on TV, without thinking about it you believe the sound to be coming from the reporter's mouth when really it is coming from your speakers. If you can find a video on YouTube of a concert filmed from the front perspective without any cutting to close-ups and different angles, you may find that with your eyes closed the audio sounds like typical headphone soundstage, but with them open you'll perceive a greater sense of depth.


Perceptual coherence, or perceptual cross-correlation: whole books have been written on that one.  The interesting thing is how easily and quickly the visual cues create a perception much more focused than the nebulous 'guessed' perception based on experience.  A secondary effect is that the person usually responds to the whole experience much better; perhaps removing the need to constantly 'guess' what is going on in the soundscape leads to a more relaxed and involved experience. <shrug>

 

Of course, psychoacoustics throws up much more disturbing conundrums regarding perception, including the relatively well-documented effects of lighting and mood on perceived sounds.  Preconceived perceptions are far more difficult to nail down; perhaps the most notorious for audiophiles is expressed by Beranek's Law, specifically:

 

 

"It has been remarked that if one selects his own components, builds his own enclosure, and is convinced that he has made a wise choice of design, then his own loudspeaker sounds better to him than does anyone else's loudspeaker. In this case, the frequency response of the loudspeaker seems to play only a minor part in forming a person's opinion."

—L. L. Beranek, Acoustics (McGraw-Hill, New York, 1954), p.208.

 

 


Regards,

Bob

 

post #37 of 41


 

Quote:
Originally Posted by Robbo1802 View Post

A secondary effect is that the person usually responds to the whole experience much better; perhaps removing the need to constantly 'guess' what is going on in the soundscape leads to a more relaxed and involved experience.

 

Which may partially explain why attending a live concert is so satisfying.

 

Nice post, Bob.

post #38 of 41
Quote:
Originally Posted by arnaud View Post
(...)

 

Practically speaking, the only route is probably to offer multiple banks of HRTFs; maybe some research is done to identify population trends. Already, in the other thread there were mentions of neat decomposition techniques to extrapolate HRTFs between azimuths / elevations (because you can't possibly measure them all). Similarly, with some gross dimensions of the ear lobe / head, you could come up with improved (yet generic) HRTFs. I'd see this thing going somewhere...

 

This is already being done: 

 

 

Quote:

(...)

Smyth Research is a small company based in Bangor, Northern Ireland. Their principals, Drs Stephen and Mike Smyth, are scientists with a very impressive track record in audio, being responsible for both the compression technology that allowed broadcasters to transmit stereo audio down standard telephone lines using analog modems (estimated to be in use by around 20,000 broadcasters worldwide) and the compression algorithms behind DTS.

They have come up with a new variant of HRTF that is personalised to the individual listener. The system captures a real listening environment, such as a studio control room, and delivers hyper-accurate virtualisation of this environment, including the room itself, the speakers, and the head response of the individual listener.

Small omnidirectional microphones are worn by the listener, as if they were earbud headphones, during the system setup phase, and test signals are played through the room's speaker system. The results are stored on a memory card that can be moved between systems.

Within the scope of this project, I am working with Smyth Research and Sonalksis to further develop their system to allow the room/speaker response to be separated from the individual head response. If we can achieve this, we can do two important things:

1. We can use the York Virtual Acoustics system to create virtual rooms and virtual speaker arrays with as many speakers as we want, to maximise the spatialisation accuracy of the system with respect to full 3D sound.

2. We can measure a large number of subjects and analyse their individual HRTF data to build a library of virtual ears, in the hope that we can then provide an end-user system where the listener can scan through a menu of ear shapes to find the one that most closely matches their own ear response. If we can achieve this, it can be embodied in a player plug-in or app for devices like the iPhone and iPod, thereby providing a route to market for content produced in 3D sound.

http://www.edmonds.org.uk/masite/practical7.htm
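The core operation behind any such virtualiser is convolving the source signal with a measured impulse response per ear. Here is a toy sketch of that step; the three-tap "HRIRs" are made-up placeholders standing in for real measured responses, and the function names are my own:

```python
def convolve(signal, ir):
    """Direct-form FIR convolution of a signal with an impulse response."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def binauralise(mono, hrir_left, hrir_right):
    """Render a mono signal to a (left, right) pair via per-ear impulse responses."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Made-up 3-tap responses: the right ear gets a quieter, delayed copy,
# loosely mimicking head shadowing for a source on the listener's left.
left_ch, right_ch = binauralise([1.0, 0.5], [1.0, 0.2, 0.0], [0.0, 0.6, 0.1])
```

Real systems use impulse responses thousands of taps long (and FFT-based convolution for speed), but the principle is exactly this.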

 


 

 

post #39 of 41

Jgazal: neat, you always dig up stuff ;). I don't know if this can lead to anything better than a good dummy-head HRTF, but it's worth investigating! For instance, I (may have) read that minute differences between the left and right ears are used for localization. If true, then it may be hard to get great results with some parametric HRTFs.

 

In any case, I can imagine that big game developers must be actively funding research work on this, no?
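The "extrapolate between azimuths" idea mentioned earlier in the thread can be sketched in its simplest form: linear interpolation between the two nearest measured responses. Real systems use far smarter decompositions; the data layout and function name here are purely hypothetical:

```python
def interpolate_hrir(measured, azimuth_deg):
    """measured: {azimuth_deg: impulse_response_samples}, all the same length.

    Returns an HRIR for an unmeasured azimuth inside the measured range by
    linearly interpolating between the nearest measured angles on either side."""
    below = max(a for a in measured if a <= azimuth_deg)
    above = min(a for a in measured if a >= azimuth_deg)
    if below == above:  # exact hit on a measured angle
        return list(measured[below])
    w = (azimuth_deg - below) / (above - below)
    return [(1 - w) * lo + w * hi
            for lo, hi in zip(measured[below], measured[above])]

# Toy table: two measured azimuths; ask for one in between.
table = {0: [1.0, 0.0], 30: [0.0, 1.0]}
midway = interpolate_hrir(table, 15)  # halfway between the two responses
```

Sample-by-sample interpolation like this smears the interaural delay, which is one reason the decomposition techniques (separating delay from spectral shape) mentioned above exist.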

 

post #40 of 41

 

I can imagine both 3D cinema and game developers would benefit from such an approach.
I for one would like to see the Smyth Research algorithm offered as software, but then you lose total idiosyncrasy (acquiring the HRTF in a particular room) and head-tracking (which might also help with localization, as do visual cues). Their algorithm would also be more vulnerable businesswise.
I wonder how many "ear models" a user would have to audition before finding the closest one… Maybe it is not feasible. It reminds me of the level of idiosyncrasy that we find in fingerprint or iris identification… 
post #41 of 41

Indeed, although individualized ear characteristics are probably much more critical to localization than fingerprints are to the sense of touch...
