Testing audiophile claims and myths
Dec 19, 2016 at 9:44 AM Post #6,541 of 17,336
Formats with more "extensive immersion" and "wider soundstage"....
Absolutely.... been there... done that.
 
Surround sound made an appearance once as "quadraphonic" - which was sort of popular for a while and then flopped.
Then it came back as "surround sound" - with 5.1 and 7.1 - which seem to have stuck.
Then it came back again as Atmos.....
 
In fact, if you like that sort of thing, and find positioning in the sound stage, and the apparent height or size of each instrument, important, then Atmos COULD BE a huge step forward. Atmos is OBJECT ORIENTED. This means that, at least in principle, rather than simply positioning each instrument in the sound stage by controlling how loud it is in each channel of the mix, you can actually specify a location for each. In the Atmos mastering mixer, you can literally point to each track/instrument, position it in 3D space on a virtual 3D screen, and then set the size it should occupy. (It actually shows a 3D representation of a room and, for each mixed "object", you get to position a red blob in 3-space and dial up a size for it.)
 
The decoder then reads that information, looks at the speakers you have in your particular system, and controls how much signal goes to each speaker to position the instrument correctly in the mix. In principle, this should be able to ensure that the sound stage is correct even if your speakers aren't in standard locations. At the very least, it can position each instrument separately in space.... or position a section, like the string section, in one area, and then single out the position of a soloist... and even move them around. The fact that its ability to do this includes information about the vertical position of each entity is really just an added detail.
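The gain-calculation half of that decoding step can be sketched in a few lines. This is a toy constant-power panner, not Dolby's actual renderer (which is proprietary); the speaker angles used in the comments are hypothetical, and a real object renderer also handles elevation, object size, and arbitrary layouts.

```python
import math

def object_gains(source_az, speaker_azs):
    """Toy object renderer: constant-power gains that place a source
    at azimuth source_az (degrees) between the two speakers of the
    listener's actual layout that bracket it. speaker_azs must be
    sorted ascending and span source_az."""
    for i in range(len(speaker_azs) - 1):
        lo, hi = speaker_azs[i], speaker_azs[i + 1]
        if lo <= source_az <= hi:
            f = (source_az - lo) / (hi - lo)       # 0 at the lo speaker, 1 at hi
            gains = [0.0] * len(speaker_azs)
            gains[i] = math.cos(f * math.pi / 2)   # constant power: g0^2 + g1^2 = 1
            gains[i + 1] = math.sin(f * math.pi / 2)
            return gains
    raise ValueError("source azimuth outside the speaker span")

# A source dead ahead of a standard +/-30 degree stereo pair lands
# equally in both speakers; hand the same object azimuth to a
# different layout and you get different gains - which is the whole
# point of carrying positions instead of pre-mixed channels.
```

Feeding the same object data to a layout like [-110, -30, 30, 110] routes it through whichever pair brackets it, which is roughly what "the decoder looks at the speakers you have" means.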
 
It is interesting that Atmos is being heavily promoted for home theater, but nobody is even talking about "Atmos encoded music discs". Obviously, just as there are audio-only Blu-Ray discs in Dolby TrueHD or the DTS equivalent, there COULD be Blu-Ray discs with just music - recorded in Dolby Atmos or DTS-X. However, for whatever reasons, the home theater and audio markets seem to have split quite completely.
 
As for your final comment..... some audiophiles seem to prefer to be "sitting in the middle of the orchestra", or "front row center".... rather than a few rows back with the orchestra clearly in front of them. (Personally, I prefer neither to be sitting in the center of the orchestra, nor in the front row of a movie theater, looking up at the screen.)
 
Quote:
   
5.1 has been the film standard for about 20 years and in effect provides a 360deg soundstage - how much bigger than that do you want it to be? Film is now moving on to the vertical plane as well, rather than just 360deg in the horizontal plane, with systems such as Dolby Atmos. But the fact remains that despite its availability, even the 360deg of 5.1 has not taken off.
 
When the change from mono to stereo occurred, the music industry evolved to take advantage of this new format, musicians and producers changed what they were doing and new genres evolved which relied on stereo. Despite various experimental albums over the last decade or so, the same thing hasn't happened with 5.1 though. No musicians or music producers have developed a genre, style or found any other way to take advantage of the 360deg soundstage which has engaged consumers. So far as music is concerned, consumers want standard stereo and nothing more.
 
We also have to consider that the soundstage with headphones is already far bigger/wider than the music is designed to be. Stereo speakers give us effectively ~90deg of width; headphones artificially increase that to effectively 180deg. So if you're talking about headphone use, the soundstage is already extreme, and you're asking for it to be even bigger while at the same time asking for it not to be "used to an extreme", which is a contradiction that makes no sense to me. To be honest, I don't really get this thing which some audiophiles seem to have for an unnaturally wide stereo image, or how they can equate it with higher, rather than lower, SQ.
 
G

 
Dec 19, 2016 at 10:34 AM Post #6,542 of 17,336
My problem is my inability to describe what it is I mean. I prefer it when I hear the music as if it is coming from around me and not from inside my head. I am not saying that makes SQ better as a matter of fact - it is just something I like. A format which allows some adjustment to suit personal taste would be great.
 
Another way to try to describe what I mean is the 360-degree videos that are appearing. If you are, say, watching a motorcycle, you can choose to look forwards, to either side, or behind.
 
Dec 19, 2016 at 12:42 PM Post #6,543 of 17,336
You're talking about what most people would call something like "synthesizing spatial content".
There are (and have been) several attempts at doing exactly that.... like SRS and "Dolby Headphone".
If you look, I think you'll find several plugins for various music players that do this sort of thing.
You might also check out something called Ambiophonics (a crosstalk-cancellation technique for presenting binaural content on stereo speakers) - not to be confused with Ambisonics, which is a full-sphere surround format.
 
(I seem to recall one that used a "wrapper" to load the commercial Dolby Headphone DLL inside foobar2000, giving you the ability to play surround sound content through stereo headphones and adjust both the levels and the apparent positions in space of the various channels.)
 
The problem seems to be simply that, while many of them produce pleasant effects that some people like, none of them has succeeded in becoming widely accepted.
Therefore, without wide acceptance and some sort of "standard", none of them individually lasts very long.
 
Quote:
  I am not saying it is not used, or asking that it be used to an extreme - just that I think more use could be made of it, so the soundstage is bigger than it is now.
 
Or, somehow, the user can alter it to suit, like bass and treble can be altered.

 
Dec 19, 2016 at 1:40 PM Post #6,544 of 17,336
@KeithEmo I've listened to many of these spatial-effects tricks and have often felt that they alter the timbre of musical instruments, which to me is a non-starter.
 
Dec 19, 2016 at 2:49 PM Post #6,545 of 17,336
Y'all just need a wave field synthesis rig:

 
 
Hopefully enough proliferation of VR headsets will help standardize audio virtualization formats. There was recently a standard for 3D audio formats that came out (I'd have to search for the link), so ostensibly there has already been some progress, but a standard isn't anything if nobody uses it. Reading up on virtualization I actually see a lot of stuff in the Ambisonics formats, but for some reason we keep getting new surround formats for movies :shrug.jpg: ($$$$$) My hope is that headphone virtualization technologies will get a boost from the VR world, because headphones seem to go hand-in-glove (ear-in-can?) with wearing a Virtual Boy. We need someone coming up with a svelte way to account for our ears, because people seem averse to sticking mics in their soundholes.
 
Dec 19, 2016 at 3:11 PM Post #6,546 of 17,336
Dec 19, 2016 at 3:35 PM Post #6,548 of 17,336
I agree.....
 
To be honest, I tend to prefer speakers over headphones.
However, when I use headphones, I'm used to their not delivering a "normal" sound stage, and it doesn't bother me that much.
 
The closest thing I've heard to "real speaker sound through headphones" is probably the SPL Phonitor headphone amp. It includes several settings for simulating "real speakers in a real room" on headphones - and, to me, they sound quite natural. (Note that it only does stereo - it doesn't do anything to enhance the sound stage past "natural speaker sound". Unfortunately, as a headphone amp, it is quite expensive.)
 
By contrast, the old Carver Sonic Holography circuit produced an interesting "3D effect" with speakers, although I would NOT classify it as "natural sounding". (I never heard it with headphones.) That feature was available in their separate C-9 box, and in several of their preamps. (They still turn up pretty cheap on eBay if you want to check it out.)
 
The reason most of these effects alter the timbre is that they use phase shifts and phase relationships both to select which sounds go where and to simulate sounds coming from different directions. And, whenever you rely on phase cancellation - cancelling out sounds at certain frequencies - you tend to alter the overall tonal balance as well.
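The comb-filtering mechanism behind that timbre shift is easy to put numbers on. Summing a signal with a copy of itself delayed by tau seconds gives the magnitude response |1 + e^(-j*2*pi*f*tau)| = 2*|cos(pi*f*tau)|: deep nulls at odd multiples of 1/(2*tau) and 6 dB peaks between them. A sketch (the 1 ms delay in the comment is just an illustrative value):

```python
import math

def comb_magnitude(freq_hz, delay_s):
    """Magnitude response at freq_hz of summing a signal with a copy
    of itself delayed by delay_s seconds:
    |1 + exp(-j*2*pi*f*tau)| = 2 * |cos(pi * f * tau)|."""
    return 2.0 * abs(math.cos(math.pi * freq_hz * delay_s))

# With a 1 ms delay, 500 Hz is cancelled completely while 1 kHz is
# doubled - exactly the kind of frequency-dependent boost/cut that
# changes an instrument's perceived timbre.
```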
 
I know some people who quite like PLIIx - although I personally don't find it very satisfying. Note that, if you want to play with different options, there are various ways of using surround sound decoders to synthesize and enhance multi-channel surround sound.... which can then be mixed back into two channels for headphone listening. There are a lot of options like this which can be implemented in foobar2000 via various plugins (for example, you can run your two-channel source through a PLIIx decoder, or through the Dolby Headphone DLL, then through a mixer to adjust the relative levels, then convert the result back to two channels and play it in stereo through your headphones). I don't recall the specifics, but I've seen several detailed - and somewhat complicated - descriptions of how to do this.
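The "mix back into two channels" step at the end of such a chain is usually just a standard stereo fold-down. A minimal sketch, using the common ITU-style 0.707 coefficient (handling of the LFE varies between implementations; here it is simply dropped):

```python
import math

A = 1.0 / math.sqrt(2.0)  # ~0.707, the usual ITU-R BS.775 fold-down coefficient

def downmix_5_1(fl, fr, c, lfe, sl, sr):
    """Fold one 5.1 sample frame down to stereo. The centre and each
    surround channel are attenuated by ~3 dB and summed into the
    nearer front channel; the LFE is dropped, as many stereo
    fold-downs do."""
    left = fl + A * c + A * sl
    right = fr + A * c + A * sr
    return left, right
```

Running synthesized surround through a fold-down like this is what lets the whole chain end at a normal pair of headphones.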
 
Quote:
  @KeithEmo I've listened to many of these spatial-effects tricks and have often felt that they alter the timbre of musical instruments, which to me is a non-starter.

 
Dec 19, 2016 at 3:39 PM Post #6,549 of 17,336
  Y'all just need a wave field synthesis rig:

 
 
Hopefully enough proliferation of VR headsets will help standardize audio virtualization formats. There was recently a standard for 3D audio formats that came out (I'd have to search for the link), so ostensibly there has already been some progress, but a standard isn't anything if nobody uses it. Reading up on virtualization I actually see a lot of stuff in the Ambisonics formats, but for some reason we keep getting new surround formats for movies :shrug.jpg: ($$$$$) My hope is that headphone virtualization technologies will get a boost from the VR world, because headphones seem to go hand-in-glove (ear-in-can?) with wearing a Virtual Boy. We need someone coming up with a svelte way to account for our ears, because people seem averse to sticking mics in their soundholes.


Don't know where the above came from, but wave field synthesis is an old concept going back at least to Bell Labs research in the 1930s. The idea: across the front of the hall, put up a row of microphones (Bell used as many as 128), then place a speaker at each mic position for playback. In a perfect world you recreate the whole room's wavefield, with the proper delays and such, with no processing. Bell concluded you could get some of the effect with as few as three speakers across the front. The above extends that idea to all sides of the room. Various approaches have been developed upon the idea - some allowing fewer speakers, etc. None has become standard. As usual, practice and theory are not quite the same.
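The "proper delays" part falls straight out of the geometry: each speaker in the row stands in for a piece of the virtual source's wavefront, so it fires late in proportion to its extra distance from that source. This sketch covers only the delay half of the story (real WFS also applies per-speaker gains and filtering), and the positions in the example are made up:

```python
import math

C_SOUND = 343.0  # speed of sound in air, m/s, at room temperature

def array_delays(source_xy, speaker_xys):
    """Per-speaker delays (seconds) recreating the wavefront of a
    virtual point source behind a speaker row: the speaker nearest
    the source fires first, the others proportionally later."""
    dists = [math.dist(source_xy, s) for s in speaker_xys]
    nearest = min(dists)
    return [(d - nearest) / C_SOUND for d in dists]

# Three speakers a metre apart, virtual source 3 m behind the centre:
# the centre speaker fires first, the outer two slightly later.
delays = array_delays((0.0, -3.0), [(-1.0, 0.0), (0.0, 0.0), (1.0, 0.0)])
```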
 
Dec 19, 2016 at 4:00 PM Post #6,550 of 17,336
 
Don't know where the above came from, but wave field synthesis is an old concept going back at least to Bell Labs research in the 1930s. The idea: across the front of the hall, put up a row of microphones (Bell used as many as 128), then place a speaker at each mic position for playback. In a perfect world you recreate the whole room's wavefield, with the proper delays and such, with no processing. Bell concluded you could get some of the effect with as few as three speakers across the front. The above extends that idea to all sides of the room. Various approaches have been developed upon the idea - some allowing fewer speakers, etc. None has become standard. As usual, practice and theory are not quite the same.


I've always associated wave-field with the row-of-mics technique and Ambisonics with the minimal-speaker setup. They seem to differ in how they simplify the underlying wave equations, but they aim to do the same thing. Smart people have been working on this stuff for a while, yet I still stream GoT in stereo :frowning2:
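For the Ambisonics side of that comparison, the core idea fits in a few lines: a mono source at a known direction is encoded into a fixed set of spherical-harmonic channels, independent of any speaker layout. A first-order encode in the traditional FuMa B-format convention, with the usual 1/sqrt(2) weight on W:

```python
import math

def encode_b_format(sample, azimuth_deg, elevation_deg):
    """First-order Ambisonic (FuMa B-format) encode of one mono
    sample arriving from the given direction. W is the omni
    component; X, Y, Z are front/back, left/right, and up/down
    figure-of-eight components."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample / math.sqrt(2.0)
    x = sample * math.cos(az) * math.cos(el)
    y = sample * math.sin(az) * math.cos(el)
    z = sample * math.sin(el)
    return w, x, y, z

# Decoding is a separate, layout-specific matrix - which is why the
# same four channels can later feed speakers or a binaural renderer.
```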
 
Dec 20, 2016 at 2:00 AM Post #6,551 of 17,336
 
Don't know where the above came from, but wave field synthesis is an old concept going back at least to Bell Labs research in the 1930s. The idea: across the front of the hall, put up a row of microphones (Bell used as many as 128), then place a speaker at each mic position for playback. In a perfect world you recreate the whole room's wavefield, with the proper delays and such, with no processing. Bell concluded you could get some of the effect with as few as three speakers across the front. The above extends that idea to all sides of the room. Various approaches have been developed upon the idea - some allowing fewer speakers, etc. None has become standard. As usual, practice and theory are not quite the same.

 
Man, I would love to hear such a thing...
 
Dec 20, 2016 at 6:15 AM Post #6,552 of 17,336
I've seen that most VR skull crushers are associated with the Dolby Atmos thing (logical, as they need vertical cues). If gamers carry the tech on their weak shoulders and necks for a few more years, maybe it could come out as a solid standard? I don't believe it will, but I hope for a standard - any standard, really - so we can stop with the scorched-earth marketing strategy of rendering everything obsolete before it's even born.
 
Dec 20, 2016 at 8:53 AM Post #6,553 of 17,336
  I've seen that most VR skull crushers are associated with the Dolby Atmos thing (logical, as they need vertical cues). If gamers carry the tech on their weak shoulders and necks for a few more years, maybe it could come out as a solid standard? I don't believe it will, but I hope for a standard - any standard, really - so we can stop with the scorched-earth marketing strategy of rendering everything obsolete before it's even born.

Man, you're bad for business. These guys want to enjoy picking the pockets of unsuspecting consumers for as long as they can get away with it.
 
Dec 20, 2016 at 10:29 AM Post #6,554 of 17,336
[1] It is interesting that Atmos is being heavily promoted for home theater, but nobody is even talking about "Atmos encoded music discs".
[2] ... some audiophiles seem to prefer to be "sitting in the middle of the orchestra", or "front row center".... rather than a few rows back with the orchestra clearly in front of them.

 
1. As I mentioned, no one has really come up with a musical genre/style which even takes full advantage of/relies on 5.1 yet. So a format which extends the capabilities of 5.1 even further is even more superfluous. No doubt we'll see the odd experimental album/track in Atmos at some stage, but I can't see it becoming any sort of standard for music. The Object Oriented nature of Dolby Atmos is great for film, where we have a lot of moving sound sources (most of which are established/supported visually), but that's not the case with music, where all the sound sources are expected to be stationary - and obviously with a music recording there are no visuals to support any illusion we may wish to create contrary to this stationary expectation. The fundamental problem, though, is economics. It costs more to build a good 5.1 mixing environment (and more still a Dolby Atmos mix room/stage), and it takes more time to record/create the additional music/sound to put in those additional channels - considerably more time, because with music there are no conventions to inform the arrangement/mixing. So, that's a lot more time, at a higher cost per hour, with a greater risk of failure, all during a time of decreasing revenues from music sales.
 
2. I agree that's what "some audiophiles seem to prefer", I'm not disputing they have that preference, what I'm saying is that what they appear to prefer is a fallacy which doesn't exist. Orchestral recordings are designed from the perspective of some distance from the orchestra. Actually sitting in the middle of an orchestra sounds completely different to just massively widening the stereo image of a distant orchestra. Just as recording a car in stereo from some distance and then massively widening that stereo image does not result in playback which sounds anything like actually sitting in that car. Maybe those audiophiles just have no idea what sitting in the middle of an orchestra sounds like or maybe they just don't care because they're into the sound of their equipment rather than the music. Either way, it makes a bit of a nonsense of their demands for higher SQ.
 
  I am not saying that makes SQ better as a matter of fact. It is just something I like. A format which allows some adjustment to suit personal taste would be great.

 
There's one of the big problems with audiophilia. What some of the more extreme audiophiles "like" doesn't necessarily have any direct correlation with SQ, however, because they are typically unable to make any distinction between what they like and SQ, we end up with all kinds of ridiculous claims and then ludicrous explanations to justify/rationalise those claims. Fortunately, you now seem to be making that distinction but unfortunately it doesn't really matter because what you want/would like: 1. Isn't really possible and even if it were, 2. There isn't enough of a demand for it.
 
  @KeithEmo I've listened to many of these spatial-effects tricks and have often felt that they alter the timbre of musical instruments, which to me is a non-starter.

 
I've never really understood what audiophiles mean by "soundstage" - I'm presuming a combination of what we in the pro audio world call stereo image and depth/presence, or audio perspective? To create depth/presence actually requires a change in frequency response, and therefore the opposite of what you're saying is actually true. What you're really saying is that you want the FR of the instruments to change in line with how the brain expects FR to change with distance/position, so that the change appears natural and the brain's illusion of timbre is maintained.

Unfortunately that's not really possible - it's like trying to change the ingredients in a cake after it's already been baked. With stereo we've effectively got two elements rather than just a single whole cake, which presents opportunities to unpick and rearrange the mix, but we can only unpick it to a limited extent, and even what is unpicked can only be rearranged to a limited extent. Mostly this is accomplished by changing phase relationships, which, as KeithEmo stated, produces fairly unpredictable FR interactions rather than the FR interactions and other transfer functions actually appropriate to the instruments' new spatial positions. The results are surprisingly good on a superficial level, but fall apart on closer inspection.

What's interesting about the "Indoor" tool I posted a video of (post #6525) is that it heralds a new generation of pro audio tools which automatically take care of all the FR, phase, early reflection and reverb interactions (within a 360deg space) from different audio perspectives. In other words, it is now possible to automatically create a convincing transfer function appropriate to different/new spatial positions. However, it requires individually processing each element of the mix, and therefore only solves half of this particular problem, as we can't yet un-mix a stereo mix and get at all those individual elements.
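The "FR changes with distance" point can be made concrete with a deliberately crude sketch: pushing a source "further back" means dropping its level roughly as 1/r and dulling its highs, since air absorption and typical room response attenuate high frequencies more with distance. The cutoff formula below is purely illustrative, not a measured air-absorption model:

```python
import math

def push_back(samples, rate_hz, distance_m):
    """Crude distance cue: 1/r level drop plus a one-pole low-pass
    whose cutoff falls with distance. Illustrative numbers only -
    a real perspective tool models measured transfer functions."""
    d = max(distance_m, 1.0)
    gain = 1.0 / d                      # inverse-distance level drop
    cutoff_hz = 16000.0 / d             # hypothetical: halves per doubling of distance
    a = math.exp(-2.0 * math.pi * cutoff_hz / rate_hz)
    out, y = [], 0.0
    for x in samples:                   # one-pole low-pass, sample by sample
        y = (1.0 - a) * (gain * x) + a * y
        out.append(y)
    return out
```

Run an impulse through this at 1 m and at 16 m and the distant version comes back both quieter and duller - the brain's cue that something moved away. Doing this convincingly, per instrument, is what the newer perspective tools automate.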
 
G
 
Dec 20, 2016 at 11:02 AM Post #6,555 of 17,336
I think you've got it pegged.....
 
Many audiophiles are quite convinced that "sound stage" is a "thing" - separate from things like phase and frequency. They think of each entity in the original experience as a separate physical object in the recording, so they talk about things like "an instrument sounding too big" or "the sound stage being spread too wide" - as if a speaker can actually position a specific instrument in the wrong place. (And, as you noted, the reality would be much more complex. Even with a system like Atmos, if you wanted to treat "the drum" as an "object", you would still have to first combine all the separate microphones used to record the drum set into a cohesive "drum as entity" - before you could even consider "positioning the drum or moving it around". And this would be a massively complicated undertaking, with very limited benefits.)
 
This misunderstanding seems to be why many people say things like "this or that piece of equipment delivers a detailed and exact sound stage - with lots of depth" - when the reality is more like "it produces some interesting phase and frequency response anomalies that simulate what I imagine a real performance sounds like quite well". They have an audible image in their head of what they expect a live performance to sound like, and then look for equipment that delivers something close to that expectation, with no actual understanding of what's involved. (It's kind of like someone critiquing how the tint of the sunlight, and the falloff of the shadows, aren't quite perfectly rendered on their monitor in a particular movie scene - when we happen to know that it was shot in an indoor studio, at midnight, the "sunlight" was indoor spotlights, and half of the shadows were added later with CGI.)

 
To me, the problem is that they're conflating ACCURATE REPRODUCTION with reproduction that simply produces a pleasant result similar to what they expect.
(They decide what they expect, then rate playback equipment on how well it meets those expectations - and reality often has very little to do with it.)
 
Quote:
   
1. As I mentioned, no one has really come up with a musical genre/style which even takes full advantage of/relies on 5.1 yet. So a format which extends the capabilities of 5.1 even further is even more superfluous. No doubt we'll see the odd experimental album/track in Atmos at some stage, but I can't see it becoming any sort of standard for music. The Object Oriented nature of Dolby Atmos is great for film, where we have a lot of moving sound sources (most of which are established/supported visually), but that's not the case with music, where all the sound sources are expected to be stationary - and obviously with a music recording there are no visuals to support any illusion we may wish to create contrary to this stationary expectation. The fundamental problem, though, is economics. It costs more to build a good 5.1 mixing environment (and more still a Dolby Atmos mix room/stage), and it takes more time to record/create the additional music/sound to put in those additional channels - considerably more time, because with music there are no conventions to inform the arrangement/mixing. So, that's a lot more time, at a higher cost per hour, with a greater risk of failure, all during a time of decreasing revenues from music sales.
 
2. I agree that's what "some audiophiles seem to prefer", I'm not disputing they have that preference, what I'm saying is that what they appear to prefer is a fallacy which doesn't exist. Orchestral recordings are designed from the perspective of some distance from the orchestra. Actually sitting in the middle of an orchestra sounds completely different to just massively widening the stereo image of a distant orchestra. Just as recording a car in stereo from some distance and then massively widening that stereo image does not result in playback which sounds anything like actually sitting in that car. Maybe those audiophiles just have no idea what sitting in the middle of an orchestra sounds like or maybe they just don't care because they're into the sound of their equipment rather than the music. Either way, it makes a bit of a nonsense of their demands for higher SQ.
 
 
There's one of the big problems with audiophilia. What some of the more extreme audiophiles "like" doesn't necessarily have any direct correlation with SQ, however, because they are typically unable to make any distinction between what they like and SQ, we end up with all kinds of ridiculous claims and then ludicrous explanations to justify/rationalise those claims. Fortunately, you now seem to be making that distinction but unfortunately it doesn't really matter because what you want/would like: 1. Isn't really possible and even if it were, 2. There isn't enough of a demand for it.
 
 
I've never really understood what audiophiles mean by "soundstage" - I'm presuming a combination of what we in the pro audio world call stereo image and depth/presence, or audio perspective? To create depth/presence actually requires a change in frequency response, and therefore the opposite of what you're saying is actually true. What you're really saying is that you want the FR of the instruments to change in line with how the brain expects FR to change with distance/position, so that the change appears natural and the brain's illusion of timbre is maintained.

Unfortunately that's not really possible - it's like trying to change the ingredients in a cake after it's already been baked. With stereo we've effectively got two elements rather than just a single whole cake, which presents opportunities to unpick and rearrange the mix, but we can only unpick it to a limited extent, and even what is unpicked can only be rearranged to a limited extent. Mostly this is accomplished by changing phase relationships, which, as KeithEmo stated, produces fairly unpredictable FR interactions rather than the FR interactions and other transfer functions actually appropriate to the instruments' new spatial positions. The results are surprisingly good on a superficial level, but fall apart on closer inspection.

What's interesting about the "Indoor" tool I posted a video of (post #6525) is that it heralds a new generation of pro audio tools which automatically take care of all the FR, phase, early reflection and reverb interactions (within a 360deg space) from different audio perspectives. In other words, it is now possible to automatically create a convincing transfer function appropriate to different/new spatial positions. However, it requires individually processing each element of the mix, and therefore only solves half of this particular problem, as we can't yet un-mix a stereo mix and get at all those individual elements.
 
G

 
