How do you measure sound stage?

Feb 26, 2024 at 6:10 AM Post #91 of 896
Vinyl has not so great channel separation.

If you’re basing a theory on something that should be indistinguishable being distinguishable, then before you start asking why, you need to prove that such a situation actually exists. Can you point to any reputable controlled tests that indicate that might be the case?
 
Feb 26, 2024 at 6:36 AM Post #92 of 896
Clue offered: SECONDARY DEPTH CUES are embedded into the music itself as part of the mix. PRIMARY DEPTH CUES are reflections off the walls and timing changes due to distance caused by room acoustics. The mix is what the sound engineers made it. It can have tons of reverb or be dry- depending on the aesthetic choices of the sound mixer. The room adds a spatial envelope to the physical sound-
How official are these terms, primary and secondary cues? Shouldn't the cues in the mix be called primary, since they will be the same for everyone, while the cues created by the acoustic properties of the listening environment vary from listener to listener? I'm just questioning the terminology (which in acoustics and audio can be quite badly established). That's why I prefer to use terminology such as spatial cues in the mix and spatial cues generated by the listening room.

In the end our spatial hearing is not supposed to be able to tell apart these different layers of cues. If I am listening to a Symphony (say Elgar's second), I don't want to hear it as a recording with spatial cues played in my small listening room. I want to hear it as if I were in a large music hall. Of course the spatial cues in the recording and the acoustics of my listening room aren't perfect and I don't get that 100 %, but I may get something toward that, especially if the spatial cues in the recording go well with the acoustic properties of my room (optimal mixture of direct sound, early reflections and hall reverberation).
 
Feb 26, 2024 at 6:42 AM Post #93 of 896
Vinyl has not so great channel separation.
Exactly! If good soundstage was just about large channel separation, vinyl soundstage would suck. However, a lot of people do not think that way.
 
Feb 26, 2024 at 6:56 AM Post #94 of 896
Vinyl soundstage isn’t as good as CD. Especially at the inner grooves. That’s because of reduced channel separation.
 
Feb 26, 2024 at 8:14 AM Post #95 of 896
1. Depends on the exact DAC and the test conditions, assuming you’re referring to human hearing. In most cases the answer is “no”.
In what circumstances would the answer be “yes”?
 
Feb 26, 2024 at 8:26 AM Post #96 of 896
In what circumstances would the answer be “yes”?
A NOS DAC with a slow roll off at the top. A DAC with some sort of DSP or weird filter engaged. A DAC that is deliberately colored to some sort of house sound. (I’ve never run across one like this, but I’m told they exist in super high end audiophool circles.) A DAC that is defective or is being used improperly.
 
Feb 26, 2024 at 8:38 AM Post #97 of 896
A NOS DAC with a slow roll off at the top. A DAC with some sort of DSP or weird filter engaged. A DAC that is deliberately colored to some sort of house sound. (I’ve never run across one like this, but I’m told they exist in super high end audiophool circles.) A DAC that is defective or is being used improperly.
Interesting. So with those exceptions noted, your view is that all other DAC’s sound exactly the same. Audibly transparent, regardless of price. Is that correct?
 
Feb 26, 2024 at 8:46 AM Post #98 of 896
They should, because 16/44.1 is audibly transparent. So if they sound different, they aren’t producing sound to spec, and something must be wrong with them. I’ve done my own controlled listening tests and verified that every DAC and player I have sounds the same, from an Oppo HA-1 all the way down to a $40 Walmart DVD player.
 
Feb 26, 2024 at 9:23 AM Post #99 of 896
@knownothing2
This thread blew up quick.

My nagging question reading your argumentation is this: why do you need an algorithm for this question?

This whole proposal strikes me as a no-true-Scotsman fallacy at worst and a god-of-the-gaps argument at best, because the basis on which the analysis would be conducted must first be demonstrated to exist once all variables are controlled.

So why do you need an AI to conduct an analysis like this? A null test would be sufficient to prove the premises of your argument. The research you pulled up is primarily for the development of robotics, to help bring robots up to the ability of humans to localize based on sound cues.

The procedure here seems to me to be this: prove first that there are any possibly significant differences between two or three experimental factors after controlling for the variables to establish the validity of your premise, then actually prove those differences are significant in a double blind test. At that point the experimental factor could shift to a robot's ability to aurally localize vs humans.
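For concreteness, the null test mentioned above is just level-matched subtraction of two time-aligned captures; a minimal numpy sketch (the function name and the 1e-12 floor are my own choices, and real captures would first need sample-accurate alignment):

```python
import numpy as np

def null_residual_db(a, b):
    """Level-match b to a, subtract, and report the residual in dB
    relative to full scale. Assumes both captures are already
    time-aligned 1-D float arrays."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # Least-squares gain match, so a pure level difference nulls out.
    gain = np.dot(a, b) / np.dot(b, b)
    residual = a - gain * b
    rms = np.sqrt(np.mean(residual ** 2))
    return 20 * np.log10(max(rms, 1e-12))
```

A deep null (well below the noise floor of the chain) would show the two devices are producing the same signal before any listening test is needed.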
 
Feb 26, 2024 at 10:16 AM Post #100 of 896
Did anyone read this article? I don't mean quote mine to cherry pick support for your existing firmly held views. I mean read the article.

I might just eventually read it for my own interest. But if someone else has read it (or plans to), I'd be interested in discussing whether/how it informs the discussion in this thread.

Debates about what you guess it says are valueless for me.
I now have read a significant part of the article (from the start to a bit into the part headed "Methods"). It sure is very interesting.

To the question whether/how it informs the discussion in this thread: Maybe in some ways but not like it gives a direct and conclusive answer or without some additional kinds of experiments.

The experiments done are with a limited number of sounds that are mostly created in a virtual room (or, for some experiments, a virtual anechoic chamber), where reflections (up to 0.5s) are added, and then all the resulting sounds arriving at the subject's position (the subject being the "artificial listener") are binauralised using the HRIR of a dummy head with torso.
(Next to the head and torso represented by the HRIR the "Artificial listener" roughly consists of a virtual cochlea and then the actual neural network.)
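At its core the binauralisation step is just per-ear convolution of each source with the HRIR pair; a toy sketch with made-up placeholder impulse responses (real HRIRs come from dummy-head measurements, as in the article):

```python
import numpy as np

def binauralise(source, hrir_left, hrir_right):
    """Render a mono source to two ears by convolving it with a
    left/right HRIR pair. The HRIRs here are placeholders; real ones
    encode interaural time and level differences plus pinna filtering."""
    left = np.convolve(source, hrir_left)
    right = np.convolve(source, hrir_right)
    return np.stack([left, right])
```

Multiple sources (and their room reflections) would each be convolved with the HRIR for their direction of arrival and summed per ear.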

So in itself it is based on localizing a limited number of sounds in the environment the natural way, which is something different from localizing sound in a (normal) stereo recording played over speakers (for example).
But you could of course present the virtual listener with a binauralised signal based on two loudspeakers in the virtual room, that are playing a recording with a limited number of sounds panned somewhere between the speakers, and just see what happens! Maybe the algorithm will simply identify 2 sound sources (the 2 loudspeakers) if it is not "susceptible" to the stereophonic illusion. Or maybe it is "susceptible" to the stereophonic illusion and it will identify sound sources in between the speakers.
(But standard panning is just one of many "stereo tricks", so this is just scratching the surface in a way.)
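To make "standard panning" concrete, here is a sketch of the common sin/cos constant-power pan law (one of several pan laws in use; the [-1, 1] pan convention is my own choice for the example):

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Pan a mono signal between two speakers.

    pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    The sin/cos law keeps total acoustic power constant across
    pan positions, which is what creates the phantom image."""
    theta = (pan + 1.0) * np.pi / 4.0   # map [-1, 1] -> [0, pi/2]
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return left, right
```

A centre-panned source comes out of both speakers at equal level, and it is the listener's spatial hearing that fuses the two into a phantom source between them.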
For this experiment one could of course send the stereo recording used through a DAC and ADC before going into the virtual room.
(For the sake of this thread discussion. I don't seriously see how it could affect anything unless the DAC had very serious issues. Let me note here that in the experiments they also added sources of noise in the virtual room, with SNR ranging from 5 to 30 dB, hard to see how DAC imperfections below -90 dB or -100 dB could be a problem if those noise sources were not. And channel synchronisation of DACs should be good enough by a large margin I think.)
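Scaling a noise source to a target SNR, as in the article's 5 to 30 dB range, can be sketched like this (white noise assumed purely for simplicity; the experiments' noise sources are spatialised in the room):

```python
import numpy as np

def add_noise_at_snr(signal, snr_db, rng=None):
    """Add white noise scaled so that the signal-to-noise ratio
    is snr_db (power ratio in dB) relative to the signal."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.standard_normal(len(signal))
    sig_power = np.mean(np.square(signal))
    noise_power = np.mean(np.square(noise))
    scale = np.sqrt(sig_power / (noise_power * 10 ** (snr_db / 10)))
    return signal + scale * noise
```

At 5 dB SNR the added noise sits only 5 dB below the signal, which dwarfs any DAC artifact at -90 dB or -100 dB.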
Another way to test the influence of different DACs could be by just using one channel of the DAC to generate one of the sounds to be localised in the standard experiments (not concerning the stereophonic illusion). But then the most likely (or rather least unlikely) potential cause of problems by the DAC - any differences in handling the left and right channels - would not play a role.

Anyway, regardless of the relevance to this thread discussion I find the article itself very, very interesting.
The model shows many properties (and limitations) similar to those of human hearing.
 
Feb 26, 2024 at 11:15 AM Post #101 of 896
Vinyl soundstage isn’t as good as CD. Especially at the inner grooves. That’s because of reduced channel separation.
Soundstage is a complex mixture of many things. If maximizing channel separation gave the best soundstage, hard panned ping pong stereo recordings from late 50s and early 60s would have the best soundstage by far. I don't think so...

CD has far superior channel separation compared to vinyl, but that superiority somehow vanishes when it is about the perception of soundstage. In fact, vinyl actually protects a little bit from excessive unnatural channel separation created for example by hard panned instruments.
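The channel-separation difference can be simulated by leaking each channel into the other; a rough sketch (the roughly 30 dB figure for a good vinyl cartridge and 90+ dB for CD playback are ballpark numbers, not measurements of any specific gear):

```python
import numpy as np

def apply_crosstalk(left, right, separation_db):
    """Leak each channel into the other at -separation_db.
    separation_db ~ 30 roughly mimics vinyl playback;
    ~90 and up is typical of digital playback chains."""
    leak = 10 ** (-separation_db / 20)
    return left + leak * right, right + leak * left
```

Applying this to a hard-panned recording slightly pulls the images inward, which is the mild "protection" from unnatural separation described above.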
 
Feb 26, 2024 at 11:52 AM Post #102 of 896
They should, because 16/44.1 is audibly transparent. So if they sound different, they aren’t producing sound to spec, and something must be wrong with them.
I struggle with this concept: the notion that because 16/44.1 is audibly transparent, any DAC* that supports 16/44.1 is by definition also audibly transparent. A DAC’s ability to handle 16/44.1 audio doesn’t ensure it provides an audibly transparent output. The specification (16/44.1) is a baseline for compatibility, not a measure of output performance quality.

My subjective experience with many different DAC’s tells me that DAC’s don’t all sound identical. I would assume this is down to differences in the design & implementation of the DAC chip, the analog output stage, the power supply etc etc all contributing to the extent to which a DAC can achieve audibly transparent performance.

*noting your previous exceptions.

As a side note, why did you buy the HA-1?
 
Feb 26, 2024 at 1:04 PM Post #104 of 896
@knownothing2
So why do you need an AI to conduct an analysis like this? A null test would be sufficient to prove the premises of your argument. The research you pulled up is primarily for the development of robotics, to help bring robots up to the ability of humans to localize based on sound cues.
@KinGensai - this started as a thought exercise that eventually got me shut off from the thread on ASR. The thread was titled something like: “Why isn’t the soundstage from a DAC measurable?”

Here is my logic: I and other audio enthusiasts occasionally perceive differences in soundstage reproduction in our systems when we change gear, including DACs. Proponents of standard bench top measurements like those employed by John Atkinson in Stereophile and Amir on ASR announce with tremendous certainty that such observations of differences in soundstage are impossible, especially with DACs because if they measure well in standard tests, they cannot possibly “sound different”. Furthermore, complicated pseudo effects like soundstage are immeasurable, and why would you bother trying because there is no engineering basis for why they should have an effect on soundstage. And on and on and on like that.

To respond to this logic, I started looking for the scientific basis of sound localization in humans and came across the recent article in Nature from the researchers at MIT. I started wondering if there was a way to modify or adapt their approach and algorithms to design an empirical test of sound localization in a stereo reproduction system and room setting holding all variables constant except for the DAC. This is just a thought exercise at this point.

One of the ASR members responding to my idea in the thread suggested that I just run a human blind AB test with different DACs, all other variables fixed and employ multiple human subjects. Then in the same sentence they suggested I be prepared to have any results severely challenged by ASR members if the test resulted in statistically different perceptions of soundstage by the human subjects (I guess because that would be considered impossible based on first engineering principles in the view of ASR adherents). My solution was to suggest a pairing of a “machine test” and a human trial to come at this question from two angles.

Full disclosure, I find it appealing that if there existed some objective measure of soundstage attributes, it could then be applied to all elements of the stereo reproduction chain, including cables. I know that thought is both scary and ridiculous to some hifi objectivists, but my personal experience is that swapping speaker cables in my system made one of the biggest improvements in how I perceive soundstage. I would love to try to prove it to myself that I am not just experiencing expectation bias, and that mine and others ears can detect this effect.

I have conducted simple blind AB tests with friends and drafted family members comparing components and the results were generally conservative and unanimous, either confirming no difference, or confirming a very clear difference, depending on the gear evaluated. Soundstage may have been one of the parameters reported by the subjects, but it was not the focus, nor would I call the results statistically rigorous. YMMV

kn
 
Feb 26, 2024 at 1:19 PM Post #105 of 896
@knownothing2
Precluding NOS DACs, did you control for selecting the exact same anti-image filter? Most of the changes between DACs are extremely far below audibility, but filters affect the impulse response and can cause a significant enough difference to affect post-impulse amplitude. Minimum phase filters cause prolonged post-impulse ringing in certain implementations (https://mrapodizer.wordpress.com/2011/08/16/technical-analysis-of-the-meridian-apodizing-filter/) that can mess with reverb spatial cues already in the track. Whether or not there is a significant difference in the delta comparison of minimum vs linear filters, it makes enough of a difference to irritate me, so I choose linear filters whenever possible.

For your analysis, this has to be controlled for to eliminate this variable.
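The linear vs minimum phase difference is easy to see in the impulse responses themselves; a sketch using scipy (the filter length and cutoff are arbitrary illustration values, not any particular DAC's reconstruction filter):

```python
import numpy as np
from scipy.signal import firwin, minimum_phase

# Linear-phase lowpass: symmetric taps, so ringing appears both
# before and after the main tap (pre- and post-ringing).
lin = firwin(127, 0.45)

# Minimum-phase version with the same magnitude response:
# the energy is pushed to after the peak (no pre-ringing,
# longer post-ringing).
minp = minimum_phase(lin)

def pre_peak_energy_fraction(h):
    """Fraction of impulse-response energy arriving before the main tap."""
    peak = np.argmax(np.abs(h))
    return np.sum(h[:peak] ** 2) / np.sum(h ** 2)
```

Whether those differences survive down at the levels where reconstruction filters operate is exactly the kind of thing a controlled comparison would have to establish.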
 
