What creates soundstage in headphones?
Jul 8, 2020 at 10:47 PM Post #226 of 288
3. No it doesn't! We can have a lot of reverb in a small room (say a tiled toilet for example) and relatively little reverb in a large room (say a cinema for example) yet our hearing perception is NOT fooled into believing a small toilet is larger than a cinema!

Yes, yes. The temporal structure of reverberation matters. A small toilet can't sound larger than it is, because the reflections arrive so quickly.
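To put rough numbers on "so fast": a back-of-the-envelope sketch with assumed listener-to-wall distances (my numbers, purely illustrative). In a small room the first wall bounce arrives only a few milliseconds after the direct sound, well inside the window where hearing fuses it with the direct sound; in a large hall it arrives tens of milliseconds later.

```python
# Back-of-the-envelope: first-reflection delay for an assumed wall
# distance, source and listener roughly mid-room. Speed of sound ~343 m/s.
C = 343.0  # m/s

for room, wall_m in [("tiled toilet", 1.0), ("cinema", 15.0)]:
    extra_path = 2 * wall_m                # out to the wall and back
    delay_ms = extra_path / C * 1000
    print(f"{room}: first reflection ~{delay_ms:.1f} ms after the direct sound")
```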
 
Jul 8, 2020 at 11:23 PM Post #227 of 288
2a. No, it shouldn't, because this is the Sound Science forum, not the "What 71dB perceives" forum! And as already cited/established, most people perceive "lateralisation", NOT a miniature soundstage.
Isn't it scientifically interesting that people hear so differently? Some find crossfeed beneficial, others don't. The Sound Science forum ultimately says most music just does not work on headphones because it is mixed for speakers. Well, I say to that: I MAKE IT WORK, and I use crossfeed. No matter how much you try to explain that science doesn't support crossfeed, it was my studies at university that made me realize crossfeed could improve headphone sound for me, and it did. My big mistake was to assume that what works for me works for everybody.
 
Jul 8, 2020 at 11:33 PM Post #228 of 288
1. Yes it does, but in the case of headphone reproduction of a mix intended for speaker reproduction, you're missing the distance cues of the speakers/listening environment. 1a. No, it's not! The research into spatial effects with headphones aims either to achieve a full soundfield (with sounds located at any distance) or, in the case of something like the Smyth Realiser, to produce a full soundstage in front of you (like 2 stereo speakers in a listening room).
1b. But as no one is trying to create a "miniature soundstage", how can it be "done extremely well"? And even if you personally are experiencing what you call a "miniature soundstage", how is it hard to tell apart from a full/real soundstage? Why call it "miniature" if it sounds the same to you as a full soundstage?

By "better than a miniature soundstage" I mean a "speaker soundstage". If the Smyth Realiser works, it sounds like speakers; a miniature soundstage that is that large and good is actually a soundstage. Please try to understand my English: it may be clumsy because I am a Finn and English is not my first language, so expressing these things is VERY difficult for me. Can't you just try to understand what I mean?
 
Jul 9, 2020 at 4:12 AM Post #229 of 288
Headphones don't produce soundstage, large, small or miniature. As I've said a dozen or more times, they don't present primary distance cues, which are essential for soundstage. It isn't soundstage if it goes through the center of your head.

If signal processing becomes more common, inexpensive and sophisticated, maybe. But we aren't there yet.
 
Jul 9, 2020 at 10:33 AM Post #230 of 288
Headphones don't produce soundstage, large, small or miniature. As I've said a dozen or more times, they don't present primary distance cues, which are essential for soundstage. It isn't soundstage if it goes through the center of your head.

If signal processing becomes more common, inexpensive and sophisticated, maybe. But we aren't there yet.

As an electrical engineer I think about this differently: as a chain of transfer functions. If something (such as a 7 dB notch filter around 400 Hz, or spatial cues) is introduced earlier in the chain, removing blocks later doesn't remove those things. Recordings don't contain "accurate" spatial information, because of the way they are produced; they contain a "montage" of different kinds of spatial cues mixed to work well with speakers in a room. This can work because spatial hearing can be fooled in certain ways. That's a huge requirement for stereo sound to make sense: to have a soundstage with only two sound sources. This has its limitations, and that's why we have multichannel systems to mitigate them, but a lot of people are completely used to living with the limitations of stereo sound, often not even realizing such limitations exist.
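That claim is easy to sanity-check numerically. A minimal numpy/scipy sketch (my own toy example; the notch depth and the later FIR stage are arbitrary choices, not anything from this thread): a notch introduced early in an LTI chain is still in the output whether or not a later block is bypassed.

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

fs = 48000
x = np.random.default_rng(0).standard_normal(fs)   # 1 s of white noise

b_n, a_n = iirnotch(w0=400, Q=2, fs=fs)   # block A: notch around 400 Hz
b_later = np.array([0.25, 0.5, 0.25])     # block B: some later FIR stage

y_ab = lfilter(b_later, 1.0, lfilter(b_n, a_n, x))  # A then B
y_a = lfilter(b_n, a_n, x)                          # B bypassed

f = np.fft.rfftfreq(fs, 1 / fs)
band = (f > 380) & (f < 420)
for name, s in [("input", x), ("A then B", y_ab), ("A only", y_a)]:
    level = np.abs(np.fft.rfft(s))[band].mean()
    print(f"{name:9s} mean |X| near 400 Hz: {level:.1f}")
# The notch survives: bypassing the later block never restores 400 Hz.
```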

Speakers in a room create spatial cues of distance. Our ears don't measure these distances with a measuring stick; instead they decode the spatial cues. The creation of these cues is an acoustic phenomenon in 3D space, but they get encoded into the 1D information of air-pressure changes at our eardrums. Theoretically this can be simulated with sophisticated enough signal processing. The key point here is that physical distance is not an absolute requirement, IF we can simulate the resulting spatial cues accurately enough by other means.

Headphones on our head also create spatial cues of distance, but in this case the distance is extremely small, an inch perhaps. The overall ILD, for example, is so huge that it's a strong spatial cue for sound sources right at our ears. Now, if we "break" the headphones into two halves and start moving the parts further from our ears, the sound of course gets quieter, but the spatial cues also change and the sound doesn't sound so near anymore. We can imagine moving the drivers to where speakers would be, 10 feet away or so. If the sound weren't almost inaudibly quiet, the spatial cues would now be similar to speakers. Now, if we think in terms of transfer functions, this "moving the sound sources from your head to where speakers would be" can theoretically be done earlier in the chain of transfer functions. It could be in the recording itself! In fact, binaural recordings are more or less like this. Since binaural recordings make little sense with speakers (the spatial cues get "doubled"), most recordings are not like this. Most recordings assume the spatial cues of distance get added in playback.
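A crude way to see how big that near-ear ILD is from geometry alone: a point-source 1/r sketch that ignores head shadow (so it understates the real effect at high frequencies). The 17.5 cm ear spacing and the source distances are my assumptions.

```python
import numpy as np

def ild_db(near_ear_m, ear_spacing_m=0.175):
    # Level difference from path length alone, 1/r point-source law
    return 20 * np.log10((near_ear_m + ear_spacing_m) / near_ear_m)

print(f"driver ~2.5 cm from one ear: ILD ~ {ild_db(0.025):.1f} dB")   # ~18 dB
print(f"source ~3 m away           : ILD ~ {ild_db(3.0):.2f} dB")     # ~0.5 dB
```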

Recordings are mostly produced to work well with speakers. So the "montage" of different kinds of spatial cues in the recording gets convolved with the spatial cues of speakers in a room at a distance of 10 feet or so. With headphones, the same "montage" gets convolved with the spatial cues of sound sources one inch away. Clearly this is a problem. In my opinion it highlights the problems of the "montage"-like spatiality of the recording, while speakers in a room hide/soften them. What if we mitigate this problem a little? If we lower the ILD, for example, we weaken the spatial cues of very near sound and modify them to be closer (ILD-wise) to the spatial cues of more distant sounds, and we assume our spatial hearing gets fooled, more or less, by this "butchering" of the cues. Since in the transfer-function chain I can do this reduction of ILD before the sound reaches my headphones, I can use a crossfeeder to do it. It turned out this works for me! The result is a miniature soundstage: not a headstage, not a speaker soundstage, but something in between.
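For the record, a minimal crossfeed sketch along those lines (my own toy parameters, not 71dB's actual crossfeeder): bleed each channel into the opposite one, low-passed and slightly delayed, which caps the ILD at low frequencies roughly the way two speakers would.

```python
import numpy as np
from scipy.signal import butter, lfilter

def crossfeed(left, right, fs=48000, bleed_db=-8.0, cutoff_hz=700, itd_us=300):
    """Reduce ILD by mixing an attenuated, low-passed, delayed copy of
    each channel into the opposite channel."""
    b, a = butter(2, cutoff_hz, fs=fs)       # head shadow affects highs most
    gain = 10 ** (bleed_db / 20)
    delay = int(round(itd_us * 1e-6 * fs))   # speaker-like interaural delay
    bleed_r_to_l = gain * np.pad(lfilter(b, a, right), (delay, 0))[:len(right)]
    bleed_l_to_r = gain * np.pad(lfilter(b, a, left), (delay, 0))[:len(left)]
    return left + bleed_r_to_l, right + bleed_l_to_r
```

With these numbers a hard-panned sound goes from effectively infinite ILD to about 8 dB below roughly 700 Hz, which is in the ballpark of what two speakers produce at the ears.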
 
Jul 9, 2020 at 11:32 AM Post #232 of 288
I've seen this movie before. It doesn't end well.
 
Jul 9, 2020 at 1:13 PM Post #233 of 288
I suggest those in the debate read a few papers regarding spatial hearing.

Rayleigh, L. (1907). "On Our Perception of Sound Direction," Philosophical Magazine, vol. 13.
Blauert, J. (1969/70). "Sound Localization in the Median Plane," Acustica, vol. 22, pp. 205-213.
Middlebrooks, J. C., Macpherson, E. A., and Onsan, Z. A. (2000). "Psychophysical Customization of Directional Transfer Functions for Virtual Sound Localization," J. Acoust. Soc. Am., vol. 108.
Blauert, J. (1997). Spatial Hearing: The Psychophysics of Human Sound Localization, MIT Press.
 
Jul 9, 2020 at 5:24 PM Post #234 of 288
Any dealing with the physics of spatial cues?
 
Jul 9, 2020 at 5:35 PM Post #235 of 288
I started reading this little pudding of research some time ago (free download, but people still have to actually read it without falling asleep; I seem pretty bad at that game):
https://www.frontiersin.org/researc...isteners-what-is-the-role-of-learning-and-mul

And "Sound Reproduction The Acoustics and Psychoacoustics of Loudspeakers and Rooms" by Floyd Toole, would surely make a case that soundstage is a rather specific notion about stuff perceived in front of us(the stage...). IMO the general lateralization effect of headphones acts against the idea of soundstage. At least defined that way.
 
Jul 9, 2020 at 9:19 PM Post #236 of 288
That’s the way I’ve always seen soundstage defined.
 
May 16, 2022 at 6:40 PM Post #237 of 288
Fractal surfaces: Generation and acoustic scattering prediction. https://asa.scitation.org/doi/10.1121/1.4783953

This is, I think, the key to making headphones sound less in-your-head while not touching any frequency's dB (no muting): quantum material.

"Physicists discover new quantum electronic material - atomic structure resembling a Japanese basketweaving pattern, “kagome metal” exhibits exotic, quantum behavior."

Reexamining the Mechanical Property Space of Three-Dimensional Lattice Architectures
http://faculty.washington.edu/lmeza...of-3D-Lattice-Architectures-LR-Meza-et-al.pdf
"Searching for kagome multi-bands and edge states in a predicted organic topological insulator"

and
Moiré superlattices at the topological insulator Bi2Te3

[Figure: STM topography images of a Moiré superlattice at the Bi2Te3 surface (60 × 60, 20 × 20 and 10 × 10 nm²), their Fourier transforms, a schematic of three Bi2Te3 quintuple layers, and a simulated Moiré pattern with a 1.2° in-plane rotation.]
https://www.head-fi.org/showcase/drop-hifiman-r7dx-jm-ocd-edition-mod.25830/reviews
 
May 16, 2022 at 7:48 PM Post #238 of 288
Your simulated data doesn't take into account that the sound source is pointing at a shell-shaped lump of flesh called an ear, and the sound is going into your head through a hole in your skull the size of your baby finger. Assuming that the opening of your ear canal is exactly at 90 degrees in your simulation, that spot right at 90 degrees is the only spot that really matters. The way the sound reflects inside the cans doesn't matter, because the space in there is so tiny and simultaneously obstructed and channelled by the flesh of your ear... not to mention that everyone's ear and ear canal is shaped differently. The amount that sound is scattered by an individual's unique anatomy would completely dwarf the effect of the direction the threads go in a fabric. You're defining a beach by a single grain of sand.

Soundstage is created in the mix by means of sound-element placement and secondary depth cues baked into the mix. It depends on the actual physical distance of the speakers from the listener and the sympathetic acoustics of the room to work properly. Listening to commercially recorded music with headphones isn't at all like the effect of actual physical space on a loudspeaker system. In order to have soundstage, you have to account for the element of time: the delay between direct and reflected sound reaching your eardrum. Ear cups on cans are too small to do that. You need signal processing to simulate the time-based delays and reflections.
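To put numbers on the time element (my assumed geometries, purely illustrative): a room-scale wall bounce arrives milliseconds after the direct sound, while a bounce inside an ear cup arrives only about a tenth of a millisecond later, far too short to register as a distance cue. DSP, by contrast, can insert the room-scale delay trivially:

```python
import numpy as np

C = 343.0                                   # speed of sound, m/s
print(f"3 m wall-bounce detour: {3.0 / C * 1e3:.1f} ms extra")    # ~8.7 ms
print(f"4 cm ear-cup bounce   : {0.04 / C * 1e6:.0f} us extra")   # ~117 us

def add_early_reflection(x, fs=48000, delay_ms=8.7, gain_db=-6.0):
    """Toy stand-in for DSP room simulation: mix in one delayed,
    attenuated copy of the signal as a synthetic early reflection."""
    d = int(round(delay_ms * 1e-3 * fs))
    y = x.copy()
    y[d:] += 10 ** (gain_db / 20) * x[:-d]
    return y
```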

The problem is that audiophiles misuse the term soundstage. They talk about soundstage with IEMs that are shoved directly into the ear canals with no space at all around them. A few millimeters of space in over-ear cans isn't any better. Looking at a diagram of a little ear cup as if it were a full-sized living room or a concert hall is wrong-headed.
 
May 16, 2022 at 7:50 PM Post #239 of 288
This is, I think, the key to making headphones sound less in-your-head while not touching any frequency's dB (no muting).
Sorry, but no. The key to getting headphone sound out of your head is personal-HRTF-based binauralisation with a fitting personalised headphone compensation: either personal-HRTF-based binaural simulation of loudspeakers, or personal-HRTF-based direct binaural rendering of sounds.
If/when you hear sound outside your head while playing traditional stereo recordings over headphones, it is most likely due to a coincidental match of unintended spectral cues (possibly changed by the frequency response of the headphones) with your personal HRTF, in combination with coincidentally matching ILD and ITD cues and/or secondary spatial cues.
The differences in "soundstage" between different headphones with normal stereo recordings are negligible compared to the leap forward from proper personal-HRTF-based input signals, even with some of the cheapest headphones.
Try HeSuVi with a personal-HRTF-based preset created with Impulcifer, or try a Smyth Realiser with a personal PRIR and HPEQ measurement, and it will become obvious: you will hear virtual loudspeakers located with pinpoint precision at the position where you measured them, at the distance from which you measured them.
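For the curious, the core of what those tools do is conceptually small. A toy sketch (hypothetical names; the real products add headphone EQ, head tracking and much more): convolve each speaker feed with the personally measured left-ear and right-ear impulse responses for that speaker position, then sum per ear.

```python
import numpy as np
from scipy.signal import fftconvolve

def binauralize(speaker_feeds, ear_irs):
    """speaker_feeds: {name: mono_signal}; ear_irs: {name: (ir_left, ir_right)}
    measured in-ear at the listening position (PRIR-style data).
    Assumes all feeds share one length and all IRs share one length."""
    out_l, out_r = 0.0, 0.0
    for name, x in speaker_feeds.items():
        ir_l, ir_r = ear_irs[name]
        out_l = out_l + fftconvolve(x, ir_l)   # this speaker at the left ear
        out_r = out_r + fftconvolve(x, ir_r)   # ...and at the right ear
    return out_l, out_r                        # play back over headphones
```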
 