Correcting for soundstage?

Discussion in 'Sound Science' started by oblongdarts, Nov 15, 2017.
  1. oblongdarts
    I see a new product being advertised and I was wondering about your take on it. I'm driven to post in this forum because "tube magic" is mentioned four times on the product description page and it seemed hokey. Based on my reading, it seems that more scientific explanations prevail here than elsewhere.

    The company claims their new amp "corrects the fundamental spatial distortion in recordings" and "increases the width of the soundstage beyond that of the speaker placement." At the push of a button, an extra 30 degrees of soundstage can be recreated.

    Question 1: Is there a fundamental spatial distortion in recordings?

    Question 2: What is happening to my ears when (if) I experience depth or width to a recording? I believe that I do experience a soundstage effect that differs between headphones, but I'm at a loss to describe what is happening to the audio or to my perception of the audio. It is similar to the difference I hear between open back and closed back headphones.
     
    Last edited: Nov 15, 2017
  2. oblongdarts
    It seems that if you're adding distortion as an effect, then it's not exactly "restoring" the recording. I don't see how a product can do both.
     
  3. NA Blur
    Phase, crosstalk, and how loud each frequency is relative to the others all contribute to soundstage. When I see anyone or any product claim to vastly increase soundstage, I strongly suspect phase or crosstalk manipulation, which is more trickery than help for the original signal.
     
  4. bigshot
    There are DSPs that can increase the size and definition of the sound field in multichannel speaker setups. I have a Yamaha AVR, and it includes a Stereo to 7.1 DSP that processes the stereo signal to create a center channel and rear ambience calculated to create the ambience of a larger space and fill the room with sound. The end effect is a larger and more defined soundstage than just playing stereo through two speakers. It's a significant improvement over straight two channel.

    I've heard several filters and processes designed for headphones, including the newest kind of 3D headphone mix used on the recent Kraftwerk Atmos Catalogue Blu-ray. But I have to say that none of them created as big an improvement as a multichannel speaker setup does. I guess that's to be expected though, because soundstage really doesn't exist in headphones, only with speakers, where the room defines the space and allows the sound to fill it and bloom. Headphones are just tiny speakers pressed up against your ears. Hard to get any semblance of space into the space between your ears... unless there is a space there already!
     
    HAWX likes this.
  5. oblongdarts
    That could explain a lot. :beyersmile: By the way, I've been enjoying your videos. Thank you for posting them in your profile.

    What you describe makes sense. The product *is* for speakers, but their graphic shows only two speakers. And somehow, they manage to push out a 30 degree wider soundstage. Seems tricky.
     
  6. 71 dB
    Answer 1: Yes, almost all stereophonic recordings have it, because they are produced for speakers, not for headphones. I use crossfeed to reduce/remove spatial distortion in headphone listening.

    Answer 2: Not so much in your ears; rather, your brain is "decoding" the spatial cues that entered your ears. Our hearing can be fooled quite easily into experiencing width, but for a strong experience of depth the spatial cues must be very accurate and must match your own hearing (how your body, neck, head, ears, etc. shape the sound), so it's difficult.
     
  7. oblongdarts
    Thank you. Very interesting. If only I could find a book that could lay this out for me. I need a class or something. I've been tooling around Youtube and just don't know what's trustworthy. A little gun-shy after buying (literally) into too much audiophile hype.
     
  8. jgazal

    Question 1

    Fundamental

    There would be something fundamentally wrong if all types of recordings suffered from one and the same distortion. But each kind of recording has its own peculiar set of distortions.

    Spatial distortion

    Spatial distortion could refer to:

    a) what makes stereo recordings played back with loudspeakers different from stereo recordings played back with headphones;

    OR

    b) what makes our perception while listening to real sound sources different from listening to virtual sound sources reproduced via electro-acoustical transducers.

    Even if we define spatial distortion as the difference in perception of space between a real sound and a reproduced sound, chances are every recording format played on every system suffers some degree of spatial imprecision, as described below.

    But do not interpret the word distortion in this post as a purely negative alteration: although they vary in degree, these are not "fatal" types of distortion, because recordings were made with full knowledge that the reproducing systems probably were not going to match the creative environment exactly. Perhaps with cheaper digital signal processing we have a chance to bring the two environments closer together.

    Recordings

    As per item 1, you need to know how your recording was made and what level of spatial precision you want to achieve, because that determines which kind of distortion you need to address.

    A layman multimedia guide to Immersive Sound for the technically minded (Immersive Audio and Holophony)

    Before you continue to read (and I don't think anyone will read a post this long), a warning: the following description may not be completely scientifically accurate, but it may help to illustrate the restrictions of sound reproduction. Others more knowledgeable may chime in, filling in the gaps and correcting misconceptions.

    First of all you need to know how your perception works.

    How do you acoustically differentiate someone striking the 440 Hz key of a piano from someone blowing an oboe and playing its 440 Hz note?

    It is because the emitted sound: a) has the fundamental frequency accompanied by different overtones, partials that are harmonic and inharmonic to that fundamental, resulting in a unique timbre (frequency domain); and b) has an envelope with its own attack, decay, sustain and release characteristics and transients (time domain).
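    Purely as an illustration (my own toy sketch, not from any textbook or product; the harmonic weights and envelope times are made-up values): two tones with the same 440 Hz fundamental but different overtone weights and envelopes are already enough to sound like different instruments.

```python
import numpy as np

fs = 44100                      # sample rate in Hz
t = np.arange(0, 1.0, 1 / fs)   # one second of time axis

def tone(f0, harmonic_weights, adsr):
    """Sum weighted harmonics of f0 and shape them with a simple
    attack/decay/sustain/release envelope (times in seconds)."""
    x = sum(w * np.sin(2 * np.pi * f0 * (k + 1) * t)
            for k, w in enumerate(harmonic_weights))
    a, d, s_level, r = adsr
    env = np.ones_like(t) * s_level
    n_a, n_d, n_r = int(a * fs), int(d * fs), int(r * fs)
    env[:n_a] = np.linspace(0, 1, n_a)                 # attack
    env[n_a:n_a + n_d] = np.linspace(1, s_level, n_d)  # decay
    env[-n_r:] = np.linspace(s_level, 0, n_r)          # release
    return x * env

# Same fundamental, different spectra and envelopes -> different timbre.
piano_like = tone(440, [1.0, 0.5, 0.25, 0.12], adsr=(0.005, 0.3, 0.3, 0.4))
oboe_like  = tone(440, [0.6, 1.0, 0.8, 0.5, 0.3], adsr=(0.08, 0.1, 0.8, 0.2))
```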

    When you hear an acoustical sound source, how can your brain perceive its location (azimuth, elevation and distance)?

    Your brain uses several cues, for instance:

    A) interaural time difference - for instance, sound coming from your left will arrive first at your left ear and a little later at your right ear;

    B) interaural level difference - for instance, sound coming from your left will arrive at your left ear at a higher level than at your right ear;

    C) spectral cues - sound coming from above or below the horizontal plane that crosses your ears will have not only a fundamental frequency but a complex set of partials, and your outer ear (pinna) and your torso will alter part of those frequencies in a very peculiar pattern related to the shape of your pinna and the size of your torso;


    D) head movements - each time you make a tiny movement with your head you change the cues, and your brain tracks those changes against the head position to resolve ambiguous cues;

    E) level ratio between direct sound and reverberation - for instance, distant sources will have a lower ratio and near sources a higher ratio;

    F) visual cues - yes, visual cues and sound localization cues interact in both the long and the short term;

    G) etc.​

    A, B and C are mathematically described as a head related transfer function - HRTF.
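    To make cue A concrete, here is a rough sketch of my own (not from the lectures below): the classic Woodworth approximation for interaural time difference, assuming a rigid spherical head of roughly 8.75 cm radius. The numbers are illustrative only.

```python
import numpy as np

HEAD_RADIUS = 0.0875    # metres, rough average
SPEED_OF_SOUND = 343.0  # m/s

def itd_woodworth(azimuth_deg):
    """Approximate interaural time difference (seconds) for a source
    at the given azimuth (0 = straight ahead, 90 = hard to one side),
    using the Woodworth spherical-head model."""
    az = np.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (az + np.sin(az))

for az in (0, 30, 60, 90):
    print(f"azimuth {az:2d} deg -> ITD ~ {itd_woodworth(az) * 1e6:.0f} microseconds")
```

    At 90 degrees this gives roughly 650 microseconds, which is the order of magnitude your brain works with for horizontal localization.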

    Advanced "watch it later" - if you want to learn more about psychoacoustics, and particularly about the precedence effect and neuroplasticity, watch the following brilliant lectures from Chris Brown (MIT OpenCourseWare, Sensory Systems, Fall 2013, published in 2014):


    So you may ask which kind of recording is able to preserve such cues (the spatial information that allows a lifelike sound field to be reconstructed), or even how to synthesize them accurately.

    One possible answer to that question could be dummy head stereo recordings, made with microphone diaphragms placed where each eardrum would be in a human being (Michael Gerzon - Dummy Head Recording).

    The following video from @Chesky Records contains a state of the art binaural recording (try to listen at least until the saxophonist plays around the dummy head).

    Try to listen with loudspeakers in the very near field, at roughly +10 and -10 degrees apart, with two pillows: one in front of your nose in the median plane and the other on top of your head to avoid ceiling reflections (or get an iPad Air with stereo speakers, touch your nose to the screen and direct the loudspeaker sound towards your ears with the palms of your hands; your own head will shadow the crosstalk).


    Just post in this thread if you perceive the singer moving his head while he sings and the saxophonist walking around you.

    However, each human being has an idiosyncratic head related transfer function, and the dummy head stays fixed while your listener turns his/her head.

    What happens when you play binaural recordings through headphones?

    The cues from the dummy head HRTF and your own HRTF don't match, and as you turn your head the cues remain the same, so the 3D sound field collapses.

    What happens when you play binaural recordings through loudspeakers without a pillow or a mattress between the transducers?

    Imagine a sound source placed to the left of a dummy head in an anechoic chamber, and that, for didactic reasons, a very short pulse coming from that sound source is fired into the chamber and arrives at the dummy head diaphragms. It will arrive first at the left diaphragm and later, at a lower level, at the right diaphragm. That's it. Only two pulses are recorded, because it is an anechoic chamber with fully absorptive walls. One is intended for your left ear and the other for your right ear.

    When you play back such a pulse in your listening room, the left loudspeaker first fires the pulse into the room; it arrives first at your left eardrum, but also later and at a lower level at your right eardrum. When the right loudspeaker fires the pulse (the second arrival at the right dummy head diaphragm), it arrives first at your right ear and later, at a lower level, at your left ear.

    So you were supposed to receive just two pulses, but you end up receiving four. If you now give up the idea of a very short pulse and think about sounds, you can see that there is an acoustic crosstalk distortion intrinsically tied to loudspeaker playback.
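    A small sketch of the arithmetic behind those four pulses (my own, purely illustrative numbers; real values depend on geometry and HRTF): two recorded arrivals become four at your ears because each speaker reaches both ears.

```python
# Illustrative arrival times only; real values depend on geometry and HRTF.
ITD = 0.65e-3               # dummy-head interaural delay captured in the recording (s)
SPEAKER_TO_NEAR = 6.0e-3    # speaker to same-side ear (s), roughly a 2 m path
SPEAKER_TO_FAR  = 6.25e-3   # speaker to opposite ear (s), slightly longer path

# Anechoic binaural recording of one left-side pulse: two samples, one per channel.
left_channel_pulse_at  = 0.0
right_channel_pulse_at = ITD

# Loudspeaker playback: each channel reaches BOTH ears.
arrivals_left_ear = [
    left_channel_pulse_at  + SPEAKER_TO_NEAR,   # wanted
    right_channel_pulse_at + SPEAKER_TO_FAR,    # crosstalk
]
arrivals_right_ear = [
    right_channel_pulse_at + SPEAKER_TO_NEAR,   # wanted
    left_channel_pulse_at  + SPEAKER_TO_FAR,    # crosstalk
]
print("left ear hears pulses at (ms):",  [t * 1e3 for t in arrivals_left_ear])
print("right ear hears pulses at (ms):", [t * 1e3 for t in arrivals_right_ear])
```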

    Since the pinna filtering from the dummy head is fired into the listening room and interacts with its acoustics, there is a tonal coloration even before attempting to tackle acoustic crosstalk, which engineers try to compensate for to make such recordings more compatible with loudspeaker playback:


    To make things worse, there are early reflections from boundaries in typical listening rooms. Early reflections arrive closely enough behind the direct sound at your eardrum to confuse your brain: more "phantom pulses", i.e. distortions in the time domain. A short video explaining room acoustics:


    There is also one more variable, which is speaker directivity. One concise explanation:


    A rather long video, but Anthony Grimani, talking about room acoustics on Home Theater Geeks with Scott Wilkinson, gives a good explanation of speaker directivity (around 30:00):


    Higher or lower speaker directivity may be preferred according to your aim.

    Finally, the sum of two HRTF filterings (the dummy head's and yours) may also introduce comb filtering distortions (constructive and destructive interference of sound waves).

    With "binaural recordings to headphones" and "binaural recordings to loudspeakers" resulting no benefits, what could be an alternative?

    Try something easier than a dummy head, such as the ORTF microphone pattern:


    For didactic reasons I will avoid going deeper into diaphragm pick up directivity and the myriad of microphone placing angles that one could use in a recording like this.

    So, for the sake of simplicity, think about such an ORTF pattern, as depicted above: just two diaphragms spaced at the average width of a human head, so you can skip mixing and record direct to the final audio file.

    Spectral cues are gone. Some recording engineers place foam disks between the microphones to keep the ILD closer to what would happen with human heads.

    An example:


    What happens when you play such an "ORTF direct to audio file" recording with loudspeakers? You still have acoustic crosstalk, but you have the illusion that the sound source is between the loudspeakers and at your left.

    What happens when you play such an "ORTF direct to audio file" recording with headphones? You don't have acoustic crosstalk distortion, but the ILD and ITD cues from the microphone arrangement don't match your HRTF, and as you turn your head the cues remain the same, so the horizontal stage collapses.

    Are there more alternatives?

    Yes, there are several. Some are:

    A) Close microphones to stereo mix.

    Record each track with a microphone close to the sound source and mix all of them into two channels using ILD to place them in the horizontal soundstage (panning - pan pot).
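    For the technically minded, a minimal sketch of what a pan pot does (my own example, using the common constant-power sine/cosine law; real consoles and DAWs may use other pan laws):

```python
import numpy as np

def pan(mono, position):
    """Constant-power panning: position -1.0 = hard left, 0 = centre,
    +1.0 = hard right. Returns (left, right) channels."""
    theta = (position + 1) * np.pi / 4          # map [-1, 1] -> [0, pi/2]
    return np.cos(theta) * mono, np.sin(theta) * mono

fs = 44100
mono = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # 1 s test tone
left, right = pan(mono, -0.5)                          # place it left of centre
print(f"level ratio L/R: {np.max(np.abs(left)) / np.max(np.abs(right)):.2f}")
```

    Note that this creates only a level difference (ILD); no ITD or spectral cues are generated.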

    Reverberation from the recording venue needs to be captured in two additional tracks, with a microphone arrangement that preserves such cues, and then mixed into those two channels.

    The ILD of each instrument track (more precisely, the level that track will have in the right and left channels) and the ratio between instrument tracks and reverberation tracks are chosen by the engineer and do not necessarily match what a dummy head would register if all instruments were playing together around it during the recording.

    You still have acoustic crosstalk when playing back with speakers and soundstage will collapse with headphones.

    In this case, if the ILD/ITD levels are unnatural when using headphones, adding electronic crossfeed, as @71 dB advocates, may avoid the unnatural perception that sound sources sit only at the left driver, at the right driver, or dead center:

    How many recordings have unnatural ILD and ITD? IDK.
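    As an illustration of the crossfeed idea, here is a crude sketch of my own (not @71 dB's filter nor any commercial implementation; all parameter values are arbitrary): feed a low-passed, delayed, attenuated copy of each channel into the opposite one.

```python
import numpy as np

def simple_crossfeed(left, right, fs=44100, level_db=-8.0,
                     delay_us=300, cutoff_hz=700):
    """Very crude crossfeed: mix an attenuated, delayed, low-passed copy
    of each channel into the other. Parameter values are illustrative."""
    gain = 10 ** (level_db / 20)
    delay = int(fs * delay_us * 1e-6)
    # One-pole low-pass to roughly mimic head shadowing of the far ear.
    alpha = 1 - np.exp(-2 * np.pi * cutoff_hz / fs)
    def lowpass(x):
        y = np.zeros_like(x)
        acc = 0.0
        for i, s in enumerate(x):
            acc += alpha * (s - acc)
            y[i] = acc
        return y
    bleed_l = np.concatenate([np.zeros(delay), lowpass(left)[:len(left) - delay]])
    bleed_r = np.concatenate([np.zeros(delay), lowpass(right)[:len(right) - delay]])
    return left + gain * bleed_r, right + gain * bleed_l

# Usage: out_l, out_r = simple_crossfeed(in_l, in_r)
```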

    B) Mix multichannel with level panning.
    @pinnahertz describes some historical facts about multichannel recording here.

    Even with more channels, there is still some level of acoustic crosstalk when playing back through loudspeakers in a regular room, unless all sounds/tracks are hard-panned.

    You just can't convey proximity as one would experience it in reality (a bee flying close to your head, or the saxophonist from the Chesky recording above if the speakers are placed farther away than the saxophonist was when he was recorded...).

    C) Record with Ambisonics.

    This is interesting because the recording is made with a tetrahedral microphone (or an eigenmike, which comes as close as possible to measuring the sound field at a single point) and the spherical harmonics are decoded for playback over an arrangement of speakers around the listener.

    So with Ambisonics the spatial effect is not derived from two microphones but from at least four capsules that encode height information (The Principles of Quadraphonic Recording, Part Two: The Vertical Element, by Michael Gerzon), and the user's HRTF is applied acoustically at playback.

    There are two problems.

    The first one is that you need a decoder.

    The second one is that at high frequencies the math proves that you need too many loudspeakers to be practical:
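    A concrete way to see why (a back-of-the-envelope sketch of my own, using the standard channel count of (M+1)^2 for a periphonic system of order M and the common rule of thumb that the order must grow with the wavenumber times the radius of the listening area):

```python
import numpy as np

SPEED_OF_SOUND = 343.0
HEAD_RADIUS = 0.0875   # metres, listening area roughly the size of a head

def order_needed(freq_hz, radius=HEAD_RADIUS):
    """Rule of thumb: order M >= k*r, where k is the wavenumber, to keep
    the reconstruction accurate over a sphere of the given radius."""
    k = 2 * np.pi * freq_hz / SPEED_OF_SOUND
    return int(np.ceil(k * radius))

for f in (500, 2000, 8000, 16000):
    m = order_needed(f)
    print(f"{f:5d} Hz -> order {m:2d} -> at least {(m + 1) ** 2:4d} channels/speakers")
```

    At the top of the audio band the speaker count runs into the hundreds, which is the impracticality mentioned above.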

    Although the spherical harmonics seem more mathematically elegant, I still have not figured out how acoustic crosstalk in listening rooms - or instead auralization with headphones without adding electronic crosstalk - affects the possibility of conveying sound fields and proximity, nor whether crosstalk cancellation in high-order-ambisonics-ready listening rooms with high-directivity loudspeakers is feasible.

    Let's hope third order ambisonics, eigenmikes and clever use of psychoacoustics are good enough! See item 2.C of post #2 below, or here, to get an idea of that path.

    If ambisonics spherical harmonics decoding already solves acoustic crosstalk at low and medium frequencies, then the only potentially negative variables would be the number of channels needed for high frequencies and the listening room early reflections. That would in fact be an advantage of convolving a high density/resolution HRTF/HRIR (when you can decode to an arbitrarily higher number of playback channels) instead of an interpolated low density/resolution HRIR or BRIR (of sixteen discrete virtual sources, for instance) when binauralizing ambisonics over headphones. A current, state of the art example of High-Order-Ambisonics-to-binaural rendering DSP:

    The problem is that, currently, HRTFs are acoustically measured in anechoic chambers, a costly and time-consuming procedure:

    In the following video Professor Choueiri demonstrates high density/resolution HRTF acquisition through acoustical measurements:

    That is one of the reasons why this path may benefit from easier ways to acquire a high density/resolution HRTF, such as capturing biometrics and searching for close enough HRTFs in databases:

    Fascinating research in 3D Audio and Applied Acoustics (3D3A) Laboratory at Princeton University:


    D) Wave field synthesis.

    This one is also interesting, but complex as the transducers tend to infinity (just kidding, but there are more transducers). You will need to find details somewhere else, like here. And it is obviously costly!

    E) Pure object based.

    Record each sound source at its own track and don’t mix them before distribution. Just tag them with metadata describing their coordinates.

    Let the digital player at the listening room mix all tracks considering the measured high density/resolution HRTF of the listener (or lower density/resolution with better interpolation algorithms) and room modeling to calculate room reflections and reverberation (“Accurate real-time early reflections and reverb calculations based on user-controlled room geometry and a wide range of wall materials” from bacch-dsp binaural synthesis).

    Play back with crosstalk cancellation (or binaural beamforming with a horizontal array of transducers such as the Yarra sound bar; more details later in this post), or use headphones with head tracking without adding electronic crossfeed.

    This is perfect for the realm of virtual environments for video games, in which the user interacts with the graphical and narrative context, determining the future states of sound objects.

    What are the problems with a pure object based approach? It is costly and time consuming to measure the HRTF of the listener. It is computationally intensive to mix those tracks and calculate room reflections and reverberation. You just can't calculate complex rooms, so you miss the acoustic signature of really unique venues.

    Atmos and other hybrid multichannel object based codecs also use beds to preserve some cues. But such beds, and the panning of objects between speakers, also introduce the distortions from the chains mentioned before (unless they also rely on spherical harmonics computation?).

    Going the DSP brute force route to binauralization and to cancelling (or avoiding) crosstalk


    Before we start talking about DSP, you may want to grasp how it works mathematically, and one fundamental concept for that is the Fourier transform. I haven't found better explanations than the ones made by Grant Sanderson (YouTube channel 3blue1brown):
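    A tiny sketch of the discrete Fourier transform itself (my own, straight from the textbook definition), just to connect the videos to code; in practice you would call numpy's FFT directly.

```python
import numpy as np

def dft(x):
    """Naive discrete Fourier transform, straight from the definition:
    X[k] = sum_n x[n] * exp(-2j*pi*k*n/N). O(N^2), for illustration only."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    return np.exp(-2j * np.pi * k * n / N) @ x

# Sanity check against the fast implementation.
x = np.random.randn(256)
assert np.allclose(dft(x), np.fft.fft(x))
```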


    A. The crosstalk cancellation route of binaural masters

    So what does Professor Edgar Choueiri advocate?

    Use binaural recordings and play them back with his BACCH crosstalk cancellation algorithm (his processor also measures a binaural room impulse response to enhance the filter, and uses head tracking and interpolation to relieve the head movement range restrictions one would otherwise have).
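    The core idea of any crosstalk canceller, shown in a generic textbook sketch of my own (this is not the actual BACCH filter, which adds careful regularization, BRIR measurement, head tracking and much more): per frequency, invert the 2x2 matrix of speaker-to-ear transfer functions so that each recorded channel reaches only its intended ear.

```python
import numpy as np

def crosstalk_canceller(H_ll, H_lr, H_rl, H_rr, reg=1e-3):
    """Per-frequency 2x2 inversion. H_xy = transfer function from speaker x
    to ear y (arrays over frequency bins). Returns the filter matrix C such
    that H @ C ~ identity. 'reg' is a crude regularization to avoid blowing
    up at ill-conditioned frequencies."""
    n_bins = len(H_ll)
    C = np.zeros((n_bins, 2, 2), dtype=complex)
    for i in range(n_bins):
        H = np.array([[H_ll[i], H_rl[i]],
                      [H_lr[i], H_rr[i]]])
        # Regularized inverse: (H^H H + reg I)^-1 H^H
        C[i] = np.linalg.solve(H.conj().T @ H + reg * np.eye(2), H.conj().T)
    return C

# Toy example: symmetric setup where crosstalk is half the direct path.
bins = 8
direct = np.ones(bins, dtype=complex)
cross = 0.5 * np.ones(bins, dtype=complex)
C = crosstalk_canceller(direct, cross, cross, direct)
print(np.round(C[0], 3))   # the cancelling matrix at the first bin
```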

    Before we continue with the BACCH filter, a few notes about room impulse responses and a brief return to speaker directivity.

    There is a room impulse response for each (length x width x height) coordinate of a given room. An RIR can be measured by playing a chirp sweep from 20 Hz to 20 kHz from a source at a given coordinate; a microphone captures the early reflections and reverberation at another given coordinate. Change those source and microphone coordinates/spots and the RIR is going to be different. Room enhancement DSPs use the RIR to compute equalization for a given listening spot (Digital Room Equalisation - Michael Gerzon), but if you want an even bass response across more listening spots then you probably need more subwoofers (Subwoofers: optimum number and locations - Todd Welti). A BRIR is also dependent on the coordinates at which it is measured (and looking angles!), but instead you measure with two microphones at the same time at the ear canal entrances of a human head or dummy head. That is one of the reasons why you need to capture one BRIR for each listening spot where you want the crosstalk cancellation filter to work. Such a BRIR integrates not only the combined acoustic signature of loudspeakers and room, but also the HRTF of the dummy head or the human wearing the microphones.
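    A sketch of the sweep-deconvolution idea behind RIR measurement (my own simplified toy code, not Smyth's, Gerzon's or BACCH's procedure; real systems typically use exponential sweeps and careful windowing): play a sweep, record it through the "room", and deconvolve in the frequency domain to recover the impulse response.

```python
import numpy as np

fs = 48000
T = 2.0                                   # sweep length in seconds
t = np.arange(int(fs * T)) / fs
# Linear sine sweep 20 Hz -> 20 kHz (exponential sweeps are more common
# in practice, but the deconvolution idea is the same).
f0, f1 = 20.0, 20000.0
sweep = np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * T)))

# Pretend "room": a direct sound plus two reflections (made-up values).
room_ir = np.zeros(int(0.1 * fs))
room_ir[0] = 1.0
room_ir[int(0.004 * fs)] = 0.4            # reflection after 4 ms
room_ir[int(0.011 * fs)] = 0.2            # reflection after 11 ms

recorded = np.convolve(sweep, room_ir)    # what the microphone would capture

# Deconvolve in the frequency domain to recover the impulse response.
n = len(recorded)
estimated_ir = np.fft.irfft(np.fft.rfft(recorded, n) /
                            (np.fft.rfft(sweep, n) + 1e-9), n)
print("recovered peaks near (ms):",
      np.sort(np.argsort(np.abs(estimated_ir[:int(0.02 * fs)]))[-3:]) / fs * 1e3)
```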

    Here a speaker with higher directivity may improve the performance of the algorithm.

    The dummy head HRTF used in the binaural recording and your own HRTF don't match, but the interaction between speakers/room and your head and torso "sums in" (filters with) your own HRTF.

    The “sum” (combined filtering) of two HRTF filterings may also introduce distortions (maybe negligible unless you want absolute localisation/spatial precision?).

    Stereo recordings with natural ILD and ITD, like the ORTF example described above, render an acceptable 180 degree horizontal soundstage. Read about the concept of proximity in the BACCH Q&A Professor Choueiri keeps on the Princeton 3D3A website.

    Must-watch videos of Professor Choueiri explaining crosstalk, his crosstalk cancellation filter and his flagship product:

    Professor Choueiri explaining sound cues, binaural synthesis, headphone reproduction, ambisonics, wave field synthesis, among other concepts:


    B. The crosstalk avoidance binauralization route


    B.1 The crosstalk avoidance binauralization with headphones

    And what can you do with DSPs like the Smyth Research Realiser A16?

    Such processing circumvents the difficulty of acquiring an HRTF (or head related impulse response - HRIR) by measuring the user's binaural room impulse responses - BRIR (also called a personal room impulse response - PRIR, when you want to emphasize that it is the unique BRIR of the listener), which also include the playback room's acoustic signature.

    Before we continue with the Realiser processor, a few notes about personalization of BRIRs. The Smyth Research Exchange site will allow you to use your PRIR, made with a single tweeter, to personalize BRIRs made by other users in rooms you may be interested in acquiring. The performance will then be better than just using a BRIR that may poorly match yours.

    So after you measure a PRIR, the Realiser processor convolves the inputs with that PRIR, applies a filter to take out the effects of wearing the headphones you have chosen, adds electronic/digital crossfeed and dynamically adjusts the cues (head tracking plus interpolation) during headphone playback to emulate/mimic virtual speakers as you would hear them in the measured room, with the measured speaker(s), at the measured coordinates. Bad room acoustics will result in bad acoustics in the emulation.
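    A bare-bones sketch of the convolution step only (my own toy code; the Realiser does far more, as described above): convolve each input channel with the left-ear and right-ear impulse responses measured for the corresponding virtual speaker, then sum into a binaural headphone feed.

```python
import numpy as np

def virtualize(channels, brirs):
    """channels: dict name -> mono signal.
    brirs: dict name -> (left_ear_ir, right_ear_ir) measured for that
    virtual speaker position (a PRIR when measured on the listener).
    Returns the binaural (left, right) headphone feed."""
    length = max(len(x) + len(brirs[name][0]) - 1 for name, x in channels.items())
    out_l = np.zeros(length)
    out_r = np.zeros(length)
    for name, x in channels.items():
        ir_l, ir_r = brirs[name]
        yl = np.convolve(x, ir_l)
        yr = np.convolve(x, ir_r)
        out_l[:len(yl)] += yl
        out_r[:len(yr)] += yr
    return out_l, out_r

# Toy usage with made-up 3-tap "impulse responses".
sig = {"front_left": np.random.randn(1000), "front_right": np.random.randn(1000)}
ir = {"front_left": (np.array([1.0, 0.3, 0.1]), np.array([0.5, 0.4, 0.2])),
      "front_right": (np.array([0.5, 0.4, 0.2]), np.array([1.0, 0.3, 0.1]))}
headphone_l, headphone_r = virtualize(sig, ir)
```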

    But you can avoid the addition of electronic/digital crossfeed, to emulate what beamforming or a crosstalk cancellation algorithm would do with real speakers (see also here and here). This feature is interesting for playing back binaural recordings, particularly those made with microphones in your own ears (or here).

    The Realiser A16 also allows equalization in the frequency and time domains (the latter is very useful to tame bass overhang).

    Add tactile/haptic transducers and you can feel bone-conducted bass that is not affected by the acoustics of your listening room. Note also that the power required to drive the headphone cavity is lower than the power required to drive your listening room (thanks to the brilliant @JimL11 for elaborating the power concept here), and that the intermodulation distortion characteristics of the speaker amplifier may then be replaced by those of the headphone amplifier (potentially lower IMD).

    In the following (must hear) podcast interview (in English), Stephen Smyth explains concepts of acoustics, psychoacoustics and the features and compromises of the Realiser A16, like bass management, PRIR measurement, personalization of BRIRs, etc.

    He also goes further and describes the lack of an absolute neutral reference for headphones and the convenience of virtualizing a room with state of the art acoustics, for instance "A New Laboratory for Evaluating Multichannel Audio Components and Systems at R&D Group, Harman International Industries Inc.", with your own PRIR and an HPEQ counter-filtering your own headphones (@Tyll Hertsens: a method that personalizes the idiosyncratic room/pinnae and pinnae/headphones coupling/filtering and keeps the acoustic basis of the Harman listening target curve).

    Extra interview at CanJam SoCal 2018 (by @kp297):


    Stephen Smyth at Canjam SoCal 2018 introduces the Realiser A16 (thanks to innerfidelity hosted by @Tyll Hertsens):


    Stephen Smyth once again, but now introducing the legacy Realiser A8:


    Is there a caveat? A small one, but yes: visual cues and sound cues interact, and there is neuroplasticity:


    Kaushik Sunder, talking about immersive sound on Home Theater Geeks with Scott Wilkinson, mentions how we learn from childhood to analyze our very own HRTF, and that this analysis is a constant learning process as we grow older and our pinnae get larger. He also mentions the short term effects of neuroplasticity at 32:00:


    Let's hope Smyth Research can integrate the Realiser A16 with virtual reality headsets displaying stereoscopic photographs of the measured rooms, for visual training purposes and virtual reality.

    B.2 The crosstalk avoidance binauralization with a phased array of transducers

    So what is binaural beamforming?

    It is not crosstalk cancellation but the clever use of a horizontal phased array of transducers to control sound directivity, resulting in a similar effect.
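    A rough sketch of the delay-and-sum principle behind any phased array (a generic example of my own, not the Yarra's actual algorithm): delay each driver's feed so that the wavefronts add up in the chosen direction.

```python
import numpy as np

SPEED_OF_SOUND = 343.0

def delay_and_sum_feeds(signal, n_drivers=12, spacing=0.025,
                        steer_deg=30.0, fs=48000):
    """Return one delayed copy of `signal` per driver so that a linear
    array of `n_drivers` spaced `spacing` metres apart beams towards
    `steer_deg` (0 = straight ahead). Integer-sample delays only."""
    positions = (np.arange(n_drivers) - (n_drivers - 1) / 2) * spacing
    delays = positions * np.sin(np.radians(steer_deg)) / SPEED_OF_SOUND
    delays -= delays.min()                      # keep all delays non-negative
    feeds = []
    for d in delays:
        shift = int(round(d * fs))
        feeds.append(np.concatenate([np.zeros(shift), signal]))
    return feeds

feeds = delay_and_sum_feeds(np.random.randn(4800))
print([len(f) for f in feeds])   # each driver feed, progressively delayed
```

    In a binaural beamformer, one such beam carries the left-ear signal towards the left ear and another carries the right-ear signal towards the right ear, which is why the effect resembles crosstalk cancellation.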

    One interesting description of binaural beamforming (continuing from @sander99's post above):


    Peter Otto interview about binaural beamforming at Home Theater Geek by Scott Wilkinson:


    Unfortunately the Yarra product does not offer a method to acquire a PRIR. So you can enjoy binaural recordings and stereo recordings with natural ILD and ITD. But you will not experience precise localization without inserting some personalized PRIR or HRTF.

    Living in a world with (or without) crosstalk

    If you want to think about the interactions between the way the content is recorded and the playback rig and environment, and read another very long post, visit this thread: To crossfeed or not to crossfeed? That is the question...

    Question 2

    I find this question somehow harder.

    You already know that mixing engineers place sound sources between speakers using level differences. Some may also use ITD in conjunction, and that helps produce a better rendering when you use binaural beamforming or crosstalk cancellation with loudspeaker playback.

    Recordings with coincident microphones direct to the final audio file incorporate early reflections and reverberation in a given venue spot.

    Instruments recorded with close microphones in several tracks, and then mixed with two other tracks or with digital processing that incorporates the early reflections and reverberation of a given venue spot, may theoretically give a sense of depth over two-loudspeaker playback. But what is the level ratio between those tracks?

    Can you mix them keeping the same ratio one would have with a binaural recording alone?

    But not treating early reflections and bass acoustic problems in the playback room may also nullify the intelligibility of whatever depth you could theoretically convey.

    I am curious to know how the processors that increase soundstage depth, which @bigshot mentioned here, work. An example would be the Illusonic Immersive Audio Processor from Switzerland (the company also provides the upmixing algorithm in the Realiser A16), which extracts the direct sound, early reflections and diffuse field in the original content and plays them back separately over different loudspeakers:
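    I do not know Illusonic's actual algorithm, but a very crude way to see the idea of splitting a stereo signal into a correlated (direct) part and a decorrelated (ambient) part is the mid/side-style split below (my own toy sketch; real upmixers do this adaptively, per frequency band, with much more sophisticated weighting).

```python
import numpy as np

def crude_direct_ambience_split(left, right):
    """Toy decomposition: treat what the channels share as 'direct' and
    what differs as 'ambience'. Real upmixers do this adaptively per band."""
    direct = 0.5 * (left + right)      # correlated content
    ambience_l = left - direct         # residual in each channel
    ambience_r = right - direct
    return direct, ambience_l, ambience_r

# Usage: send `direct` to the front speakers and the ambience residuals to
# the surrounds, possibly delayed and decorrelated further.
l = np.random.randn(1000)
r = 0.8 * l + 0.2 * np.random.randn(1000)   # mostly correlated test signal
d, amb_l, amb_r = crude_direct_ambience_split(l, r)
```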


    Anyway, without a binaural recording and crosstalk cancellation (or binaural beamforming), playing back plain vanilla stereo recordings, you could only hear an impression (and probably a wrong one) of elevation by chance, in other words, if some distortion downstream is similar to your very own pinna spectral filtering. With multichannel you can try the method @bigshot describes here.

    Going further in the sound science forum.

    The following threads are also illustrative of concepts involved in the questions raised:

    A) Accuracy is subjective (importance of how content is produced);
    B) How do we hear height in a recording with earphones (the role of spectral cues);
    C) Are binaural recordings higher quality than normal ones? (someone knowledgeable recommended books you want to read);
    D) About SQ (rooms and speakers variations)
    E) A layman multimedia guide to Immersive Sound for the technically minded (Immersive Audio and Holophony)
    F) The DSP Rolling & How-To Thread (excellent thread about DSP started and edited by @Strangelove424)

    To read more about the topics:

    P.s.: I cannot thank @pinnahertz enough for all the knowledge he shares!
     
    Last edited: May 6, 2018
    sonitus mirus and Arpiben like this.
  9. bigshot
    The problem with the word "distortion" is that the word has a pejorative connotation. But distortion just means that the original has been altered. An audio mix is *by definition* different than the original sound being recorded. The idea is to create a hyper spatial reality that is *better* than reality. A sound mixer makes blends of sounds clearer. He places them in an organized soundstage. He modifies EQ and distance cues to give the sound depth. All of this creates the intended sound which is quite different than the original sound. Listen to a single mike pointed at a performance and then listen to a professionally miked and mixed recording of that performance. It's apples and oranges and the non-distorted one isn't the one that sounds the best.
     
    jgazal likes this.
  10. Strangelove424
    What is the correct sound stage by definition? If it's the exact sound stage the engineer heard, I'd need his exact speakers, room, and wall treatment. Once I am willing to liberate myself from that artificial and quite possibly inferior ideal of sound stage, what then is correct? I can appreciate all the 'solutions' (or even the lack of 'solutions') for sound stage issues in headphones. The Smyth Realizer is expensive and complicated, but pretty neat for creating binaural cues w/ head movement. Dolby headphone is good at establishing more spatial dimension, but it lacks binaural interaction via head movement. Crossfeed can occasionally help with awkward levels of channel separation, but is not a panacea. And the people who like their music on the rocks are correct in their own way too, because they're listening for detail and tone, not spatial dimension, and value consistency in presentation above all. The problem is when somebody comes along and wants to claim they have the "correct" version. Everyone's been wrong. Finally they got it right. I dunno, that goads me a bit.
     
    Last edited: Nov 16, 2017
  11. SilverEars
    Interaction via head movement isn't important to me. Which Dolby headphone are you referring to? Or are you referring to Dolby Atmos software processing on Windows?
     
  12. ev13wt
    Simply put, it's a DSP effect. If you like it, enjoy it. It is not the "original" recording anymore. It probably just creates some hall effect or something.
    Using an EQ to adjust to "taste" would be the same thing.

    What I don't like is the phoolery on the website. But hey. We have power cables for 15000 - cut the guys some slack.


    If you want to play around with DSP, effects and such, simply get a trial version of a DAW, add some free plugins and play with it all.
     
  13. Strangelove424
    Dolby headphone for stereo, DH1 (reference room) or DH2 (lively room), but personally don't use DH3 (large hall). Not from Windows, my sound card is a swiss army knife.
     
  14. bigshot
    Interaction with head movement is one of the primary reasons that speakers are better than headphones. It's a big part of how we perceive depth and distance. Unfortunately, synthesizing that with headphones is hideously expensive. It's cheaper to just do speakers and get the real thing. And DSPs work a lot better with multichannel rigs than they do two channel.
     
    Last edited: Nov 16, 2017
  15. 71 dB
    My definition of "spatial distortion" is the same one Linkwitz used to have in the 70's [1]. Later he seems to have developed the definition into something I would call "soundstage distortion" [2]. Avoiding "soundstage distortion" is in my opinion almost hopeless and extremely demanding. Avoiding spatial distortion is very easy in comparison; simple crossfeed is enough.

    So, crossfeed limits excessive stereo separation with headphones, which is imo a much, much bigger problem than "soundstage distortion". Soundstage distortion means, for example, that a guitarist plays a few feet too close or too far, and maybe 8 degrees too far to the left, compared to what was intended. How do I even know where the guitarist should be?



    [1] http://www.johncon.com/john/SSheadphoneAmp/
    [2] http://www.linkwitzlab.com/frontiers_6.htm
     
