bigshot
You just got more honesty than anyone else will probably give you, and what is your response? more self pitying tears.
@bigshot I didn't see your post until I'd already posted. I could go into recording/mixing a church organ in considerably more detail but that's off topic.
(...)
G
[1] Why is it so important for you to prove that I know nothing? [2] 90° out of phase is what I mean by more information ...
[3] I stopped arguing.
[1] How was I supposed to know that you have to be a sound engineer to know this stuff? I got myself the degree to know this stuff, [2a] but I was clearly deceived.
Since my education, knowledge, understanding, attitude and working history have all been labeled as no good ...
[1] In my opinion classical music worked usually well. [2] I concluded that's because the spatial information is complex and "random", while "studio music" has problems, [2a] because the spatial effects are largely simpler such as amplitude panning, [2b] too aggressive to behave well with matrix decoding.
Do you think that my experience here improves self-esteem? It does not.
If you think it is more appropriate to start another thread about recording and mixing, please tell me and it will be an honor to kickstart the question.
[1] It really surprised me to find a 70s album that decoded perfectly. It makes me think that somewhere there's a discrete 5.1 mix of that album that they haven't released.
1. Huh? Kettle, pot, black! You are the one who started and then kept repeating that you are smarter and well educated and that sound engineers are "ignorant", "idiots", etc. All I'm doing is refuting this claim/nonsense and also refuting the claims/nonsense you've made up about scientific proof/justification that your crossfeed is better, more spatially accurate, etc. This is the science forum and debunking fake or pseudo science is one of the reasons this forum exists!
2. But that's nonsense, 90° out of phase is not more information, it's exactly the same information just 90° out of phase. The story doesn't end there though, because with ProLogic that 90° out of phase information is highly band limited (100Hz-7kHz if I remember correctly), so we've actually got less information, which is the exact opposite of your claim! This is a FACT of how Dolby's ProLogic actually works and I know this for 4 reasons: 1. That's what the Dolby engineers told me when they installed and calibrated their professional ProLogic encoder and decoder units in my studio! 2. It was stated in the manual. 3. If anyone knows how Dolby ProLogic works, surely it would be Dolby themselves? and 4. It's since been publicly published and I know of no reliable data/information which disputes this fact. This is the science forum, not the "I'm just going to make up any old nonsense and call everyone else ignorant" forum!
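The point above can be sketched in a few lines of code. This is a hedged, textbook-style illustration of 4:2 matrix encoding, not Dolby's exact implementation (real encoders add noise reduction and other processing, and all signal values here are made up): the surround channel is band-limited to roughly 100 Hz-7 kHz and phase-shifted 90° before being folded into the two transmitted channels. The phase shift adds no information, and the band-limiting actually removes some.

```python
# Hedged sketch of 4:2 matrix encoding (LCRS -> Lt/Rt), assumed parameters.
import numpy as np
from scipy.signal import hilbert, butter, sosfilt

fs = 48000
t = np.arange(fs) / fs
# Four toy source channels (arbitrary test tones, not real program material).
L = np.sin(2 * np.pi * 440 * t)
R = np.sin(2 * np.pi * 550 * t)
C = np.sin(2 * np.pi * 330 * t)
S = np.sin(2 * np.pi * 660 * t)

# Band-limit the surround (approx. 100 Hz - 7 kHz, per the post above).
sos = butter(4, [100, 7000], btype='bandpass', fs=fs, output='sos')
S_bl = sosfilt(sos, S)

# 90-degree phase shift: imaginary part of the analytic signal.
S_shift = np.imag(hilbert(S_bl))

k = 1 / np.sqrt(2)
Lt = L + k * C - k * S_shift   # surround folded in at -90 degrees
Rt = R + k * C + k * S_shift   # surround folded in at +90 degrees
```

Because the surround enters the two channels with opposite signs, it cancels from the sum (Lt+Rt) and can be recovered from the difference, which is the basic decoding idea; the 90° shift is the same waveform, just shifted.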
Hmmm...well, "a lot" might be a bit of an exaggeration. I might have a blind spot here, but the Hafler method of extracting 4 channels from 2 was a form of matrix "decoding" only; there was no encoder. There were two primary 4-channel matrix encoding systems, QS (Sansui) and SQ (Columbia/Sony), and they were not fully compatible with each other: SQ used steering logic with enhanced adjacent-channel separation, QS did not. The Hafler circuit matched neither, but managed to (cheaply!) extract some Lr and Rr information.

After reading @bigshot's post about how he engages DSP decoding to discover whether given content was encoded with matrixed surround, I am inclined to believe that content variability can actually be somehow beneficial for the industry.
I remember seeing enthusiasts searching for quadraphonic decoders and compatible content.
I was surprised to hear from @bigshot that Progressive rock and New Wave music used the Hafler sum-and-difference matrix system a lot.
They'd have little idea of what is "compatible" other than by product labeling, but they may choose a preference for any number of reasons. Compatibility would require a known reference, which they won't have.

Perhaps that kind of "mining" is itself a playful activity. Well, perhaps only for a tiny number of enthusiasts...
So if hardware allows consumers to optionally engage a set of different decoders, enthusiasts will try to find the most compatible one.
Sure, but where do you draw the line in the combinations of recording and playback settings? The permutations boggle the mind! The main difference here is that ProLogic (music setting), DTS Neo, and the Hafler circuit are examples of sound enhancers. ProLogic (Cinema) is not an enhancer, it's a very precisely defined decoder of an encoded format. Cross-feed is none of that. It's an attempt at correction of a perceived problem that can neither be uniformly quantified, nor find universal desirability. It's not decoding anything, and it's not enhancing anything. It's specific without actually being designed to be specific. Still, there's nothing wrong with experimenting with all of it, including all the others not mentioned. But there's clearly only a "right" setting if it's an actual decoder feeding the correct speaker plan.

For instance, one could try crosstalk cancellation, Hafler decoding (as @Erik Garci mentioned), Dolby Pro Logic, DTS Neo, etc.
Following that reasoning, I think the way you record and mix a church organ is not that far off topic where crossfeed is concerned, since crossfeed is also a user setting for playback.
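Since crossfeed keeps coming up as a playback setting, here is a minimal sketch of the idea. The delay, gain, and cutoff values below are illustrative assumptions, not any particular commercial design: each channel receives a slightly delayed, attenuated, low-passed copy of the opposite channel, loosely imitating what a speaker pair does acoustically.

```python
# Hedged sketch of a minimal crossfeed; all parameter values are assumptions.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000
delay = int(0.0003 * fs)    # ~0.3 ms interaural-style delay (assumed value)
gain = 10 ** (-4.5 / 20)    # ~-4.5 dB crossfeed level (assumed value)
# Gentle low-pass, loosely imitating head shadowing (assumed cutoff).
sos = butter(2, 700, btype='lowpass', fs=fs, output='sos')

def crossfeed(left, right):
    """Mix a delayed, low-passed, attenuated copy of each channel into the other."""
    def bleed(x):
        y = gain * sosfilt(sos, x)
        return np.concatenate([np.zeros(delay), y[:-delay]]) if delay else y
    return left + bleed(right), right + bleed(left)
```

Whether any particular set of values sounds "right" is exactly the preference question being argued in this thread; there is no reference to decode against.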
The only real "wrong" is the one that is stated to be absolutely right for everyone on all music, and is not a specific decoder. Otherwise, there's a wide range of preference.

I know that your description may not allow one to conclude which playback setting is more suitable, but I think it is something very interesting to describe in the Sound Science forum.
If we, the consumers, adopt a wrong setting, I am sure you, the professionals, will be able to dissuade us from such a path with sensible arguments.
So I would really like to hear from you.
Just a few date corrections. Technically, the first 5.1 mix was Star Wars, 1977, the 70mm 6-track mag theatrical release in Dolby's "Baby Boom" format, where screen channels 2 and 4 became essentially LFE, leaving LCRS for full bandwidth. That format stuck, and many films were mixed that way, though release was only possible on 70mm 6-track magnetic. Digital sound on film began in 1992 with Batman Returns in 5.1, 35mm. The tracks are optical, and Dolby was shortly joined by Sony SDDS and DTS, the latter using only an optical time code to sync an external CDR with the actual track on it.

1. That's not possible. It might have decoded in a way that suits your personal preferences, there might even be a somewhat more objective explanation due to coincidental/lucky phase relationships on the recording, your personal speaker setup/room might be a factor, or it could be any combination of these factors. However, it can't be that there's some original 5.1 mix out there, because there was no 5.1 in the 1970's. There was 6-channel sound much earlier (see the Todd-AO process); in fact 2001: A Space Odyssey was originally mixed in this format, but it was not 5.1, it was 5 front speaker channels and a surround (no LFE or split surrounds). The first 5.1 mix as we would recognise it was Apocalypse Now in 1979, but it was only possible using the 6 discrete channels available on 70mm film; there was no stereo matrixing involved (or possible). It's not until 1992 that 5.1 became possible with only two audio tracks, and even then only by using digital technology and the optical tracks on 35mm film. Notice that this is all about film sound; music studios did not have this technology and as far as I'm aware were not even allowed to have it! If there is a 5.1 master out there, it almost certainly dates from 2000 or later.
G
Great explanation. It gets even more interesting when you consider the Hafler PRIR. Basically you hear the sum from the center-front speaker (L+R to both ears), and you hear the differences from the center-back speaker (L-R to left ear, and R-L to right ear).
I made a Hafler PRIR where the center-back speaker was actually measured in front, but I turned my head in the opposite direction. I looked right instead of left and looked left instead of right. This way, the center-back speaker has the same spectral balance as the center-front speaker, and head-tracking helps me distinguish which sounds are from the front versus the back. Maybe Smyth can add a Hafler mode that works for any PRIR that has a center speaker or a closely-spaced pair.
Some live recordings sound great with crowd noise and hall reverb that come from the back. Recordings that were matrix-encoded sound great as well, and you might not realize which ones until you listen to them this way.
In addition, for 4.0 or 5.1 recordings, the effect can be flipped for the two discrete surround channels, Ls and Rs. Basically you hear the sum from the center-back speaker (Ls+Rs), and you hear the differences from the center-front speaker (Ls-Rs to left ear, and Rs-Ls to right ear).
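The sum/difference routing described in the last few paragraphs can be sketched as follows. This is a hedged illustration of the passive Hafler-style derivation only, not Smyth's PRIR processing:

```python
# Hedged sketch: passive Hafler-style sum/difference derivation from stereo.
import numpy as np

def hafler_derive(L, R):
    """Centre-front gets the sum (L+R to both ears); centre-back gets the
    differences (L-R toward the left ear, R-L toward the right ear)."""
    front = L + R
    back_left = L - R
    back_right = R - L
    return front, back_left, back_right
```

A mono (in-phase) signal vanishes from the back feeds, while out-of-phase content such as hall reverb, crowd noise, or matrix-encoded surround vanishes from the front and is heard from behind, which is why those live recordings work so well.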
2. My university didn't offer courses focusing on sound engineering. Had such courses existed, I would have most probably taken them. That's why I assumed everything sound engineers need to know is incorporated into existing courses.

2. No, you did not get yourself a degree to know this stuff! By your own admission you got a degree in electrical engineering and acoustics, which is NOT "this stuff". "This stuff" is created by artists/sound engineers, NOT by electrical engineers and acousticians, and the defining feature of "this stuff" is art, NOT electrical engineering or acoustics. If you really wanted to know about "this stuff" then you took the wrong course!
2a. No, you've done that all by yourself! You took a degree in electrical engineering and acoustics and (presumably) you got exactly what was promised? The mistake you've made (and continue to make despite it being explained to you repeatedly!) is in believing that commercial music recordings are defined by the science of acoustics, when in fact they are not and have not been for many decades. But of course you don't know any of this, you only know electrical engineering, acoustics and what you perceive, you DON'T know the facts, development or history of music recording because that's NOT what you studied at university!! Worse still, much of what you're stating/claiming has nothing to do with either the course you studied or sound engineering, you've just made it up! This nonsense about 90° out of phase having more information is just one of many examples which you would NOT have been taught at university regardless of which course you took!
You just keep repeating the same nonsense over and over again. No one, NO ONE (!) has stated or is stating your education and knowledge is "no good", we're stating that it's just inapplicable in this case (the case of commercial music, film & TV sound). Your "understanding" is therefore highly flawed because you are applying your education and knowledge incorrectly/inappropriately. And again, you go much further than this because what you call your "education" and "science" are quite often nothing of the sort, it's just stuff you made up which seems to correlate with your perception but is actually contrary to the facts and science!
1. Yes, my opinion and my conclusion. I don't say it's scientific.

Two points with this quote:
1. You clearly state "My opinion" and "I concluded". So it's YOUR opinion and YOUR conclusion but it's NOT science's opinion and conclusion or what you were taught at university! This point is blatantly obvious, so why can't you see it? Why do you repeatedly keep representing your opinion and your conclusion as facts, science and taught to you at uni?
2. Your conclusion contradicts the actual facts! You refuse to accept this though because you're basing your conclusion on what you know about acoustics, while completely ignoring or misunderstanding how classical and "studio music" recordings are actually created. For example:
2a. This is so incorrect, it's actually the exact opposite of the facts! The spatial effects are actually much more complex in "studio music" (non-acoustic genres). Acoustic genre recordings, such as traditional classical music, have the "complex and random" spatial information of a single acoustic venue, a concert hall for example. Studio music on the other hand has the "complex and random" spatial information of several different acoustic venues, plus amplitude panning, plus the "complex and random" spatial information added with each artificial reverb effect (of which there can be several different ones), plus numerous other time based effects which directly or indirectly cause spatial information.
2b. No, you clearly do not understand the practicalities of working with ProLogic and matrix encoding/decoding, or the vector "snapping" consequences of the technology, but this isn't the thread to go into such details.
G
Edit:
This forum does not exist to improve your self-esteem.
Yeah, 5.1 is more marketable than 5.006.

The term "5.1" was first suggested at a SMPTE conference regarding multichannel audio. The ".1" is technically incorrect, but was suggested in the context of "creative rounding for marketing".
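For what it's worth, the "5.006" joke is easy to reconstruct. Assuming the LFE channel carries roughly 120 Hz of bandwidth against a nominal 20 kHz full-range channel (both figures are illustrative assumptions), the honest channel count would be:

```python
# Reconstructing the "5.006" joke; both bandwidth figures are assumptions.
lfe_bandwidth = 120        # Hz, a typical LFE low-pass point (assumed)
full_bandwidth = 20_000    # Hz, nominal full-range channel (assumed)
channels = 5 + lfe_bandwidth / full_bandwidth
print(round(channels, 3))  # 5.006
```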
That's not possible. It might have decoded in a way that suits your personal preferences, there might even be a somewhat more objective explanation due to coincidental/lucky phase relationships on the recording, your personal speaker setup/room might be a factor, or it could be any combination of these factors. However, it can't be that there's some original 5.1 mix out there, because there was no 5.1 in the 1970's.
[1] Just a few date corrections. Technically, the first 5.1 mix was Star Wars, 1977, the 70mm 6-track mag theatrical release in Dolby's "Baby Boom" format, where screen channels 2 and 4 became essentially LFE, leaving LCRS for full bandwidth. [2] That format stuck, and many films were mixed that way, though release was only possible on 70mm 6-track magnetic. [3] Digital sound on film began in 1992 with Batman Returns in 5.1, 35mm. The tracks are optical, and Dolby was shortly joined by [4] Sony SDDS and DTS, the latter using only an optical time code to sync an external CDR with the actual track on it.
[5] 5.1 music was actually available to the consumer in 1997 in DTS audio format ...
2. My university didn't offer courses focusing on sound engineering. Had such courses existed, I would have most probably taken them. That's why I assumed everything sound engineers need to know is incorporated into existing courses.
2a. So, music recording HAS been defined by the science of acoustics in the distant past? What happened?
[3] This information comes to me as a shock ...
1. Yes, my opinion and my conclusion. I don't say it's scientific.
2. Isn't it funny how all my conclusions contradict facts? According to you, anyway. How do I do it? A person who knows nothing contradicts the facts statistically 50% of the time.
2a. By complexity I mean the nature of the effects. Natural acoustics can't create sharp patterns the way a studio effect plugin can. [2aa] Amplitude panning is "simplicity" which is there despite complexity elsewhere, but I suppose all this is wrong, because I didn't take the right courses.
2b. Yeah, maybe not. I just use them to listen to music.