Why 24 bit audio and anything over 48k is not only worthless, but bad for music.
Oct 25, 2017 at 4:14 PM Post #2,311 of 3,525
1. Better soundstage, particularly for vector reproduction. Better articulation.
2. This is about inferring complex sound, not sine waves. [2b] If you have any scientific papers on complex sound, please share.
[3] Until people have some form of proof for vector sound and the complex timing issues, obtained without using sine waves...

1. Then no, more bits and/or higher sample rate make absolutely no difference whatsoever.
2. Huh? Complex sound is sine waves! What do you think "complex sound" is, if it's made of something other than sine waves?
2b. Sure, how about starting with the paper which mathematically proved the sampling theorem and enabled digital audio to exist in the first place: "A Mathematical Theory of Communication" - Claude Shannon, 1948.
3. No idea what you mean by "vector sound" but what do you suggest we use instead of sine waves? Before you answer, bear in mind that speakers and headphones can only reproduce sine waves and human ears only respond to sine waves.
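To make the "complex sound is sine waves" point concrete, here is a minimal Python/numpy sketch (the three frequencies, amplitudes and phases are arbitrary illustrative choices, not anything from a real recording) showing that a compound waveform is nothing more than the sum of sine components, which an FFT recovers exactly:

```python
import numpy as np

fs = 44100                      # sample rate in Hz
t = np.arange(fs) / fs          # one second of sample times

# A "complex" sound: three sine waves of different frequency, amplitude and phase.
complex_sound = (1.0  * np.sin(2 * np.pi * 440  * t) +
                 0.5  * np.sin(2 * np.pi * 1320 * t + 0.7) +
                 0.25 * np.sin(2 * np.pi * 2500 * t + 1.9))

# The FFT recovers exactly those three sine components and nothing else.
spectrum = np.fft.rfft(complex_sound) / (fs / 2)   # normalise so magnitudes equal the amplitudes
freqs = np.fft.rfftfreq(fs, d=1/fs)
peaks = freqs[np.abs(spectrum) > 0.01]
print(peaks)   # -> [ 440. 1320. 2500.]
```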

G
 
Oct 25, 2017 at 4:23 PM Post #2,312 of 3,525
Yeah... maybe he assumes that bigger numbers mean better channel separation or something.

Temporal resolution? We have already established that temporal resolution of CD is practically infinite.

HD video is better than SD video. 4K video is better than 2K (on large screens). 8K video is better than 4K video (on VERY large screens). So, maybe 24/96 is better than 16/44.1? No, because 16/44.1 is already to audio what 8K video is to pictures. 16K video isn't better than 8K video, because you would have to be a hawk to see the difference.
 
Oct 25, 2017 at 4:47 PM Post #2,313 of 3,525


1) Soundstage and sound location are a function of the mix and the acoustics between the transducers and the listener, not the recording medium. Usually when people say "better soundstage" they mean "better expectation bias". Articulation would be covered under distortion. All digital formats have inaudible levels of distortion.

2) All audible sounds, both simple and complex, are *perfectly* reconstructed with 16/44.1 and can be represented as sine waves. All of them. Again, not being able to accurately reconstruct sine waves would fall under the category of distortion. See #1.

See how interesting it is to hang out in Sound Science! You learn something new every day!

Well, if you understood soundstage and imaging, then you would know that it is the placement of instruments in space as experienced by the listener. As a term, it belongs to the subjective side of the science.

As for what you are speaking of, which is presumably the physical reproduction of sound waves, there is no soundstage. At best the reproduction is very limited; there is some phase shifting, but given that humans move, that shift cannot be static. Nor can the amplitude difference. And that is before the acoustics you speak of.

And no, I did not learn anything from what you wrote, as I was told this in the 80s. I find the physics of it intriguing, and the lack of that physics being reflected in real-world gear even more interesting. But as they say in the military, if the landscape does not fit the map, there is something wrong with the landscape.

I am not sure what the right landscape is.

You're going to need to define 'vector sound'. Also, the sine-wavy aspect of all this is due to Fourier analysis, and I task you to get a voltage function out of a mic that doesn't have a Fourier decomposition...

The very basic physics of hearing is the phase shift and amplitude shift between the ears, which is what lets us position the origin of sounds. To achieve this for humans, the individual's distance between the ears is the minimum thing to consider. So if a sound source is 10 m away, 10 deg up, and 46 deg to the left, that will result in a specific phase shift and amplitude shift that is unique to the individual (well, not exactly unique given those parameters, but not remotely equal for all humans). In vector sound reproduction, the phase and amplitude shift is calculated to simulate the physics of hearing for the individual. If a gyroscope is used and the calculation is done on the fly, the experienced source will be a fixed position in space. The calculation is done with distance vectors.

Also, movement plays a role: a source moving towards you at a certain speed is shifted in wavelength by the movement, just like a car coming at you or moving away from you. Again, this can be handled by vector calculation.
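The effect described here is the ordinary Doppler shift, which a renderer can apply per source. A minimal sketch of the standard stationary-listener relation (the 440 Hz tone, 20 m/s speed and 343 m/s speed of sound are illustrative values only):

```python
def doppler_shifted_frequency(f_source, radial_speed, c=343.0):
    """Frequency heard by a stationary listener for a source moving directly
    towards (+) or away from (-) the listener at radial_speed metres/second."""
    return f_source * c / (c - radial_speed)

print(doppler_shifted_frequency(440.0, 20.0))    # ~467 Hz, source approaching
print(doppler_shifted_frequency(440.0, -20.0))   # ~416 Hz, source receding
```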

There is a whole host of things that can be added to the reproduction. Sometime in the not so distant future, someone will introduce vector sound. Hopefully, since I am describing it in public, they cannot patent it. They cannot patent it for headsets, nor automatic distance calculation between the cans using ultrasound or any other sort of waves, because that is now given to the public domain. The use of a gyroscope, or any device that registers head movement, to assist vector reproduction is in the public domain now as well. It will happen, particularly since the only real change needed in the industry is a shift to mono recordings of individual sounds, while the rest of the infrastructure only needs minor adjustments.

AMD has an API for vector sound, but it only includes amplitude shifts. It has no reading of the listener's dimensions at all, such as the distance between the ears.

This sound tech, used in combination with see-through VR and great positioning, makes my head spin with ideas, not just for music but particularly for augmented reality. That there is no rush in the industry to be first with this tech shows a complete lack of visionaries.

Given the insane variance using 16/44.1 for classic stereo, that variance indicates, to me, that we probably need more for vector sound. If we don't, that is great news, as vector sound will arrive sooner.

This is also the reason dummy-head recordings do not work, in general. They come close to working if you have exactly the right head dimensions, something the fans forget. And as with most things, fans of this type of recording, particularly those whose heads fit a certain recording, do not listen to the facts presented.

As for your dragging Fourier into this, I miss your point. I have lived long enough to have seen an ellipse represented by multiple circles. Sure, if that is mathematically possible, then it is. The trouble is that once things were really understood, or so we thought, the movement of the planets had nothing to do with circles; the answer was of a completely different nature. As is the complexity of sounds. Again, this is not really understood at all, and maybe the placement of sounds is derived by some Fourier-like process in humans. We just do not know.

What we do know is that music or sound reproduction is far more complex than a single sine wave. As is having a dog, cat, or hamster; but mixing the three?
 
Oct 25, 2017 at 5:43 PM Post #2,315 of 3,525
Well, if you understood soundstage and imaging, then you would know that it is the placement of instruments in space as experienced by the listener. As a term, it belongs to the subjective side of the science.

Designing the soundstage in the mix is a creative/subjective process, but reproducing soundstage in a home stereo is basically a technical matter of applied acoustics. Soundstage is the illusion of an aural plane in front of the listener representing the source of the music. It's governed by the radius of the triangle between the two speakers and the listener, by the distance between the listener and the speakers, and by the balance of each sound element in the mix between the two speakers. It's also helped along in the mix by secondary distance cues, like reflected echoes off walls or mic distance, which create the illusion of depth. That stuff is baked into the recording though. Incremental improvements in playback quality can't affect secondary distance cues.

Weren't you saying that bitrates and sampling rates above 16/44.1 improved the soundstage? I don't know how sampling rate or bit rate could possibly affect that. Soundstage is improved by proper speaker placement and better room acoustics, not a lower noise floor and super audible frequency content.

As for what you are speaking of, which is presumably the physical reproduction of sound waves, there is no soundstage. At best the reproduction is very limited; there is some phase shifting, but given that humans move, that shift cannot be static. Nor can the amplitude difference. And that is before the acoustics you speak of.

I'm not sure what you're talking about here I'm afraid. Are you trying to say that the channel separation affects the placement of sound elements within the soundstage? It does sort of. A lot of channel overlap would degrade the soundstage. But 16/44.1 and 24/96 both have the same basic specs for channel separation: perfect channel separation to human ears. Are you talking about the secondary distance cues baked into the mix? Because all of the subtle distance cues within the audible range would be perfectly rendered in 16/44.1. More bit depth wouldn't improve upon that because the time factor and the frequency factor are related here. Your point is a little vague. Maybe you could organize your thoughts and present them more clearly.
 
Oct 25, 2017 at 6:30 PM Post #2,316 of 3,525
That video is worthless because it's LOSSY!

I gave the video a go; I could barely hear it at -78 dB, with a ton of noise. (I probably play too loud as well.)

People need to wake up. If you do not get the meaning of the argument, you need to do some recording on your own. An S/N of 78 dB is pretty darn good; for most recording equipment, this is achieved by using post-process noise reduction. Also, people need to do their own recordings using both lossy and compressed formats.

The physics and math in this case simply mean that the mic does not have enough resolution to out-resolve 16 bit; few do, at least as this is currently understood. There is nothing preventing an ADC from working better at a higher resolution, resulting in a better result overall, but that is not because of the hi-res nature of it on its own. You could argue that multiple readings increase accuracy, or that higher resolution pushes the inaccuracy outside the wanted accuracy, which is common practice in physics. But if there is only a range of, say, 90 dB to begin with, that range is perfectly possible to represent in 16 bits without any loss.
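For reference, the usual rule of thumb connecting bit depth to theoretical dynamic range (roughly 6.02 dB per bit plus 1.76 dB, for an ideal quantiser and a full-scale sine) can be sketched in a couple of lines; the bit depths chosen are just examples:

```python
def dynamic_range_db(bits):
    # Theoretical SNR of an ideal quantiser for a full-scale sine wave.
    return 6.02 * bits + 1.76

for bits in (16, 20, 24):
    print(bits, "bits ->", round(dynamic_range_db(bits), 1), "dB")
# 16 bits -> 98.1 dB, 20 bits -> 122.2 dB, 24 bits -> 146.2 dB
# So a chain that only delivers ~90 dB of real signal-to-noise sits
# comfortably inside what 16 bits can carry.
```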

As for reproduction, it may well be that a higher bit depth could end up with a more accurate reproduction after the DAC, but that is a messy topic, related to the rendering improving through complex interaction of components, not necessarily the quality of the material itself. Also, it will affect how noise is rendered, and the need for filtering.

Designing the soundstage in the mix is a creative/subjective process, but reproducing soundstage in a home stereo is basically a technical matter of applied acoustics. Soundstage is the illusion of an aural plane in front of the listener representing the source of the music. It's governed by the radius of the triangle between the two speakers and the listener, by the distance between the listener and the speakers, and by the balance of each sound element in the mix between the two speakers. It's also helped along in the mix by secondary distance cues, like reflected echoes off walls or mic distance, which create the illusion of depth. That stuff is baked into the recording though. Incremental improvements in playback quality can't affect secondary distance cues.

Weren't you saying that bitrates and sampling rates above 16/44.1 improved the soundstage? I don't know how sampling rate or bit rate could possibly affect that. Soundstage is improved by proper speaker placement and better room acoustics, not a lower noise floor and super audible frequency content.

I'm not sure what you're talking about here I'm afraid. Maybe you could organize your thoughts and present them more clearly.

I speak of headsets only. I also speak of reproducing the placement of sound by using the physics of sound. Once you leave speakers behind and reread what I wrote with head-fi in mind, you should be able to grasp it.

Also, when speaking of the "illusion", that is the experience. People need to read up on how the physics works for that: why we have two ears, and why two ears are limiting. In the wild, you will see animals turning their heads and even their ears so as to optimize the angle of the ears. The accuracy differs greatly as a function of the angle to the plane made by the ears.

Again, if a source is at, say, 45 deg to the left, 10 deg up, at a distance of 10 meters, the sound from that source will hit your ears differently. Energy falls off with distance by an inverse square law; time to travel a distance is linear in the distance. If you know the distance from the source to each ear, you can calculate the amplitude difference and the phase shift between the ears. This is the very basics of sound science and human hearing. The utter basics.

Vectors may be used to calculate the distance from the source to each ear. There will be a distance difference, unless the sound source is directly in front of or behind the listener, which results in equal distances.

Phase shifts are then calculated for each ear, using that linear relation. Again, the sound will hit the ears with a very slight shift in time, as there is a difference in the distance travelled to each of the ears.

Amplitude differences are then calculated for each ear, using the inverse square law. As with the phase shift, there will be a minute difference.

The listener then uses these two properties to locate the origin of the sound. At least, that has been the consensus for decades now. It is not as though we fully understand what is going on inside the ear or the brain, at least not to my knowledge. If anyone knows of good research on the topic, please share.
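The distance-based part of the above can be written down in a few lines: the path-length difference gives the interaural time difference, and the inverse-square law gives a level difference. A minimal sketch (the head width and source position are made-up illustrative values; it deliberately ignores head-shadowing and HRTFs, which in practice dominate the real level difference):

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s

def interaural_cues(source_pos, ear_spacing=0.18):
    """Interaural time difference (seconds) and level difference (dB) for a
    point source, using straight-line distances and the inverse-square law only."""
    left_ear  = np.array([-ear_spacing / 2, 0.0, 0.0])
    right_ear = np.array([+ear_spacing / 2, 0.0, 0.0])
    src = np.asarray(source_pos, dtype=float)

    d_left  = np.linalg.norm(src - left_ear)
    d_right = np.linalg.norm(src - right_ear)

    itd_s  = (d_left - d_right) / SPEED_OF_SOUND    # > 0: sound reaches the left ear later
    ild_db = 20 * np.log10(d_right / d_left)        # > 0: left ear receives the louder signal
    return itd_s, ild_db

# Source 10 m away, 46 degrees to the listener's left, 10 degrees up.
az, el, r = np.radians(46), np.radians(10), 10.0
source = np.array([-r * np.cos(el) * np.sin(az),    # x: + is the listener's right
                    r * np.cos(el) * np.cos(az),    # y: + is straight ahead
                    r * np.sin(el)])                # z: + is up
print(interaural_cues(source))   # roughly (-0.0004 s, 0.1 dB): left ear hears it earlier and a touch louder
```

Note how small the purely distance-based level difference is at 10 m; that is why real localisation also leans on head shadowing and spectral cues, which this sketch leaves out.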

Why this is lost on the community, I simply do not know. People seem to think there is height in the sound reproduction of speakers, but no, there isn't. I have yet to hear much opposition when people claim to experience height. Also, there is no height from headphones, not for amplitude-modulated soundstages. When I say that the plane sits at a different height for, say, most 3-way speakers, and that I find it irritating when, for instance, a guitar shifts up and down in the physical soundstage depending on the note being played, that is actually exactly what I am supposed to experience. Some speakers solve this by doubling up on their second and third drivers, but the physics of that seems lost on people.

Sure enough, I just got berated by someone claiming I know nothing, yet displaying no insight into the basic physics of hearing and sound. Are people going to stay offensive and stupid, or is this forum ready to discuss the tech that is bound to arrive soon? It will be based on this very physics, which has been well known since at least the 70s.
 
Oct 25, 2017 at 7:01 PM Post #2,317 of 3,525
I speak of headsets only. I also speak of reproducing the placement of sound by using the physics of sound.

In the wild, you will see animals turning their heads and even their ears so as to optimize the angle of the ears.

Ah. I see. The problem is, you can't turn your head to pinpoint locate sound with headphones. They don't have true soundstage. The shifts in time and phase have to be synchronized with the head movements. You can simulate it using something like a Smyth Realizer, but I don't have one of those myself. By themselves, headphones can only arrange the sound in a straight line through the center of the head. If you turn your head one way or the other, it's still the same straight line through your head. If you want, you can call that "headstage" because it's all right through the middle of your skull.

Speaker systems are different. The sound is physically in front of you because the speakers are physically arranged in front of you to the left and right. If you turn your head, you can locate the sound in physical space in front of you. That is true soundstage. Vector location is only possible if you can turn your head, and you can't do that with headphones that are attached to your noggin.

Neither of these things would be any different with 16/44.1 as opposed to higher bit depths. I thought that was what you were claiming; maybe I read wrong.

As for speakers creating height, there are a couple of ways to accomplish that. The easiest way to do it in a two channel system is to simply add a second set of speakers at a higher level. The dispersion pattern of the speaker can affect the perception of height too. A horn loaded speaker with highly directional sound will sound narrower and more in a straight line than a speaker that disperses the sound in a wide radiated pattern.

In a multichannel speaker system, you can raise the center channel a little higher than the mains to raise the height of the soundstage. There are also DSPs that use the information about the speaker placement you enter in your AVR to calculate subtle time offsets in the channels to give the illusion that the soundstage is deeper and higher. The ultimate way to create height is to use Dolby Atmos, which adds a set of speakers at roof level to mesh with the 5.1 speakers. That allows sound to be placed precisely in a three dimensional sound field. A sound field is to soundstage as soundstage is to headstage. It's a progression from a straight line to a dimensional plane to a three dimensional space.

Again, none of this would be any different with 16/44.1 as opposed to higher bit depths.

We're not berating you by the way. We're just trying to figure out your terminology. It's different than the terms we normally use for these things. It may be a language difference. We'll all figure it out.
 
Oct 25, 2017 at 8:23 PM Post #2,318 of 3,525
The very basic physics of hearing is the phase shift and amplitude shift between the ears, which is what lets us position the origin of sounds. To achieve this for humans, the individual's distance between the ears is the minimum thing to consider. So if a sound source is 10 m away, 10 deg up, and 46 deg to the left, that will result in a specific phase shift and amplitude shift that is unique to the individual (well, not exactly unique given those parameters, but not remotely equal for all humans). In vector sound reproduction, the phase and amplitude shift is calculated to simulate the physics of hearing for the individual. If a gyroscope is used and the calculation is done on the fly, the experienced source will be a fixed position in space. The calculation is done with distance vectors.

Also, movement plays a role: a source moving towards you at a certain speed is shifted in wavelength by the movement, just like a car coming at you or moving away from you. Again, this can be handled by vector calculation.

There is a whole host of things that can be added to the reproduction. Sometime in the not so distant future, someone will introduce vector sound. Hopefully, since I am describing it in public, they cannot patent it. They cannot patent it for headsets, nor automatic distance calculation between the cans using ultrasound or any other sort of waves, because that is now given to the public domain. The use of a gyroscope, or any device that registers head movement, to assist vector reproduction is in the public domain now as well. It will happen, particularly since the only real change needed in the industry is a shift to mono recordings of individual sounds, while the rest of the infrastructure only needs minor adjustments.

AMD has an API for vector sound, but it only includes amplitude shifts. It has no reading of the listener's dimensions at all, such as the distance between the ears.

This sound tech, used in combination with see-through VR and great positioning, makes my head spin with ideas, not just for music but particularly for augmented reality. That there is no rush in the industry to be first with this tech shows a complete lack of visionaries.

Given the insane variance using 16/44.1 for classic stereo, that variance indicates, to me, that we probably need more for vector sound. If we don't, that is great news, as vector sound will arrive sooner.

This is also the reason dummy-head recordings do not work, in general. They come close to working if you have exactly the right head dimensions, something the fans forget. And as with most things, fans of this type of recording, particularly those whose heads fit a certain recording, do not listen to the facts presented.

As for your dragging Fourier into this, I miss your point. I have lived long enough to have seen an ellipse represented by multiple circles. Sure, if that is mathematically possible, then it is. The trouble is that once things were really understood, or so we thought, the movement of the planets had nothing to do with circles; the answer was of a completely different nature. As is the complexity of sounds. Again, this is not really understood at all, and maybe the placement of sounds is derived by some Fourier-like process in humans. We just do not know.

What we do know is that music or sound reproduction is far more complex than a single sine wave. As is having a dog, cat, or hamster; but mixing the three?

So wait, you say phase and amplitude but don't know why Fourier matters? Just forget it then! Also, absolutely zero of the books/monographs I've read on positional audio / virtualization bother mentioning hi-res; take from that what you will.
 
Oct 25, 2017 at 8:39 PM Post #2,319 of 3,525
Ah. I see. The problem is, you can't turn your head to pinpoint locate sound with headphones. They don't have true soundstage. The shifts in time and phase have to be synchronized with the head movements. You can simulate it using something like a Smyth Realizer, but I don't have one of those myself. By themselves, headphones can only arrange the sound in a straight line through the center of the head. If you turn your head one way or the other, it's still the same straight line through your head. If you want, you can call that "headstage" because it's all right through the middle of your skull. ...

Great. Now you are getting it. Yes, for classic stereo reproduction using speakers there is some form of soundstage. And no, there is hardly any soundstage, in terms of physical space outside the head of the listener, when using headsets. Not for regular recordings.

What I am arguing is simply: record every instrument in mono, with as little room acoustics as possible, and place the instrument or artist on the fly, or by pre-calculation based on the dimensions of the listener's head. And yes, finally, someone gets the point that you would have to do this on the fly, correcting for head movement, if the listener is to experience the sound source as fixed in physical space. That is perfectly possible using vectors, math, and a sensor in the headset for head movements.
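A minimal sketch of that "on the fly" correction: keep the source at a fixed position in the room, read the head yaw from a sensor, and rotate the source into head-relative coordinates before computing the per-ear cues. (The 2-D simplification, the function name and the example numbers are mine, purely illustrative.)

```python
import math

def world_to_head(source_xy, head_yaw_right_rad):
    """Rotate a world-fixed source position into head-relative coordinates
    (x = listener's right, y = listener's forward). head_yaw_right_rad is
    positive when the listener turns their head to the right, so the world
    point is rotated the opposite way and stays put in the room."""
    x, y = source_xy
    c, s = math.cos(head_yaw_right_rad), math.sin(head_yaw_right_rad)
    return (c * x - s * y, s * x + c * y)

# Source fixed 2 m straight ahead in the room; listener turns 30 deg to the right.
source_in_room = (0.0, 2.0)
relative = world_to_head(source_in_room, math.radians(30))
print(relative)   # ~(-1.0, 1.73): the source now sits ~30 deg to the listener's left
```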

As for 16/44.1, I do not know, not for this application, since I have never tried it; to my knowledge, it does not exist. The results for the Smyth Realizer are very promising, as they do some of the math I describe here.

As for reproduction of height with speakers: there is no height information in the recording. Usually, bass is at the bottom, as most drums and low-frequency sources are placed low, and so on. But the plane is fractured by the placement of the speaker elements. There is no height in stereo with speakers, not if you are true to your senses. Sure, some people imagine there is, and good on them. I am a different character, and try to stay true to my senses. I want vector music, not the last-century tech of today.

For speakers or stereo, there is a known technique of lowering the volume of voices that are supposed to be in the background, but that only works relative to louder voices. The reproduction is still in the same plane if you listen carefully. The amount of false experience among listeners is crazy. It is one thing to be able to imagine something, which might be easier with some gear than other gear; it is another to claim to hear height being reproduced when there is no height information at all that could possibly be presented by the gear in question. Again, I like your vocabulary.

Going vector on music, it is really simple to render for speakers. Actually, with tall speakers, or a four-speaker setup with one set low and one set high, height could be reproduced. On the fly.

It is much like the division between structure and formatting in HTML and CSS. A multi-speaker setup such as a home theater would only need a special rendering, applying the math for that particular type of setup. It would simply place and emulate the sound based on the medium.
 
Oct 25, 2017 at 9:11 PM Post #2,320 of 3,525
So far, I like the way 96 kHz 24-bit sounds. Another possibility with these higher-resolution recordings is that more care is taken in mastering them than was taken with the CD versions.
24 bit recording equipment didn't appear on the market till around 1997, so if it was recorded before then it is unlikely to have greater than 16 bits of resolution.
 
Oct 25, 2017 at 9:26 PM Post #2,321 of 3,525
That video is worthless because it's LOSSY!
Sheesh, there's no pleasing them :) I don't think it makes much difference; the signal is there if you crank up the volume or download the audio and amplify it in some audio editor. If you don't hear it at your normal listening volume then it is not because YouTube's compression somehow removed it. But anyway, know my good heart: I generated it again, so here is flac and here is mkv with that flac as the audio track. Enjoy.

(To be clear, I don't delude myself that this will satisfy "audiophiles". I'm waiting to hear what is wrong with it now :) )

I gave the video a go; I could barely hear it at -78 dB, with a ton of noise. (I probably play too loud as well.)
What noise?
In a quiet room, with headphones (HD 650) and volume higher than my comfortable listening level (2 o'clock on O2+ODAC) I also stop hearing anything at around -72, -78 dBFS, but I don't hear any noise. After that, if I crank up the volume to max and turn on O2's gain (3.3x) I stop hearing it after -96 dBFS, and still no noise.
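For anyone who would rather generate a test signal locally than trust a streamed file, here is a minimal sketch (numpy plus Python's standard wave module; the -78 dBFS level, 1 kHz frequency and file name are arbitrary choices, not a recreation of the exact file above) that writes a TPDF-dithered 16-bit tone at a chosen level:

```python
import numpy as np, wave

def write_tone(path, level_dbfs=-78.0, freq=1000.0, fs=44100, seconds=5):
    t = np.arange(int(fs * seconds)) / fs
    amp = 10 ** (level_dbfs / 20.0)                      # linear amplitude re. full scale
    x = amp * np.sin(2 * np.pi * freq * t)
    # TPDF dither of +/-1 LSB before rounding to 16 bits, so the quiet tone
    # isn't simply truncated away.
    dither = np.random.rand(len(x)) - np.random.rand(len(x))
    samples = np.clip(np.round(x * 32767 + dither), -32768, 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)       # 16-bit
        f.setframerate(fs)
        f.writeframes(samples.tobytes())

write_tone("tone_minus78dBFS.wav")
```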
 
Oct 25, 2017 at 9:28 PM Post #2,322 of 3,525
What I am arguing is simply: record every instrument in mono, with as little room acoustics as possible, and place the instrument or artist on the fly, or by pre-calculation based on the dimensions of the listener's head.


How many musicians can you fit inside your head?!

That's an interesting idea, but it would require something like the Smyth Realizer with head tracking along with a multichannel master with a channel for every musician. (You would need stereo for each musician, not mono, because some electronic instruments output stereo.) I get the same basic thing with my 5.1 speaker system and an SACD. I guess if you had to use headphones, not speakers, that would be nice though.

By the way, there are tricks for broadening and raising soundstage through speaker placement. You can create depth too. It's not a standard setup like you see on home theater sites, but it works. Basically, you run two sets of mains with one set ahead and wider than the other set. Then you set the center channel along with the back set of mains at a higher height. The result is a fairly life-sized soundstage for both orchestral music and jazz combos.
 
Oct 25, 2017 at 10:23 PM Post #2,323 of 3,525
I gave the video a go; I could barely hear it at -78 dB, with a ton of noise. (I probably play too loud as well.)

People need to wake up. If you do not get the meaning of the argument, you need to do some recording on your own. An S/N of 78 dB is pretty darn good; for most recording equipment, this is achieved by using post-process noise reduction.
Huh?
The physics and math in this case simply mean that the mic does not have enough resolution to out-resolve 16 bit; few do, at least as this is currently understood.
Hmmm… well, I looked up an old favorite mic of mine, the Shure SM81; it's about a 40-year-old design. Total DR is 118 dB, so just shy of 20 bits. There are many more mics with even wider DR. Not sure what you mean.
There is nothing preventing an ADC from working better at a higher resolution, resulting in a better result overall, but that is not because of the hi-res nature of it on its own. You could argue that multiple readings increase accuracy, or that higher resolution pushes the inaccuracy outside the wanted accuracy, which is common practice in physics. But if there is only a range of, say, 90 dB to begin with, that range is perfectly possible to represent in 16 bits without any loss.
Yes, that's true for the final release, but in production there is a point to working at higher bit depth, especially when mixing and performing other DSP functions, which is why today's DAW internal processing is 64 bit floating point.
As for reproduction, it may well be that a higher bit depth could end up with a more accurate reproduction after the DAC, but that is a messy topic, related to the rendering improving through complex interaction of components, not necessarily the quality of the material itself. Also, it will affect how noise is rendered, and the need for filtering.



I speak of headsets only. I also speak of reproducing the placement of sound by using the physics of sound. Once you leave speakers behind and reread what I wrote with head-fi in mind, you should be able to grasp it.

Also, when speaking of the "illusion", that is the experience. People need to read up on how the physics works for that: why we have two ears, and why two ears are limiting. In the wild, you will see animals turning their heads and even their ears so as to optimize the angle of the ears. The accuracy differs greatly as a function of the angle to the plane made by the ears.

Again, if a source is at, say, 45 deg to the left, 10 deg up, at a distance of 10 meters, the sound from that source will hit your ears differently. Energy falls off with distance by an inverse square law; time to travel a distance is linear in the distance. If you know the distance from the source to each ear, you can calculate the amplitude difference and the phase shift between the ears. This is the very basics of sound science and human hearing. The utter basics.

Vectors may be used to calculate the distance from the source to each ear. There will be a distance difference, unless the sound source is directly in front of or behind the listener, which results in equal distances.

Phase shifts are then calculated for each ear, using that linear relation. Again, the sound will hit the ears with a very slight shift in time, as there is a difference in the distance travelled to each of the ears.

Amplitude differences are then calculated for each ear, using the inverse square law. As with the phase shift, there will be a minute difference.

The listener then uses these two properties to locate the origin of the sound. At least, that has been the consensus for decades now. It is not as though we fully understand what is going on inside the ear or the brain, at least not to my knowledge. If anyone knows of good research on the topic, please share.
"Spatial Hearing: The Psychophysics of Human Sound Localization"
by Jens Blauert and John S. Allen

The above would be a good start.
Why this is lost on the community, I simply do not know. People seem to think there is height in the sound reproduction of speakers, but no, there isn't. I have yet to hear much opposition when people claim to experience height. Also, there is no height from headphones, not for amplitude-modulated soundstages. When I say that the plane sits at a different height for, say, most 3-way speakers, and that I find it irritating when, for instance, a guitar shifts up and down in the physical soundstage depending on the note being played, that is actually exactly what I am supposed to experience. Some speakers solve this by doubling up on their second and third drivers, but the physics of that seems lost on people.
If you were talking only about traditional stereo recordings, I'd probably agree mostly, though there can be a bit of "accidental" height effect. However, there are ways to get height out of two-speaker stereo (and it's hardly anything new):
http://www.audiocheck.net/audiotests_ledr.php
Sure enough, I just got berated by someone claiming I know nothing, yet displaying no insight into the basic physics of hearing and sound. Are people going to stay offensive and stupid, or is this forum ready to discuss the tech that is bound to arrive soon? It will be based on this very physics, which has been well known since at least the 70s.
It's never really right to assume everyone reading or posting in a public forum shares the same ideas or attitudes. It's really just a bunch of individuals.
 
Oct 25, 2017 at 10:26 PM Post #2,324 of 3,525
24 bit recording equipment didn't appear on the market till around 1997, so if it was recorded before then it is unlikely to have greater than 16 bits of resolution.

Lately I have been doing new classical and smooth jazz when I am not in a Lamb of God, Hellyeah, Mudvayne, heavy metal mood.
 
Oct 25, 2017 at 10:37 PM Post #2,325 of 3,525
What you are calling vector-based recording is pretty much Dolby Atmos object-based mixing. Up to 128 tracks (objects) are placed in the mix, and the system records the metadata of the placement. When this is played back in a calibrated theater, the system places the objects based on the layout of the sound system to reconstruct the mix as it was heard at the mix theater. The mix processor works at more than 24 bits, likely 64-bit. In practice, even mixing together 16-bit audio would only be slightly noisier than the noisiest track. It is extremely difficult to record a sound with a 96 dB signal-to-noise ratio; most places on the planet are noisier than that. The ISS might work.
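The object-plus-metadata idea can be illustrated without any of Atmos's actual format details: keep a mono stem plus a position, and at playback compute per-speaker gains for whatever layout is installed. A toy sketch (constant-power panning between the two nearest speakers in azimuth; the speaker angles and function name are invented for illustration and this is not Dolby's real renderer):

```python
import math

def pan_gains(object_azimuth_deg, speaker_azimuths_deg):
    """Constant-power gains that place one audio object between the two
    speakers nearest to it in azimuth (every other speaker gets gain 0)."""
    # Angular distance from the object to every speaker, wrapped to [0, 180].
    diffs = sorted((abs((object_azimuth_deg - a + 180) % 360 - 180), i)
                   for i, a in enumerate(speaker_azimuths_deg))
    (d1, i1), (d2, i2) = diffs[0], diffs[1]
    span = d1 + d2 if d1 + d2 > 0 else 1.0
    theta = (d1 / span) * (math.pi / 2)        # 0 = exactly on speaker i1, pi/2 = on speaker i2
    gains = [0.0] * len(speaker_azimuths_deg)
    gains[i1] = math.cos(theta)
    gains[i2] = math.sin(theta)
    return gains

# A 5-speaker ring (angles in degrees, 0 = straight ahead, positive = to the right).
layout = [-110, -30, 0, 30, 110]
print(pan_gains(15, layout))   # the object at 15 deg splits between centre and right front
```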
 
