To crossfeed or not to crossfeed? That is the question...
Feb 14, 2019 at 1:12 PM Post #781 of 2,146
Unlike CD-players, DACs do not require CDs (outdated, hard to copy and expensive source of sound) and they can play hi-rez music.

I have an Oppo Blu-ray player that can do all that. And I have an AVR with a built-in DAC too. Both of those have DSPs. I suppose if you don't want to knock your high data rate files down to a normal file size, a phone player won't work.

I prefer to feed my VST plugins with 24-bit resolution files rather than 16-bit files as it increases the quality of processing.

It processes at different bit rates depending on the file? That's interesting. I think all the DSPs I work with process at a fixed level (I'm guessing 16/44.1 PCM) and output at that level. How can you check to see that the file being played isn't being transcoded up or down when you use processing? Does it output a processed file you can look at? What kind of processing are you doing that requires a high rate? Are you messing with timing stuff?
 
Feb 14, 2019 at 1:54 PM Post #782 of 2,146
won't a VST use whatever bit depth the player/DAW/... is using, so most likely 32 or 64bit?
 
Feb 14, 2019 at 4:27 PM Post #783 of 2,146
1. Thanks for proving my point: every single one of the points in your response is just a repeat of the exact same points you made over a year (and 30-odd pages) ago.

2. Points which have already been addressed and refuted, but here you are just repeating the same points again and insulting everyone who doesn't share your ignorance of the facts and your preferences! I'm not going to refute every one of your points because it's already been done in this thread; I'll just address the main point of ignorance upon which you base everything:

3. There is nothing "natural" about the creation or end result of commercial music recordings and there hasn't been for many decades. Music mixing/production is an ART, it has been for nearly 60 years, it is ABSOLUTELY NOT about avoiding what would not occur naturally, it's ALL about creating products that fulfil an artistic goal and that consumers will hopefully like/enjoy, COMPLETELY REGARDLESS of whether it's "natural" or not! In fact, in almost all cases "it is about" the exact opposite, creating spatial information which is not "natural"! You do NOT get to dictate how a "recording should be created" or dictate what should be avoided!!
4. EXACTLY, this is the heart of your ignorance! Firstly, it is YOUR OPINION and secondly, it's an opinion based on a falsehood! It's a falsehood because no one ignores "excessive spatiality". I've never seen or heard of a studio that didn't have headphones, or of a mix being created without being checked on headphones (by the engineers, producer and artists), and this is especially true since the 1980s as more and more consumers listened to music using headphones. Your opinion about what constitutes "excessive" (spatiality) is just that, your opinion. Certainly HP listening presents a wider/more separated stereo image but it is YOUR OPINION whether that is "excessive" or not.
5. There are many cases where that "excessive spatiality" is the desired intention of the producer and artists (and in fact tools were often used to widen the image on speakers and make it more like HP listening, "shuffling" for example). The result is not "natural", it is not intended to be natural and you applying crossfeed is both lower fidelity AND going against the artists' intentions! Of course, if you like/prefer crossfeed and want to change/ignore the artistic intentions that's entirely your choice but it's NOT your choice to dictate that artists must follow what is "natural" and your personal definition of "excessive" and it's certainly NOT your choice to call others ignoramuses, especially as you're the one apparently completely ignorant of the fundamental basics/goals of music production!
6. If "people" really were "spatially informed" they would realise there is nothing spatially "natural" in the first place, that "spatiality" in music mixes does not only rely on acoustic replay crossfeed, that crossfeed therefore can do more harm than good and does not by itself emulate acoustic speaker crossfeed/reproduction anyway. You don't appear to know or understand any of this, so clearly you are NOT one of those "spatially informed people"!
7. Your ignorance of the facts and dogged belief that your opinion/preference should be shared by everyone, including the artists themselves, results in you defending your belief with a barrage of false statements and complete nonsense, for example:
8. Clearly. In fact you actually seem proud of being an ignoramus!
9. That's a completely false statement, they're always talked about, on every single music mix!
10. Music production is an ART, consequently there are NO "spatially illegal sounds"! Pretty much all music productions for many decades are not spatially "natural", therefore if that's your definition of "spatially illegal" then pretty much all commercial music recordings are "spatially illegal" all the time, regardless of whether they're reproduced on speakers or headphones!
11. Again, virtually no commercial music productions only employ a single stereophonic signal pair and therefore virtually all music mixes are "illegal" to start with.
12. Nope, you've just made that up and clearly it's complete nonsense! How do speakers know what spatial "illegalities" there are in any particular music mix, and even if they did, how would they correct them and make them "legal"?
13. The "illegalities" "reach our ears" with headphones AND speakers, though they present those "illegalities" differently.
14. No it does NOT! It just presents those "illegalities" differently again. So now we've got 3 different presentations of the "spatial illegality", none of which are "spatially legal"! Which one or ones a consumer prefers is up to them, but personally I prefer to go with the fidelity of what the artists actually put on the recording.
15. That seems to be the heart of your problem, you had a "powerful moment" when you realised something. What you realised is in fact actually false (crossfeed does NOT "fix this"), it's nothing more than a personal preference but unfortunately, because for you it was a "powerful moment", you've spent an inordinate amount of time trying to turn it into something more than just a personal preference. In your mind you've (falsely) turned it into an objective fact, which you then try to force on everyone else on the (false) basis that it is an objective fact and therefore anyone who disagrees with you must be ignorant of those facts/an ignoramus.
16. You're like some extremist born again Christian who can't separate faith from fact, gets very upset with anyone who refutes their "facts" and just keeps preaching their faith as fact regardless!

G

1. Yes, they are repeats because the facts are still the same. I repeat them because we may have new people on this board who don't go 30 pages into the past.
2. Refuted only in your mind. I'm not purposely insulting anyone here. If you get insulted it's your problem. Why do you get so triggered by my posts? You certainly don't behave like someone with your experience in the field of sound engineering should behave. I would take you much more seriously if you recognized at least some of my points as correct while offering solid arguments for your disagreements.
3. Natural in this context means that the spatial cues, however obtained (acoustic binaural recording, VST plugins in a DAW or any other way), have somewhat natural levels of parameters such as ILD, ITD, ISP and reverberation, so that the unnaturalness of the sound does not cause unnecessary listening fatigue or distortion of spatial information. Mixing being ART doesn't automatically make it great ART. Understanding spatiality helps create better ART, just as knowing music theory helps compose better music. Maybe music production should be about avoiding what would not occur "naturally" and exploring ARTistic possibilities within that framework? All artists need to ask themselves whether their artistic goals make sense, especially if you produce music for other people, the consumers. You have gotten away with nonsensical spatiality because most consumers are spatially ignorant.
4. My opinions regarding this issue are grounded in scientific facts (studied at university) and careful thinking about the implications since 2012, after I realized the existence of excessive spatiality in certain types of music reproduction scenarios. Excessive spatiality exists in most recordings, whether it's due to ignorance or not. My "opinion" about what is excessive spatiality is based on two things: the science behind human spatial hearing (HRTF etc.) and my own listening experiences, which are well in line with the established science. I'm ready to fine-tune and refine my understanding if needed, but it seems I am quite close to the truth. However, you telling me I have to accept excessive spatiality because it's ART is not something that will change my mind.
5. If excessive spatiality is the desired intention, it fails with speakers. Widening sound on speakers "outside" is about fooling spatial hearing into thinking the sound source is outside the line between the speakers, but does it create excessive spatiality at low frequencies? No. It is two speakers playing what is fed to them, both creating natural spatiality further "softened" by the room acoustics at the ears of the listener; the resulting sound simply has natural spatial cues that fool spatial hearing. In principle this is no different from fooling hearing with a monophonic phantom center channel or sound panned somewhere between the speakers. We hear the sound coming from a direction where there's no sound source. Such sound is not fatiguing, because there's no unnaturalness to it. However, the same signal fed to headphones (without crossfeed) creates unnatural spatiality and the sound becomes fatiguing. Why is headphone crossfeed "lower quality", but acoustic crossfeed + room acoustics isn't lower quality? If speakers and headphones give totally different spatiality (the former natural and the latter unnatural), which one is the intent of the artist? I believe that using proper crossfeed I get closest to the intent of the artist. I do not believe the ART of King Crimson is about excessive spatiality at all! I believe their ART is about masterful guitar playing, inventive time signatures, musical energy, harmony, melodies, etc. All that stuff gets to my mind best when I use proper crossfeed. Very strange if the intent is often something that sounds bad to me and, vice versa, what sounds best to me is against the artistic intent! I don't dictate what is natural. Our spatial hearing dictates it. It's biology. The size and shape of our head make it so that you can have large ILD at low frequencies only by bringing the sound source VERY near the other ear. For an ILD of ~10 dB this distance is about 5 inches, and for an ILD of ~20 dB just one inch! Is that the intent? Have the band playing a few inches from your head? My intent would be to have a large soundstage and depth. For that you need very small ILD (1 dB or less!) at low frequencies (+ other spatial cues such as reflections and reverberation of course). Instead of large ILD you use ITD at low frequencies (0-640 µs). I'm not claiming expertise in music production, although I know something about that too and I am not totally ignorant. I'm interested to learn more about music production. Youtube has tons of videos about that and I am watching them. I claim expertise in spatial hearing and I am proposing what music production should be in regard to that. You act like music is all about (excessive) spatiality when it's only one aspect of it. Important, but still only one of the many important aspects.
6. Believe it or not, I understand all of this. Proper crossfeed doesn't do more harm than good. If it did, it wouldn't be proper crossfeed but "too much" crossfeed! Sometimes proper crossfeed = no crossfeed. That happens when the recording doesn't have excessive spatiality. Of course crossfeed doesn't emulate the whole acoustic transfer function between speakers and ears. It addresses the thing that is actually harmful, namely the lack of acoustic crossfeed (a minimal sketch of such a filter follows at the end of this reply). Compared to studio acoustics, speaker listening usually has too much acoustics, while headphone listening has none except what is in the recording itself. In both cases the "error" is natural, unless the recording totally lacks any acoustics. Lack of acoustic crossfeed is the only "unnatural" problem we need to fix. If you want the acoustics of your listening room to be incorporated into the sound, you fix it by putting the headphones away and listening to your speakers (aka the Bigshot hack). Audio reproduction is about making compromises. To make the best compromises one needs to know the importance of contradicting properties. You allow many "insignificant" problems if that fixes a major problem such as excessive spatiality. Surely you know that? Right?
7. You are the one insisting we listeners should share your ARTistic vision about spatiality even when it contradicts the fundamental principles of human spatial hearing. My opinions are based on established science, something that is witnessed by the fact that crossfeed lovers exist, people who are open-minded enough to recognise the benefits. These people existed long before I discovered crossfeeding. People get used to excessive spatiality, thinking it's normal and correct. I was one of those people before 2012, spatially ignorant. Crossfeed is about doing headphone listening more correctly. Bass frequencies become more realistic/physical, the stereo image gets more precise and listening fatigue disappears. The sound just becomes more natural. All of this is strong proof of a working method to improve headphone listening. So, I have all this to back up my "opinion". I also advise people to use the correct (proper) amount of crossfeed, which sometimes is zero crossfeed, and warn about too much crossfeed. In this context your constant claims of me being totally ignorant are unwarranted to say the least, and I'm confident most readers of our posts will agree. What I do lack is the biases of the sound-engineer bubble. That much is clear.
8. Not at all. I am not at all proud of discovering crossfeed 2 decades after learning the science behind it! Things like this happen because we are human. Some other things I realized very quickly / young, and that balances things out...
9. Not talked about much in public among the people who consume music.
10. Compression is an art form too, but that doesn't mean the loudness war is a positive thing. It causes listening fatigue too! "Illegal" = illogical. Large ILD at low frequencies logically means a sound source very near the other ear, which alone is a bit weird, but other spatial cues such as reverberation may suggest a sound far away => illogical/illegal spatiality. What is this fetish of bands playing on my shoulders and at my ears annoyingly? Nasty ART! What is this fetish of fake bass? What is this fetish of a fuzzy stereo image where impulse-like sounds break into fragments all around? It doesn't matter how long music production has not been spatially "natural". Maybe it's time to end the lunacy and start doing things correctly? In fact that's the case, and a lot of modern music has better spatiality than older productions. So much better than the early ping-pong recordings! Also, if there's excessive spatiality, crossfeed helps to fix the problems, so all good. Speakers don't have illegality problems. It is a headphone thing.
11. If you mix 100 tracks which all have "legal" spatiality, the downmix is legal too, but probably too narrow. Individual tracks can have a certain amount of excessive spatiality, because tracks mask each other more or less. You can even use hard left/right panning on some tracks if you know what you're doing, since the illegalities are masked by other tracks.
12. Speakers + room force natural spatiality, the spatiality of two loudspeakers playing in a room. Even if you put one of your ears near one speaker, the acoustics leak sound to your other ear and reduce ILD.
13. No, only with headphones. Speaker sound can be very colored and all that due to acoustics, but the spatiality is 100 % natural: no excessive ILD, ITD, ... both ears are in the same acoustic space experiencing the same acoustic waves. If your left ear experiences strong bass, your right ear will experience it too! Maybe 0.9 dB quieter and 218 µs later, but very similarly nevertheless. Put headphones on and all bets are off! Who knows what kind of ART-vision the sound engineer had!
14. Speakers + room = always legal (natural) spatiality. Headphones without crossfeed = often illegal (unnatural) spatiality. Headphones with proper or stronger crossfeed = legal (natural) spatiality.
15. It's the objective facts that made me have the realization in the first place. Sure, my understanding of the issue has deepened from the initial realization, but that's completely normal and the way our understanding develops. I am talking about scientific facts and how they relate to headphone listening, reflecting my understanding/knowledge of it. Readers can make their own conclusions. I am encouraging the use of a crossfeeder rather than forcing it.
16. My facts haven't been refuted. You can't sweep away decades of scientific research on human spatial hearing just by calling me ignorant.
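
For readers who want to see what a basic crossfeed filter actually does, here is a minimal Python sketch (an illustrative example in the spirit of simple crossfeed designs, not the exact algorithm of any plugin mentioned in this thread; the cutoff, attenuation and delay values are assumptions chosen only for illustration). Each output channel is the direct channel plus a low-passed, attenuated, slightly delayed copy of the opposite channel:

```python
import numpy as np
from scipy.signal import butter, lfilter

def crossfeed(left, right, fs, cutoff_hz=700.0, atten_db=-6.0, delay_us=300.0):
    """Minimal crossfeed sketch: mix a low-passed, attenuated, slightly delayed
    copy of the opposite channel into each ear. All parameter values here are
    illustrative assumptions, not settings taken from this thread."""
    b, a = butter(1, cutoff_hz / (fs / 2))      # 1st-order low-pass for the cross path
    gain = 10.0 ** (atten_db / 20.0)            # dB -> linear
    delay = int(round(delay_us * 1e-6 * fs))    # rough ITD, in samples

    def cross(ch):
        lp = lfilter(b, a, ch) * gain           # duller and quieter...
        return np.concatenate([np.zeros(delay), lp])[:len(ch)]  # ...and later

    out_l = left + cross(right)
    out_r = right + cross(left)
    peak = max(np.max(np.abs(out_l)), np.max(np.abs(out_r)), 1.0)
    return out_l / peak, out_r / peak           # normalise to avoid clipping

# Toy usage: a click hard-panned to the left also reaches the "right ear",
# quieter, low-passed and delayed, roughly as it would via acoustic crossfeed.
fs = 44100
left, right = np.zeros(fs), np.zeros(fs)
left[1000] = 1.0
out_l, out_r = crossfeed(left, right, fs)
```

Stronger or weaker crossfeed is then mostly a matter of the gain, cutoff and delay chosen.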
 
Feb 14, 2019 at 4:37 PM Post #784 of 2,146
1. Sometimes I indeed prefer the CD version of an album, if its hi-res remastered version has squashed dynamics. There are CDs that sound excellent and are recorded well.

2. But all other conditions being equal, I let a hi-res variant remain in my collection and I delete its CD version from my hard drive. Even without going into the discussion of whether hi-res can sound better than its CD counterpart, I prefer to feed my VST plugins with 24-bit resolution files rather than 16-bit files, as it increases the quality of processing.

3. When I listen through headphones, I use 112dB Redline Monitor for the crossfeed effect. Through speakers, I use MathAudio Room EQ for digital room correction. These are my 2 VST plugins that I always use (plus dithering + monitoring). In 5% of cases I may also use a VST equalizer (DMG Equilibrium is my favorite), mainly to boost LF by 1.0-2.0 dB.
1. That's a valid reason. 2. I don't think it does. 3. Ok.
 
Feb 14, 2019 at 6:17 PM Post #785 of 2,146

Subscribed.

I like fireworks :wink:
 
Feb 14, 2019 at 7:42 PM Post #786 of 2,146
2. I don't think it does.

How come "it doesn't"?

If you have two audio files, one of them is 16 bit and the other one is 24 bits (not just 16 bits padded with zeros, but a legitimate 24 bit file), then it's better to feed into your DAW the 24 bit file.

It processes at different bit rates depending on the file?

No, it does not. The DAW processes both 16 bit files and 24 bit files at the same bit rate (32 bits).
 
Feb 15, 2019 at 2:55 AM Post #787 of 2,146
I guess it doesn't matter what the bit rate of the file is if it processes all files at the same rate. That was what I suspected.
 
Feb 15, 2019 at 3:46 AM Post #788 of 2,146
[1] I think all the DSPs I work with process at a fixed level (I'm guessing 16/44.1 PCM) and output at that level.
[2] How can you check to see that the file being played isn't being transcoded up or down when you use processing?
[2a] Does it output a processed file you can look at?
[2c] What kind of processing are you doing that requires a high rate?

1. That's generally not the case; it varies depending on the plugin, but none of them process at 16bit and as far as I'm aware no plugin has ever processed at 16bit, although it's possible some very early plugins (in the early 1990s) did, before DAWs were professionally popular. Today, and for many years, plugins have operated at a fixed bit depth, which is either 32bit float or 64bit float, depending on which DAW you're using, which of those two bit depths it supports and which version of the plugin you have installed. On the sample rate side of things it also varies: some plugins will process at whatever sample rate your DAW is set to, which could be 44.1kHz; others have a fixed processing sample rate, but typically that is not 44.1kHz, typically it would be 96kHz. Convolution reverb plugins are a good example of this, though there are other examples: modelled plugins such as some EQs, compressors, limiters, guitar amp emulators, etc. And still other plugins may oversample, not having a fixed sample rate but also not using the sample rate of the DAW/environment, rather some multiple of it.

2. There is no way of knowing unless: A. The documentation actually tells you, although typically the documentation only tells you when the plugin has a user-definable feature to turn oversampling on or off, or B. You ask the developer and hope they'll divulge the answer.
2a. The output file doesn't tell you anything about the sample rate the plugin processed at; the plugin will output at the sample rate the DAW/environment is set to. If we take a typical convolution reverb as an example, the data flow and processing would be like this: the 16 or 24bit audio file/s will be loaded into the DAW/environment and converted to 32 or 64bit float; the sample rate will remain the same or be converted if the DAW is set to a different sample rate (than the input audio files). Let's say your input files are 44.1kHz and your DAW is set to 44.1kHz, in which case no sample rate conversion is done by the DAW but the bit depth will be converted to 32bit (or 64bit). The DAW will then pass this 44.1kHz/32bit (or 64bit) file to the reverb plugin, which will upsample it to 96kHz, process it at 96kHz/32bit (or 64bit) and then downsample it back to 44.1kHz for output. If on the other hand your DAW/environment is set to a 192kHz sample rate, the plugin will be fed 192kHz/32bit (or 64bit), which it will downsample to 96kHz, process at 96kHz/32bit (or 64bit) and then upsample its output to 192/32 (or 64) to match the DAW environment.
2c. There are various reasons why the processing sample rate could legitimately be different (higher or lower). In the case of a convolution reverb for example, it's far more practical (for both the developer and end user) to supply the impulse responses at one sample rate (96kHz for example) and convert the input from the DAW to that sample rate, rather than supplying each impulse response at every sample rate and not converting the DAW input sample rate. Another reason would be some/many modelled plugins, plugins which emulate some vintage bit of kit like a vintage EQ, compressor, limiter, guitar amp or analogue synth for example. Prized vintage kit is prized because of some non-linear behaviour, typically IMD and/or various other non-linear distortions. It is therefore often necessary to oversample or upsample so that the ultrasonic freqs causing the IMD can be generated, and then downsample again once the IMD product (in the audible freq band) has been created. Another reason again would be in the case of a true peak (TP) compressor or limiter.
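
To make the data flow described in 2a and 2c concrete, here is a minimal sketch (an assumed, generic example, not the internals of any actual plugin) of the resample → process → resample-back round trip, with a plain FIR convolution standing in for the reverb:

```python
import numpy as np
from scipy.signal import resample_poly, fftconvolve

def process_like_a_96k_plugin(x, fs_daw, ir_96k, fs_proc=96000):
    """Sketch of the flow described above: the DAW hands the plugin audio at its
    own rate, the plugin resamples to its fixed internal rate (96 kHz here),
    convolves with its impulse response, then resamples back to the DAW rate."""
    g = np.gcd(fs_proc, fs_daw)
    up, down = fs_proc // g, fs_daw // g
    x96 = resample_poly(x, up, down)             # DAW rate -> 96 kHz
    wet = fftconvolve(x96, ir_96k)[:len(x96)]    # the "reverb" (stand-in processing)
    return resample_poly(wet, down, up)          # 96 kHz -> back to DAW rate

# Toy usage with a 44.1 kHz input and a fake 96 kHz impulse response
fs_daw = 44100
x = np.random.randn(fs_daw)                      # 1 s of noise standing in for audio
ir_96k = np.exp(-np.linspace(0, 8, 48000)) * np.random.randn(48000)
y = process_like_a_96k_plugin(x, fs_daw, ir_96k)
```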

won't a VST use whatever bit depth the player/DAW/... is using, so most likely 32 or 64bit?

Yes, a VST plugin will always operate at either 32 or 64bit, although it can be difficult to know which of those two bit depths is actually being used for processing. The VST marketplace is unregulated and there's nothing to stop an unscrupulous developer from taking, say, a 64bit input (from the DAW/environment), truncating it to 32bit, processing at 32bit and then padding the output with zeros back to 64bit again. The opposite is also possible: taking a 32bit input, processing at 64bit and then truncating (or rounding or dithering) the output back to 32bit again. There's no way of knowing if this is occurring without hacking the plugin and analysing the code, or unless the developer actually states what the plugin is doing.
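
As a tiny generic illustration (plain numpy, not a claim about any specific plugin) of why that truncate-then-pad round trip is not lossless:

```python
import numpy as np

x64 = np.random.default_rng(0).random(1000)   # 64-bit float "audio"
x32 = x64.astype(np.float32)                  # reduce to 32-bit float precision
back = x32.astype(np.float64)                 # "pad" back up to 64-bit

print(np.max(np.abs(back - x64)))             # non-zero: the discarded precision is gone
```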

If you have two audio files, one of them is 16 bit and the other one is 24 bits (not just 16 bits padded with zeros, but a legitimate 24 bit file), then it's better to feed into your DAW the 24 bit file.
The DAW processes both 16 bit files and 24 bit files at the same bit rate (32 bits).

These two statements are contradictory. As BOTH 16bit and 24bit files are converted to 32bit (or 64bit) float and processed at 32 or 64bit float, why is it better to feed your DAW 24bit files?

G
 
Feb 15, 2019 at 5:52 AM Post #789 of 2,146
How come "it doesn't"?

If you have two audio files, one of them is 16 bit and the other one is 24 bits (not just 16 bits padded with zeros, but a legitimate 24 bit file), then it's better to feed into your DAW the 24 bit file.
What is a legitimate 24 bit file in this context?

It may not be just 16 bits padded with zeros, but it certainly isn't 16 bits padded with extra music, unless of course you have music with a greater dynamic range than 96 dB.

What music do you listen to that exceeds 96 dB of dynamic range?
 
Feb 15, 2019 at 6:40 AM Post #790 of 2,146
As BOTH 16bit and 24bit files are converted to 32bit (or 64bit) float and processed at 32 or 64bit float, why is it better to feed your DAW 24bit files?
G

Because of the extra precision these additional 8 bits provide.

A 32bit file converted from a 16 bit file will have extra 16 empty bits which are stuffed with zeroes. 16 bits will be legitimate audio and 16 other added bits will be "zero-stuffing".

But a 32 bit file converted from a 24 bit file will have only 8 empty bits. 24 bits are legitimate and 8 only are "zero-stuffing".

Read:
Why bother with 24-bit DAC

Quote: "Conclusion... As you can imagine, the difference between 16-bit and 24-bits is about the extra precision those 8 bits can provide. Manipulation of the data like volume attenuation even to a significant degree (like -25dB) will not result in loss to low-level detail and subtle nuances will be passed on to a good hi-res DAC after DSP manipulation. Of course, audio engineers have been using 24 or even 32-bit audio in the professional setting for ages for the best audio quality. ... I personally am not of the camp that would forego readily accessible technological improvements like 24-bit resolution."
 
Feb 15, 2019 at 6:52 AM Post #791 of 2,146
I guess it doesn't matter what the bit rate of the file is if it processes all files at the same rate. That was what I suspected.

Are you saying that you can feed your 32bit DAW or your 32bit software audio player with 24bit or 16bit or 8 bit or even 2bit audio files and it will not matter because all of them will be processed anyway at the same rate? Ahaha.

Why don't you download your movies in 320 x 240 resolution to watch them on your 4K UHD TV (3840 x 2160)? There won't be any difference between 320x240 video files and 1920x1080 video files, because all of them will be processed at 4K resolution anyway and shown on 4K TV . Is that your logic?
 
Feb 15, 2019 at 7:30 AM Post #792 of 2,146
How come "it doesn't"?

If you have two audio files, one of them is 16 bit and the other one is 24 bits (not just 16 bits padded with zeros, but a legitimate 24 bit file), then it's better to feed into your DAW the 24 bit file.

Sorry about my short responses. I was exhausted after replying to gregorio…

16/44.1 is all you really need. You don't need more than 13 bits' worth (~80 dB) of dynamic range. So, even if 16 bit files didn't use the highest 3 bits they would be enough, just very quiet. 80 dB of dynamic range is enough. If you listen so loud that peaks go to 100 dB, the noise floor is at ~20 dB with flat dither and perceptually at ~0-10 dB with shaped dither. You can hear sounds this quiet (so quiet that the blood traveling in your veins starts to mask them!) only within the most sensitive bandwidth of your hearing (500-5000 Hz), in extremely quiet places such as an anechoic room, after you have spent some time in there in silence. When you listen to music peaking at 100 dB you do not hear them; 60 dB of dynamic range in a recording is extreme. 24 bits does not even give better sound, because a lower noise floor is all it gives when dither is used, and 16 bits already gives a lower noise floor than you or anyone else consuming music needs.
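
As a rough numeric cross-check of those figures (just the standard rule of thumb of about 6 dB of dynamic range per bit, nothing specific to this thread):

```python
import math

db_per_bit = 20 * math.log10(2)                 # ~6.02 dB per bit
for bits in (13, 16, 24):
    print(f"{bits} bits -> ~{bits * db_per_bit:.0f} dB")
# 13 bits -> ~78 dB, 16 bits -> ~96 dB, 24 bits -> ~144 dB
```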

In plugins the calculations are done at a higher bit depth anyway, and even at a temporarily higher sample rate if necessary/beneficial (for example, distortion plugins create harmonics which would cause aliasing otherwise).
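
A minimal sketch of that oversampling idea (a generic, assumed example using a tanh waveshaper, not any particular plugin): running the nonlinearity at a temporarily higher sample rate keeps its harmonics below the Nyquist frequency until the downsampler's anti-alias filter removes them.

```python
import numpy as np
from scipy.signal import resample_poly

def distort(x):
    return np.tanh(4.0 * x)                  # nonlinearity that generates harmonics

def distort_oversampled(x, factor=4):
    """Upsample, apply the nonlinearity, then downsample; the anti-alias filter in
    resample_poly removes the harmonics that would otherwise fold back (alias)."""
    up = resample_poly(x, factor, 1)
    return resample_poly(distort(up), 1, factor)

fs = 44100
t = np.arange(fs) / fs
x = 0.9 * np.sin(2 * np.pi * 15000 * t)      # 15 kHz tone: its harmonics exceed 22.05 kHz
y_naive = distort(x)                          # harmonics alias back into the audio band
y_os = distort_oversampled(x)                 # harmonics filtered out before returning to 44.1 kHz
```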
 
Feb 15, 2019 at 7:45 AM Post #793 of 2,146
Why don't you download your movies in 320 x 240 resolution to watch them on your 4K UHD TV (3840 x 2160)? There won't be any difference between 320x240 video files and 1920x1080 video files, because all of them will be processed at 4K resolution anyway and shown on 4K TV . Is that your logic?

CD already is "hi-res" to our ears, but for example DVD isn't "hi-res" to our eyes. When you are past the resolution of your hearing or vision, increasing the resolution doesn't change anything. 16 bit / 44.1 kHz audio is like 8K video. 320 x 240 resolution is like listening to 8-bit / 11.025 kHz "telephone" sound. Don't let video resolutions fool you. Sound is different from picture, and CD already reached the limits of human hearing, while video is only now slowly reaching it.
 
Feb 15, 2019 at 8:12 AM Post #794 of 2,146
Because of the extra precision these additional 8 bits provide.

A 32bit file converted from a 16 bit file will have extra 16 empty bits which are stuffed with zeroes. 16 bits will be legitimate audio and 16 other added bits will be "zero-stuffing".

But a 32 bit file converted from a 24 bit file will have only 8 empty bits. 24 bits are legitimate and 8 only are "zero-stuffing".

Read:
Why bother with 24-bit DAC

Quote: "Conclusion... As you can imagine, the difference between 16-bit and 24-bits is about the extra precision those 8 bits can provide. Manipulation of the data like volume attenuation even to a significant degree (like -25dB) will not result in loss to low-level detail and subtle nuances will be passed on to a good hi-res DAC after DSP manipulation. Of course, audio engineers have been using 24 or even 32-bit audio in the professional setting for ages for the best audio quality. ... I personally am not of the camp that would forego readily accessible technological improvements like 24-bit resolution."

I read the article quickly. I think it's a bit of a mess. Yes, 24 bit is beneficial when attenuating 25 dB, but the benefits are there also with 16 bit signals! The best way to attenuate a signal is bit-shifting. You do nothing to the signal. Instead of using the highest 16 bits of 24, you use the middle 16 bits, for example, to get a 24 dB attenuated, otherwise identical version. 0.75 times the signal is about -2.5 dB; you use powers of 2 and add them to get your scaling coefficient, e.g. 0.75 = 0.5 + 0.25 = 2^(-1) + 2^(-2). Nothing is added or taken away. The signal is just scaled. Nobody is against a 24 bit DAC! In studios you of course use 24 bit sound, because your levels aren't optimized! You need a safety margin. Strange article.
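
A quick numeric check of the scaling arithmetic in that paragraph (illustration only):

```python
import math

# 0.75 = 2**-1 + 2**-2, expressed in dB:
print(20 * math.log10(0.75))          # ~ -2.5 dB

# Reading the middle 16 bits of a 24-bit word is a 4-bit right shift,
# i.e. roughly 4 x 6.02 dB of attenuation:
print(4 * 20 * math.log10(2))         # ~ 24.1 dB
```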
 
Feb 15, 2019 at 8:13 AM Post #795 of 2,146
Are you saying that you can feed your 32bit DAW or your 32bit software audio player with 24bit or 16bit or 8 bit or even 2bit audio files and it will not matter because all of them will be processed anyway at the same rate? Ahaha.

Why don't you download your movies in 320 x 240 resolution to watch them on your 4K UHD TV (3840 x 2160)? There won't be any difference between 320x240 video files and 1920x1080 video files, because all of them will be processed at 4K resolution anyway and shown on 4K TV . Is that your logic?
I don't remember one time when someone brought up the visible changes from increasing video resolution in a digital audio argument that wasn't trying to support a logical fallacy. The basic notion that low-resolution video is readily visible as being non-transparent, while 16bit is already beyond audibility under typical conditions, is more than enough to debunk your argument.

Because of the extra precision these additional 8 bits provide.

A 32bit file converted from a 16 bit file will have extra 16 empty bits which are stuffed with zeroes. 16 bits will be legitimate audio and 16 other added bits will be "zero-stuffing".

But a 32 bit file converted from a 24 bit file will have only 8 empty bits. 24 bits are legitimate and 8 only are "zero-stuffing".

Read:
Why bother with 24-bit DAC

Quote: "Conclusion... As you can imagine, the difference between 16-bit and 24-bits is about the extra precision those 8 bits can provide. Manipulation of the data like volume attenuation even to a significant degree (like -25dB) will not result in loss to low-level detail and subtle nuances will be passed on to a good hi-res DAC after DSP manipulation. Of course, audio engineers have been using 24 or even 32-bit audio in the professional setting for ages for the best audio quality. ... I personally am not of the camp that would forego readily accessible technological improvements like 24-bit resolution."
Zero padding will allow the processing to apply with high precision. It's the same reason why some VSTs work at a fixed sample rate and will resample the original signal: because it simply works better that way. Then they go back to the original sample rate once the processing is done. Nobody is denying the benefits of having more bits to process something, be it for ease of use (gain changes without concern) or to maintain quality as long as possible when a great many processes are going to be applied (like making an album). That specific part can actually be compared to processing video or pictures.
But you mustn't mistake that rationale for some notion of audibility. The moment you decided to apply VSTs to your playback chain, you abandoned fidelity in favor of making something subjectively better (not objectively so!). It's a choice; I happen to do that pretty much all the time. And yes, I output my signal at 24bit because of all the gain attenuation I often end up applying to the signal (just replaygain can result in more than 10dB, so those things can rapidly pile up if we're not careful). I do that so my 16bit track is "moved down" within the 24bit container instead of getting truncated at 16bit. So I'm not contesting the benefits of having higher bit depth at all. But I am contesting the significance of using a 24bit track vs using a 16bit track as an audible argument (with or without VST). My albums don't have 90 dB of dynamic range, I don't notice the benefits of more accurate background noise being recorded, and I can't say that I have noticed a VST making an obvious audible difference because the original file was 24bit instead of 16. If you have examples of that, I'd be interested to see them. But if it's your gut talking about what is intuitive for you, then sorry, that doesn't convince me.
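
As a rough illustration of the "moved down within the 24bit container" point (a sketch assuming simple rounding with no dither, which is a simplification): attenuate a 16-bit signal by about 10 dB, then compare storing the result back at 16 bits versus at 24 bits.

```python
import numpy as np

rng = np.random.default_rng(1)
pcm16 = rng.integers(-2**15, 2**15, size=100_000).astype(np.float64)

gain = 10 ** (-10 / 20)                    # ~ -10 dB, replaygain-style attenuation
attenuated = pcm16 * gain                  # the "ideal" attenuated signal

as16 = np.round(attenuated)                # requantised back onto the 16-bit grid
as24 = np.round(attenuated * 256) / 256    # 8 extra bits of resolution below the old LSB

print(np.max(np.abs(as16 - attenuated)))   # up to ~0.5 LSB of error on the 16-bit grid
print(np.max(np.abs(as24 - attenuated)))   # ~256 times smaller
```

Whether that difference is audible is, of course, exactly the point being debated.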
 
