[1] So, if I have a bunch of music in 24 bit and I only use it for listening, no mixing or anything... is it safe to say that I can just convert everything to 16 bit (to save some space) and I won't hear a difference?
[2] And a different question. 24 bit can go up to 192kHz. ...So my question is, say I have really good headphones, a really good source, and a really good recording: will I be able to "feel" the difference between 48 vs 192kHz? I just want it to sound as natural and realistic as possible.
[3] The real world has (almost) unlimited frequencies, though we can't even hear them.
[3a] But for example: dog whistles. We can't hear them, but there were a couple of times when I was walking in a park with a friend and his dog. I was at a distance, looking in a different direction, and when he blew the whistle I somehow just knew that "something" was different and looked that way.
1. True.
2. Higher sample rates only make a difference in capturing very high freqs; for lower freqs they make no difference. You won't be able to hear or feel the difference between 48 and 192kHz with really good headphones. There's a chance you might with not-so-good headphones though, because they're liable to produce distortion in the audible range in response to very high/ultrasonic frequency content (this is called IMD, intermodulation distortion).
3. But we're not dealing with the real world, we're dealing with the music world. Musical instruments have been designed by humans for human hearing and produce relatively little, or in some cases no, freqs beyond 20kHz. There are some exceptions, mainly metallophones such as gamelan or glockenspiel for example, but at typical audience listening distances they still have relatively little ultrasonic content and have not been reliably differentiated with 48 vs 192kHz recordings.
3a. As mentioned, some dog whistles aren't quite ultrasonic; they're just within the limits of human hearing, bearing in mind they output extremely high sound pressure levels. Even the truly ultrasonic dog whistles can produce distortion just within the limits of human hearing when blown hard.
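On point 1, the 24-to-16 bit conversion is just a re-quantisation step; done with dither, the only cost is a noise floor around -96dBFS, far below audibility at normal listening levels. A minimal sketch (assuming NumPy and samples already decoded to floats; the function name is mine, not from any particular tool):

```python
import numpy as np

def to_16bit(samples_float, dither=True):
    """Quantise float samples in [-1, 1) to 16-bit signed integers.

    TPDF dither (the sum of two uniform random values per sample)
    decorrelates the quantisation error, turning it into benign
    noise at around -96dBFS rather than signal-correlated distortion.
    """
    scale = 2 ** 15  # 16-bit signed range is -32768..32767
    x = samples_float * scale
    if dither:
        # Triangular-PDF dither spanning +/-1 LSB
        x = x + np.random.uniform(-0.5, 0.5, x.shape) \
              + np.random.uniform(-0.5, 0.5, x.shape)
    return np.clip(np.round(x), -scale, scale - 1).astype(np.int16)

# Even a -60dBFS sine survives the conversion; the added error
# per sample stays within about 1.5 LSB.
t = np.arange(48000) / 48000
quiet_sine = 0.001 * np.sin(2 * np.pi * 1000 * t)
out = to_16bit(quiet_sine)
```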
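And on point 2, the IMD mechanism is easy to demonstrate numerically: feed two ultrasonic tones through a mild nonlinearity (a toy stand-in for a driver pushed past its linear range; real transducer distortion is messier) and a difference tone appears in the audible band. A sketch assuming NumPy:

```python
import numpy as np

fs = 96000                        # sample rate high enough to carry the tones
t = np.arange(fs) / fs            # one second
ultrasonic = np.sin(2 * np.pi * 24000 * t) + np.sin(2 * np.pi * 25000 * t)

# Toy nonlinearity: a small squared term standing in for a headphone
# driver being pushed beyond its linear range.
distorted = ultrasonic + 0.05 * ultrasonic ** 2

freqs = np.fft.rfftfreq(len(t), 1 / fs)
spectrum = np.abs(np.fft.rfft(distorted)) / len(t)

# The squared term produces sum and difference products; the
# 25kHz - 24kHz = 1kHz difference tone lands squarely in the audible band.
level_1k = spectrum[np.argmin(np.abs(freqs - 1000))]
```

The clean (pre-nonlinearity) signal has nothing at 1kHz at all; only the nonlinear stage puts energy there, which is exactly why inaudible ultrasonic content can still change what you hear on a struggling transducer.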
[1] AHHH... So you were saying that even 35-odd years ago, they had higher than 24bit mixers. I didn't pick up that inference the first time.
[2] Well what about recording? Still stuck with 16/44.1 for sessions?
[2a] Then even a 64bit mixer wouldn't do a whole lot of good. Kind of like importing a 128k mp3 and exporting a 320k mp3 from it. Not very useful.
1. The first widely used digital mixer was introduced in 1987 (the Yamaha DMP7). I'm not sure what internal processing it had, but I believe it was more than 24bit. However, it wasn't used in the music recording industry; it was used in live sound/music reinforcement and broadcast. This was due to the functionality it offered (total recall, flying faders, automation, etc.) over analogue desks, not the sound quality, which wasn't great. Around 1990 Yamaha introduced the DMC1000, which did have good sound quality and was occasionally used in the music recording industry, but still mainly in broadcast and live sound. Trevor Horn used one on Seal's first album (Crazy, Killer, etc.), though I don't recall if he used it exclusively; certainly digital mixing wasn't used for mastering until well into the millennium. The DMC1000 had 28bit internal processing, and later (mid-1990s) digital desks used in music recording were 32bit float, Sony's Oxford desk for example.
2. I've already said! It was mainly 16/44.1 for recording up until the millennium, although 20bit became more widespread from the early 1990s, with 24bit taking over around the millennium.
2a. No, you're not getting it, even though you yourself brought it up! Sure, there's no benefit whatsoever to importing 16/44 into a 64bit mixer, but then the whole point of a digital mixer is to mix, not just to import! And as you mentioned, each and every processor within a digital mixer adds quantisation noise, which accumulates. Bearing in mind the average rock song (and all other non-acoustic popular genres) employs dozens of processes/processors, the quantisation noise would become unacceptable, regardless of whether the original tracking is at 16bit or 24bit. 64bit processing allows thousands of processes before the accumulated quantisation noise would become audible, which covers any eventuality.
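The accumulation is easy to show numerically. A sketch (assuming NumPy) that runs a signal through a chain of trivial gain stages, re-quantising after each operation the way a fixed-point mixer with no extra internal precision would:

```python
import numpy as np

def process_chain(signal, n_stages, bits):
    """Run `signal` through n_stages of a trivial gain up/down process,
    re-quantising to `bits` after every operation, as a fixed-point
    mixer with no extra internal headroom would. Each re-quantisation
    adds a little noise, and the noise accumulates stage by stage."""
    scale = 2.0 ** (bits - 1)
    x = signal.copy()
    for _ in range(n_stages):
        x = np.round(x * 1.01 * scale) / scale   # ~+0.09dB gain, quantise
        x = np.round(x / 1.01 * scale) / scale   # undo the gain, quantise again
    return x

sine = 0.5 * np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
noise_db = lambda x: 20 * np.log10(np.sqrt(np.mean(x ** 2)))

resid_16 = noise_db(process_chain(sine, 50, 16) - sine)
resid_24 = noise_db(process_chain(sine, 50, 24) - sine)
# resid_24 sits roughly 8 bits (about 48dB) below resid_16:
# the accumulated noise scales directly with the size of the LSB.
```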
[1] The irony is that those later "remastered" CDs which you constantly criticise are the product of 24 bit processing.
[2] The flat transfers which feature on most 1980s CDs are the product of 16 bit and analog workstation limitations.
1. As I've mentioned, "those later remastered CDs" were not the product of 24bit processing, as there's never been 24bit processing used in commercial music production to my knowledge. It was the product of 32bit float or 48bit fixed processing.
2. To clarify, with the exception of some/a few classical recordings, the process was: recording on multi-track tape, then constantly replaying that tape out to an analogue desk (with "outboard" gear) where it was mixed. When all the desk's (and outboard gear's) parameters were adjusted to produce the desired mix, the result was "bounced" (recorded) back down to tape. There was no such thing as an analog workstation.

Digital recording did not change this workflow at all: it was still multi-track tape, constantly replayed out to an analogue desk, etc. The only difference was that the multi-track tape recorded digital audio (and then replayed it through the recorder's DACs) rather than analogue audio. The only workflow change was in editing, as digital audio tape couldn't be spliced, making editing a rather more involved and time-consuming process.

Digital workstations started to be used in the mid 1990s, but not as workstations; they were used for editing, because they massively reduced the editing time as well as being more accurate and non-destructive (and bit depth is irrelevant to editing, as there's no processing involved). It wasn't until the very end of the 1990s that they started being used as workstations (i.e. for recording, editing and mixing); the first No.1 done this way was Ricky Martin's "Livin' La Vida Loca" in 1999, although outboard analogue gear was still employed and the mastering was still analogue. Fully ITB (In The Box, no analogue outboard gear) production didn't really start taking over in the commercial music world until the mid-2000s, quite a few years after 32bit float or 48bit fixed mix environments/processing was standard, with mastering being the last bastion to hold out for a few years more.
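For context on those bit depths, the theoretical dynamic range of a fixed-point format works out to roughly 6dB per bit, which a couple of lines of Python can confirm:

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of an n-bit fixed-point format:
    20*log10(2) per bit, i.e. roughly 6.02dB per bit."""
    return 20 * math.log10(2 ** bits)

for bits in (16, 24, 28, 48):
    print(f"{bits}bit fixed: ~{dynamic_range_db(bits):.0f}dB")

# 32bit float carries a 24bit mantissa, so ~144dB of precision around
# any given level, with the exponent sliding that window over a vastly
# larger range, which is why float mixing is so forgiving of gain changes.
```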
Bearing all the above in mind, I don't really understand what you mean by "flat transfers", or what TheSonicTruth is trying to say with his response.
G