@bookman you're still mixing up MP3 bitrate with PCM bitrate, so I rest my case. you don't understand anything of what is said to you, no matter how many times it's explained. you're obviously a bot, and an admin should remove your account to prevent more spamming of the exact same mistaken message over and over again.
@keith and of course you buy whatever you want, and there is nothing wrong with getting the perfect reproduction.
but I really don't see the rationale behind paying for something I can't hear. using only my human senses I fail to hear a difference, and I have no way of knowing there is one unless I look at the numbers. how is that different from suggestion and placebo? at least when I buy a screen with a better gamut, what gets upgraded is something within the threshold of my senses. I may find the upgrade meaningless and not worth the money, but it will be noticeable. with highres the changes are clearly outside of my hearing threshold, just like radio frequencies are, and I wouldn't pay for an album that includes them, just like I wouldn't pay more to have infrared on my screen. if it's outside of what my senses can pick up, it's outside of my subjective experience, and audio is only that to me. I don't care about the life of the singer, and I don't care if there is plenty of sound at 24kHz. that's not part of my experience of the music.
so while I certainly understand your reasoning, I don't share it.
my second problem with your post is the sound system. how many amps can resolve highres files at normal listening levels into a real load? and of course the obvious: how many transducers can hope to get close to even CD quality? so what are we talking about here when we say perfect copy? if in the end our sound system is way below CD resolution, and we fail to hear a difference, we're really just paying more for an idea.
Actually, I can think of several "rationales" for buying something that doesn't sound "provably" different to me - some practical, some potentially practical, and some just a matter of "self satisfaction"....
1) Even assuming that I can't hear any difference today, I might actually be able to hear one later. While I don't subscribe to the whole "golden ear idea" in general, it is true that our abilities do change over time, and we do sometimes "learn how to listen better". I may attend a live concert and suddenly discover that the speakers I thought sounded "just like live" really don't, or I may simply start paying more attention to certain aspects of the sound that I hadn't noticed before, or I may decide I like binaural recordings of chamber music with lots of natural ambiance as well as multi-tracked pop-rock. (I have one recording that has an odd little sound at one point, which I had always assumed was simply the recording microphone clipping; it turns out that it's a vibration coming from part of the drum; once I realized that, and heard a real drum make that sound, I started noticing whether it sounded natural or not on my speakers and headphones.)
2) Even assuming my abilities don't improve, I might buy better recordings someday, or change other equipment in a way that renders the difference audible. (Certain speakers or amplifiers tend to make certain errors more audible - whether because they're more accurate, or simply because they emphasize them. For example, bright speakers make poorly recorded high end more obvious. And many people here will surely attest to the fact that they notice things when listening on headphones that they don't when using speakers.)
3) I may have a current or future technical justification. When I take pictures with my camera, a high-quality JPG (lossy) version of most pictures will often look just as good as a true lossless RAW frame - straight out of the camera. However, when I try to adjust it later in Photoshop, the artifacts in the compressed picture will often become obvious due to the processing. Most of us don't "re-master" our music, but many of us do use "processing" like surround-sound decoders, spatial processors, or even noise removers, some of which may be affected by differences we can't hear - and they may be affected in ways that we can hear. (To use an example from the days of vinyl and SQ surround sound: the decoders used to play SQ-encoded material use phase relationships to decide which parts of the audio belong in which speaker. In many cases, if you have a record that has been mechanically damaged, the distortion from the damage is "pushed into the rear channels" at a boosted level by the decoder, which can make an album that sounds only slightly damaged in stereo virtually unlistenable through the decoder.) In the current context, next year's surround decoder may use high-frequency phase cues present in the music to locate various instruments, and so may work with 96k recordings and not 44k ones. (While I agree that we can never know for sure where that would end, making recordings with at least a little bit of safety margin seems prudent.)
4) Not all "limits" are as black-and-white as many people think. To take your example: most people would agree that buying a monitor that could display a gamut up to 850 nm would be silly (that's the "color" used by many IR remote controls). However, to say that "you can't ever see it" is wrong. In fact, light of that wavelength is visible to most people, but only if it's bright enough. (If you look at the dot from "an invisible 850 nm laser", you will indeed see it as a faint pink dot, because it is in fact slightly visible to most people.) In that case, I would agree that being able to see that color on your monitor probably serves no useful purpose, but I'm not so sure that everything that isn't directly audible is "useless".
5) Sometimes extra "safety margin" serves other benefits. For example, even though most of us probably don't hear much above 20 kHz, someone with an engineering background would still avoid an amplifier that was only able to amplify "20 Hz to 20 kHz", or that had a distortion plot that rose sharply right above 20 kHz - because good performance up to 50 kHz or so almost always signifies excellent performance inside the audio band, while performance that fails to extend past 20 kHz tends to suggest that problems exist inside the "audio band", even though they may not be visible on standard measurements. And, for another possibility, there was a recent AES paper that suggested - although I wouldn't say it rose to the level of proof - that some people notice shifts in the sound stage on recordings that are band-limited to 20 kHz. (Their test showed that, even though their test subjects reported that the recordings "sounded the same", the location of instruments in the sound field was sometimes shifted when a recording was band-limited to 20 kHz. They suggested that, even though a 44k sample rate can record all audible sound, it may not be able to record the phase cues that our brains and ears use to determine location accurately enough.)
6) As for your final point..... I simply disagree with the premise. Many of us get better equipment "as we progress in the hobby", and the technology itself improves. It would be foolish to buy something that is audibly inferior simply because my current system isn't able to let me hear its flaws, when the system I own next year may make them obvious. (I hear lots of details on my electrostatic headphones that I never noticed before on my speakers.) It would even be foolish to buy something that doesn't sound any better on
ANY system available today, if that situation is likely to change. Twenty years ago you couldn't buy a TV that would let you see how much better the picture on a Blu-Ray disc is than on a DVD; yet the difference was really there, and it can now be seen on most TVs.
In that last situation, if I bought my entire collection of movies as DVDs when they were "the current technology", I might end up buying them all over again as Blu-Ray discs. However, if I'd had the opportunity to buy them as a direct copy of the digital theatrical master, which is better than both DVD and Blu-Ray discs, then I would still have "the best copy available". (That option isn't available for video, but it is equivalent to buying a 24/96k or 24/192k copy of the audio master.)
I know lots of people who bought a significant amount of music as 128k AAC files and then, after upgrading their music systems and realizing that the difference was audible to them after all, ended up having to buy it all over again (or pay the upgrade fee). Buying a version that's "a lot better than our ears" rather than one that's "just barely better than we believe is audible" seems like a good form of insurance against that (and, in most other situations, most people I know would consider a "safety margin" to be a good thing).
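Point 3 above - lossy artifacts that stay invisible until you process the file - can be sketched numerically. This is only a toy illustration, not real JPG compression: I'm standing in simple quantization for the lossy step, and a gamma "shadow boost" for the Photoshop adjustment, and all the numbers (32 levels, gamma 2.2) are arbitrary choices for the sketch.

```python
import numpy as np

# A stand-in "image": a smooth ramp of dark pixel values in [0, 1].
original = np.linspace(0.0, 0.1, 1000)

# Crude stand-in for lossy compression: quantize to 32 brightness levels.
levels = 32
lossy = np.round(original * (levels - 1)) / (levels - 1)

# Straight out of the "camera", the worst-case error is half a step.
err_before = np.max(np.abs(lossy - original))

# "Processing": brighten the shadows with a gamma curve (exponent < 1),
# which stretches small values apart and magnifies the quantization error.
gamma = 1 / 2.2
err_after = np.max(np.abs(lossy ** gamma - original ** gamma))

print(f"error before processing: {err_before:.3f}")
print(f"error after processing:  {err_after:.3f}")
```

The quantization error that was below a visible threshold in the dark ramp comes out several times larger after the brightening step - which is exactly why the "inaudible" or "invisible" parts of a lossy copy can stop being harmless once you run it through downstream processing.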
I'm going to offer two very different examples - in other contexts - to support my point.
1) There are several devices designed to automatically remove ticks and pops during vinyl playback. Many of them use the ultrasonic content of the audio signal to tell a "legitimate" tick in the music from a record scratch, because scratches have significant ultrasonic content while music recorded on records does not. You could use one of these devices (or equivalent software) to remove ticks and pops from the archive recordings you had made of your favorite albums - if you'd made those recordings at 96k. However, it wouldn't work if you'd recorded them at 44k, because the ultrasonic content the device relies upon would be missing.
2) Since you mentioned monitors and visible gamut.... There is a system, quite similar to the click-and-pop remover, that is commonly used to automatically detect and repair scratches on slides. It works by recognizing that certain wavelengths of light that are invisible to the human eye are blocked by the surface coating on slide film. (Basically, by scanning the slide at those wavelengths, you can produce an "image" of the scratches in that surface, and use that information to control the correction process.) Of course, you could only use this system on a scanned and stored image if it included a gamut much wider than the range of the human eye. (So, if someone was archiving important photographs before this system existed, it would benefit them today if they'd scanned them using IR and UV light as well as visible light, even though, at the time, there was no apparent reason to do so.)
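The click-and-pop idea in example (1) can be sketched in a few lines. This is a toy illustration, not any real declicker: the tone frequencies, frame size, and 24 kHz cutoff are all made-up values, and the "music" tones are deliberately placed on exact FFT bin centres so that clean frames leak essentially no ultrasonic energy.

```python
import numpy as np

fs = 96_000                        # "hi-res" sample rate
frame_len = 1024                   # analysis frame size
bin_hz = fs / frame_len            # 93.75 Hz per FFT bin

t = np.arange(fs) / fs             # one second of samples
# Toy "music": two tones well below 20 kHz, on exact bin centres
# (integer multiples of bin_hz) so clean frames have no spectral leakage.
music = (np.sin(2 * np.pi * 5 * bin_hz * t)
         + 0.3 * np.sin(2 * np.pi * 50 * bin_hz * t))

# A "scratch": a single-sample impulse, which is broadband and therefore
# puts plenty of energy above anything in the music.
clicked = music.copy()
clicked[48_000] += 1.0

def ultrasonic_energy(frame, cutoff_hz=24_000):
    """Spectral energy above cutoff_hz in one frame."""
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), 1 / fs)
    return float(np.sum(np.abs(spectrum[freqs > cutoff_hz]) ** 2))

# Flag frames whose ultrasonic energy stands out: only the frame
# containing the impulse should trip the (arbitrary) threshold.
n_frames = len(clicked) // frame_len
flags = [ultrasonic_energy(clicked[i * frame_len:(i + 1) * frame_len]) > 1.0
         for i in range(n_frames)]
print("flagged frames:", [i for i, f in enumerate(flags) if f])
```

The same detector run on a 44k capture would be blind: everything above 22.05 kHz is gone before the declicker ever sees it, which is the whole point of keeping the ultrasonic band in an archive copy.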
To me, all of this simply suggests that "getting the best quality copy you can afford" rather than "one that's just good enough" really does make sense. Now, in that second example, it might not pay to buy a special scanner to record a whole lot of information you might never use. However, in the case of music, where there is already a "starting point" that is limited by the ability of the microphones and mixing equipment we're using, it seems to make sense to hold onto a little extra information - since we already have it anyway - just in case we may want it later.