1. And I'm pretty sure you know that an "if [something] were ..." construct in English grammar is the subjunctive mood, used for counterfactual or hypothetical statements.
2. No, I wouldn't agree. What "you had", i.e. what you feed DD/AC3, is 6 channels of 24/48, which is 6,750kbps, and what you end up with is a DD datastream of just 448kbps. That's a (lossy) compression ratio of over 15 times, more than double the approx. 7 times compression ratio of 16/44.1 to MP3 192. Of course I'm not saying that MP3 192 therefore has more than double the "definition" of DD, for the reasons you mentioned: DD is adaptive and more efficient, and additionally this compression ratio comparison hasn't accounted for the fact that the vast majority of the LFE data can simply be discarded (the LFE channel is band-limited to 120Hz, so by Nyquist we really only need a sample rate of a little over 240Hz for perfect reconstruction). However, I'm not sure that the additional channels of 5.1 (the additional localisation/positioning) can be equated to "higher definition"; higher fidelity yes, but not necessarily "higher definition" (higher resolution). In theory we can achieve a roughly (!) equivalent amount of localisation/positioning with binaural stereo (albeit only in headphones). I believe we're largely in the realm of just semantics here though.
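If you want to check that arithmetic, here's a quick sketch in Python (the kilo here is 1,024-based, which is where the 6,750kbps figure comes from; the 448kbps and 192kbps rates are as above):

```python
# Quick check of the compression ratio arithmetic above.
# Raw PCM bitrate = channels x bit depth x sample rate.

def pcm_kbps(channels, bits, sample_rate, kilo=1024):
    return channels * bits * sample_rate / kilo

dd_source = pcm_kbps(6, 24, 48000)   # 6 x 24/48 channels: 6750 kbps
cd_source = pcm_kbps(2, 16, 44100)   # 16/44.1 stereo: ~1378 kbps

print(dd_source / 448)   # DD at 448kbps: ratio just over 15x
print(cd_source / 192)   # MP3 at 192kbps: ratio of roughly 7x
```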
3. The DD specs for HDTV do allow for a native 2.0 mix and, had I thought about it in greater detail, I might have used such a mix as the comparison. In that case, a better (though still NOT accurate) comparison would have been a 2.0 DD mix with a 276kbps MP3. It's not such a good practical example though, as native 2.0 DD mixes are relatively rare.
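For the curious, a figure like 276kbps falls out of matching compression ratios, if we assume the hypothetical native 2.0 DD mix still runs at the full 448kbps (an assumption on my part):

```python
# Matching compression ratios: a 2.0 DD mix vs MP3.
# Assumption: the native 2.0 DD mix keeps the full 448kbps.

def pcm_kbps(channels, bits, sample_rate, kilo=1024):
    return channels * bits * sample_rate / kilo

dd_ratio = pcm_kbps(2, 24, 48000) / 448        # ~5x for 2.0 DD
mp3_rate = pcm_kbps(2, 16, 44100) / dd_ratio   # MP3 at the same ratio
print(mp3_rate)  # ~274, in the ballpark of the 276kbps figure above
```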
You seem to have missed the actual point I was making though. I was not trying to make an accurate comparison or evaluation of DD, MP3 and CD, just provide a "rough estimate", an estimate with only enough accuracy to show that DD is somewhat lower definition than uncompressed 16/44.1. This point is valuable in the context of this thread as it demonstrates that arguably the widest use of the term "high definition" (in HDTV) effectively equates HD visuals to audio which is somewhat below a definition/resolution accepted as "standard definition". My argument being that what is commonly referred to as "standard def" digital audio (uncompressed 16/44.1) is in practice effectively "infinitely high def" digital audio, and superior to even the latest extensions to what is called "ultra high def" in visuals.
G
The problem we're having is with the definition of "definition".
I agree, Dolby Digital AC3 is lossy.
I agree CD Redbook is lossless (assuming the original is 16/44.1).
I don't mean to synthesize a definition here. I look at Definition and High Definition as an evaluation of a transport system's ability to reproduce the original signal. If we consider the original signal to be a two channel 16/44.1 bitstream, then Redbook is as high as definition can go: it replicates the original exactly. If we consider the original to be a 24/48 5.1 mix, then Dolby Digital is lossy, and not High Definition. It was higher definition than the multi-channel delivery methods available at the time it was introduced, and at that time it could have been called High Definition if we'd been using that term. It beat Dolby Stereo optical (matrix), and pretty much knocked heads with six channel magnetic with Dolby SR, though that was not a common delivery format for very long. However, it's not high definition by today's standards.
The problem is stating that the CD is standard def relative to Dolby Digital, with the latter being below standard def. You've got your standards mixed up. Look at the goals. The CD's goal is to reproduce two channels; it does that losslessly relative to 16/44.1. Dolby Digital's goal is to reproduce 5.1; it does that, but via a lossy codec. But if you took that 5.1 signal and tried to get it through the CD, about all you'd have is the Dolby matrix. And while the CD's portion of that chain is lossless (or slightly lossy if the original was 24/48), passing the total mix through the matrix is a lossy process, so much so that the mix must be modified somewhat for the intent to make it all the way through. What you end up with is less like the original 5.1 master than the Dolby Digital version is.
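To illustrate how the matrix folds channels together, here's a deliberately simplified sketch of the classic 4:2:4 LCRS encode. The real encoder also band-limits the surround and applies Dolby B noise reduction, and its exact phase network differs, so treat this as a rough illustration rather than the actual Dolby implementation:

```python
import numpy as np
from scipy.signal import hilbert

def matrix_encode(L, C, R, S):
    """Fold 4 channels (L, C, R, S) into 2 (Lt, Rt).

    Centre and surround are mixed in at -3dB; the surround gets an
    approximate 90 degree phase shift and opposite polarity in each
    output, which is what the decoder later keys on. Once folded,
    the four originals cannot be perfectly separated again: lossy.
    """
    g = 2 ** -0.5                  # -3dB gain for centre and surround
    s90 = np.imag(hilbert(S))      # ~90 degree phase-shifted surround
    Lt = L + g * C - g * s90
    Rt = R + g * C + g * s90
    return Lt, Rt
```

Anything a decoder recovers from Lt/Rt is only an estimate of the original channels, which is why mixes had to be checked, and often tweaked, through the matrix to survive the trip.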
So, what's "standard def" 5.1 audio for video and digital TV? I would suggest that from the consumer's viewpoint it's Dolby Digital. That's what we have had for some time on DVD, digital broadcast, Blu-ray, and most streaming. Yes, it's lossy, but it's THE common bitstream, and that makes it the "standard". Now we have TrueHD and Master Audio, both capable of 24/48 multichannel lossless. That's the new HD audio, but not for broadcast.
My only real issue is the method of comparison used to establish whether something is "standard" or below. You can't compare a two channel lossless transport system with a 5.1 channel lossy transport system; the goals aren't the same, and one can't do what the other can. Throwing a reference to MP3 into the discussion further confuses things. MP3 can't do what either DD or CD can; it's yet another animal. Any comparison of raw bit rate is also irrelevant, because it ignores codec performance and efficiency as well as channel count.
I'm pretty much leaving it here. I guess someone doesn't feel our discussion of the resolution of different audio formats fits into a thread titled "Is there a meaningful limit to resolution?" Go figure.
I'll just close with this comment. A few years ago THX inventor Tom Holman did a series of presentations called "The Bit Rate of Reality". That presentation demonstrated that two channels of "high resolution" audio like 24/96 didn't come close to representing "reality" at all, whereas his 10.2 channel system with non-exotic per-channel bitrates (though the demo used discrete lossless channels) could replicate an acoustic space well... reality. Holman has also found that, starting with single channel mono, every time the channel count is doubled, every listener can clearly hear the improvement. The point of diminishing returns relates to the size of the audience, but in smaller spaces it lands between 10 and 20 channels, given the proper speaker layout.
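To put rough numbers on "non-exotic" (my own back-of-envelope figures, assuming 24/48 per channel for the 10.2 rig and counting 10.2 as 12 discrete channels):

```python
# Back-of-envelope raw PCM bitrates. My assumptions: the 10.2 rig
# runs 24/48 per channel and counts as 12 discrete channels.

def pcm_mbps(channels, bits, sample_rate):
    # channels x bit depth x sample rate, in megabits per second
    return channels * bits * sample_rate / 1e6

print(pcm_mbps(2, 24, 96000))   # two channel "high res" 24/96: ~4.6 Mbps
print(pcm_mbps(12, 24, 48000))  # 10.2 at a modest 24/48: ~13.8 Mbps
```

The point being that modest per-channel rates spread across many channels buy far more "reality" than doubling the sample rate of two channels.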
So my response to the OP and thread title would be: yes, there is a meaningful limit, and it changes with technology and application, but it relates more to channel count than to bit rate per channel. And since we're in Head-Fi, and stuck at two channels, it would seem we've passed the meaningful limit already.