Do you mean that SQ LDAC = SBC?
Bluetooth codecs

| Codec | Bit depth | Sample rates | Max sample rate | Max bitrate | Introduced | Latency |
|-------|-----------|--------------|-----------------|-------------|------------|---------|
| SBC | 16 bit | 16 kHz, 32 kHz, 44.1 kHz, 48 kHz | 48.0 kHz | 320 kbps | 2003 | ~200-300 ms |
| LDAC | 24 bit | 44.1 kHz, 48 kHz, 96 kHz | 96.0 kHz | 990 kbps | 2015 | ~200-400 ms |
Do the math, it’s very simple: 24 bits × 96,000 samples per second, multiplied by 2 because it’s stereo and divided by 1,000 to get kbps, comes to 4,608 kbps, but with LDAC you’ve got a maximum of 990 kbps, so it obviously cannot carry 24/96 audio losslessly. LDAC cannot even carry 16/44.1 losslessly, because 16 × 44,100 × 2 / 1,000 = 1,411 kbps. So both SBC and LDAC have to apply lossy compression, and as audible transparency occurs with lossy codecs at rates lower than 320 kbps, audibly LDAC = SBC. This obviously assumes the bitrate of SBC is not falling too far below 320 kbps.
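For anyone who wants to run the same check on other formats, here’s that arithmetic as a minimal Python sketch (the function name is just for illustration):

```python
def pcm_bitrate_kbps(bit_depth, sample_rate_hz, channels=2):
    """Uncompressed PCM bitrate in kbps (1 kbps = 1,000 bits/s)."""
    return bit_depth * sample_rate_hz * channels / 1_000

# "Hi-res" 24/96 stereo vs the 990 kbps LDAC ceiling from the table above
print(pcm_bitrate_kbps(24, 96_000))  # 4608.0 kbps, far above 990
# Plain CD-quality 16/44.1 stereo
print(pcm_bitrate_kbps(16, 44_100))  # 1411.2 kbps, still above 990
```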
I know I am not an audio engineer lol. But damn I was shocked, my hearing isn't as great as others on this site
No, your hearing probably isn’t significantly worse than others’ on this site, and audio engineers generally don’t have particularly good hearing. Your listening skills are probably significantly worse than a music/sound engineer’s, but even having that level of listening skill still won’t help: engineers cannot audibly distinguish codecs at higher bitrates either!
Many audiophiles claim they can hear the difference between lossless files and MP3s at 320 kbps, but as with so many other things (cables, etc.), give them a controlled listening test and they can’t, just like everyone else! Obviously the codecs have improved over the last 30 years, but for more than a decade audible transparency has occurred at roughly 170 kbps, and even 128 kbps is audibly transparent with quite a high proportion of recordings.
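For anyone unfamiliar with how a controlled listening test (e.g. ABX) is scored, here’s a quick sketch of the standard binomial check; the 12-of-16 criterion shown is a common convention, not something from this thread:

```python
from math import comb

def abx_p_value(correct, trials):
    """Probability of scoring `correct` or better out of `trials` by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(12, 16))  # ~0.038, usually accepted as evidence of an audible difference
```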
The dialectic method I'm advocating here is to isolate and learn to identify specific artifacts in isolation, then learn how these artifacts sound when combined together. Music complicates this initial phase because music has varying rates of intentional distortion and harmonics combined with artifacts that shouldn't be there.
Generally that is a good approach. Pretty much every hearing threshold I can think of, off the top of my head, is more sensitive to test signals than to music, because we can design the test signal to occupy the most sensitive hearing range while also isolating and maximising the specific artefact/threshold being investigated. Even so, in most cases, and sometimes even in formal scientific studies, it is wise to also test with music recordings, as we can choose recordings which exhibit that artefact to a greater or lesser extent and thereby provide more data.
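As a minimal sketch of the “isolate and maximise one artefact” idea: a tone in the ear’s most sensitive region with a deliberately exaggerated artefact. I’ve picked quantisation distortion purely as an example, the post above doesn’t specify one:

```python
import numpy as np

rate = 48_000
t = np.arange(rate) / rate                     # one second of samples
tone = np.sin(2 * np.pi * 3_000 * t)           # 3 kHz sits near peak hearing sensitivity

bits = 6                                       # deliberately coarse to make the artefact obvious
levels = 2 ** (bits - 1)
quantised = np.round(tone * levels) / levels   # heavily quantised copy for A/B training
```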
However, this isn’t really the case with lossy codecs, because what we’re actually testing is the efficacy of a complex set of algorithms applying “perceptual models”: e.g. splitting the signal into a number of bands, analysing the content of each band and reducing the number of bits each band requires by eliminating frequencies we wouldn’t be capable of hearing due to “auditory masking” and other hearing limitations. Testing this process therefore requires complex signals, covering the entire frequency spectrum of human hearing and providing a diverse range of scenarios. So your choice in this instance is either to spend many years designing a set of hundreds/thousands of diverse, complex test signals, or simply to choose parts of the millions of commercial audio recordings available. Of course, you should try out some simple test tones/signals to satisfy any personal doubts.
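To make the band-splitting idea concrete, here’s a toy sketch (nothing like a real codec’s psychoacoustic model; the band count and threshold are invented for illustration):

```python
import numpy as np

rate = 48_000
t = np.arange(rate) / rate
# A loud 1 kHz tone plus a very quiet tone right next to it:
# the classic situation where masking renders the quiet tone inaudible.
signal = np.sin(2 * np.pi * 1_000 * t) + 0.001 * np.sin(2 * np.pi * 1_100 * t)

spectrum = np.abs(np.fft.rfft(signal))
bands = np.array_split(spectrum, 32)           # 32 crude, equal-width frequency bands
energy = np.array([np.sum(b ** 2) for b in bands])

# Keep only bands within 60 dB of the loudest band; spend no bits on the rest.
keep = 10 * np.log10(energy / energy.max() + 1e-20) > -60
print(f"bands kept: {keep.sum()} of {len(bands)}")
```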
G