I personally have no issues with SBC on my Nexus 6. I've tried several Bluetooth amps and they all sound good. I can't really hear a difference between Tidal HiFi and High, though.
It's certainly awesome to hear you're loving the Bose on SBC. It should go without saying that it's all about the implementation (see my review of the AK XB10 for an example of a fantastic implementation); perhaps Bose nailed it on the QC35, though I've never heard one. For comparison's sake, have you heard a JBL E40 over Bluetooth? It sounds good enough that I have no specific complaints, and I'll use it when a cord could actually be dangerous, but I wouldn't say it sounds great. Good, though, for sure.
Nope, I haven't heard those JBLs. To my pleasant surprise, the Bose QC35 sounds more than good enough; it actually sounds very good.
Maybe Head-Fi's search engine isn't 100%, but the above post seems to be the only mention of aptX HD in this thread...
In any event, I'm quite excited about the potential of aptX HD, although only a handful of devices currently make use of the codec. LG's G5 is perhaps the only aptX HD-capable smartphone, and there are fewer than a handful of headphones...
But nevertheless I am optimistic. I'm off to a good start despite the limited hardware, with A&K's XB10 Bluetooth headphone "dongle" as well as an AK3xx DAP, both of which were recently updated from aptX to aptX HD. I'm hoping to do away with the dongle if and when more aptX HD headphones hit the market. B&W's P7 or V-MODA's Crossfade Wireless could perhaps benefit from the additional codec.
A&K made a simple but nice comparison table of SBC, aptX, and aptX HD. The most interesting aspect is the compression ratio (SBC -> 1/20; aptX and aptX HD -> 1/4). The next noteworthy point is the transfer rate (SBC @ 328 kbps < aptX @ 384 kbps < aptX HD @ 576 kbps).
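As a quick sanity check, those quoted bitrates can be compared against raw PCM. This is just a sketch under stated assumptions: 16-bit/44.1 kHz stereo as the SBC/aptX source and 24-bit/48 kHz stereo for aptX HD; A&K's 1/20 figure for SBC presumably uses a different reference.

```python
def pcm_kbps(rate_hz: int, bits: int, channels: int = 2) -> float:
    # Raw (uncompressed) PCM bitrate in kbps.
    return rate_hz * bits * channels / 1000

cd = pcm_kbps(44_100, 16)      # 1411.2 kbps, CD-quality stereo
hires = pcm_kbps(48_000, 24)   # 2304.0 kbps, assumed aptX HD source

for name, codec_kbps, source in [("SBC", 328, cd),
                                 ("aptX", 384, cd),
                                 ("aptX HD", 576, hires)]:
    print(f"{name}: {codec_kbps} kbps, ~1/{source / codec_kbps:.1f} of raw PCM")
```

Against these source rates, aptX HD's 576 kbps works out to exactly the 1/4 ratio A&K quotes.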
Unfortunately I have less understanding of AAC/AAC+, but these do not appear to offer higher rates than aptX or aptX HD... Of course, I am more than willing to hear from those who prefer AAC codecs, to better understand what the advantages, if any, are.
I don't buy into 24-bit, as one can't find any recordings that exceed 16 bits, or even come close to 16-bit DR. One would be hard pressed to find a DAC implementation that can deliver 24 bits of DR; 21 or 22 bits is more like it. So IMO the HD thing may be a bit overrated. Next, what is the spec for SBC compression at Bluetooth 4.0 or 4.1 under good conditions, and how does EDR come into play?
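To put those bit-depth figures in dB terms: the theoretical dynamic range of an ideal N-bit quantizer is roughly 6.02·N + 1.76 dB (the standard SNR figure for a full-scale sine). A quick sketch:

```python
def dynamic_range_db(bits: int) -> float:
    # Theoretical SNR of an ideal quantizer driven by a full-scale sine:
    # ~6.02 dB per bit plus a 1.76 dB constant.
    return 6.02 * bits + 1.76

for bits in (16, 21, 22, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB")
```

By this formula 16-bit already gives ~98 dB, and the 21-22 bit ceiling mentioned above corresponds to roughly 128-134 dB of actual analog performance, versus ~146 dB for a hypothetical true 24-bit chain.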
Sorry if I misunderstood, but wouldn't most recordings from the '80s and beyond be mastered on tape rather than digitally? Those could therefore still be ADC-ed at beyond 16 bits, hence the likes of HDTracks and DSD/SACD reissues, primarily of old records?
Anything can be encoded with big numbers; that doesn't mean it will do any good. Analog recordings are below 16-bit "resolution," so why bother throwing them into bigger digital buckets? Like Stan says, there is no recording out there that even approaches 16 bits of DR; using more bit depth will just increase the file size, nothing more.
It's like scanning old camera film, in a way: we get good scanning ability, but what we record is the grain of the old film and whatever effect time has had on it. For old tapes, there are pretty much two situations, and usually a mix of both:
- the tapes were copied often enough to avoid degrading too much over the years, but generation loss kills close to a bit each time with added noise (does anybody know for sure how much we lose with tape-to-tape copies?);
- old tapes have been stored for many years, and the likely damage probably more than surpasses a few bits of loss.
So it's not easy to get great old stuff in practice (sadly). And that's not accounting for the remastering some do every X years purely to renew copyrights.
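As one very rough model for the generation-loss question above: if each analog copy adds an independent noise contribution comparable to the tape's own noise floor, the noise powers sum, so the first copy costs about 3 dB (~half a bit), with diminishing losses after that. If the copy deck is noisier than the master, the per-generation loss grows toward the "close to a bit" figure. A toy sketch under that equal-noise assumption (the 70 dB master SNR is a hypothetical figure):

```python
import math

TAPE_SNR_DB = 70.0  # assumed SNR of the master tape (hypothetical)

def snr_after_copies(n_copies: int, snr_db: float = TAPE_SNR_DB) -> float:
    # Each copy adds independent noise equal to the master's noise floor;
    # independent noise powers add, so n copies multiply noise by (n + 1).
    return snr_db - 10 * math.log10(n_copies + 1)

for n in (1, 2, 4, 8):
    loss_db = TAPE_SNR_DB - snr_after_copies(n)
    print(f"{n} copies: -{loss_db:.1f} dB (~{loss_db / 6.02:.1f} bits lost)")
```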
Most modern studio recordings are saddled with compressors and limiters. Analog kit has noise, analog tapes have noise, live situations have gobs of ambient noise; noise is everywhere. How many recordings that people actually care to listen to can be identified as having such a large DR? The old stuff has gobs of noise baked into the recordings, so I wouldn't get my hopes up too high.
I don't quite understand what you mean by no "24-bit-DR-capable DAC." There are many DACs out there that (at least claim to) do 24 bits or higher. The specs on my Mojo go to 32 bits...
As for ADCs... I'm no expert and can only ask why some sites / studios / ... claim 24- to 32-bit ADC processing?
For instance, I have a copy of a test recording made by JAPRS, and the recording conditions are documented here. Most notable (to an amateur like me):
"...files were prepared by capturing an identical analog output signal of API console of CR-506 studio of NHK (Nippon Hoso Kyokai: Japan Broadcasting Corporation) with six DAWs, Avid ProTools for PCM 48kHz, 96kHz, 192 kHz and Merging Pyramix/Horus for 384kHz/32bit, DSD 5.6MHz, DSD 11.2MHz simultaneously..."
This suggests 32-bit, 192 kHz capture,
and the other goodies listed are beyond my comprehension, but FWIW:
But anyway, I have no issue admitting this stuff is way over my head. My only interest is getting the best possible sound from currently available (and affordable) technology.
If 24-bit (re)production means nothing more than better imaging (even if it doesn't increase DR), it's still worth checking out, IMHO...
In terms of real-world media availability, OK, that's an (even steeper) uphill battle. For example, I purchased Davis' Kind of Blue in 24-bit/96 kHz format without any background on where it was sourced (the assumption being, of course, the original analog masters) or the equipment/studio used to convert it. It sounds better than my own CD rips (to uncompressed WAV files), but that's no proof of hi-res' authority, because maybe my CD was made with a low-quality ADC, etc. I think the above JAPRS "exercise" is as close as anyone will get to a fair comparison.
There is a difference between the ability to generate 24-bit files and actually resolving 24 bits. Also, in the studio they usually don't sing at 144 dB plus whatever the noise level is in the room, so it's not as if the album could have 24 bits of actual dynamics ^_^.
The best we can do is, in effect, pretty low. And let's not forget the headphones/speakers, which really aren't hi-res compliant. If transducers were significantly better, I'm guessing we might have less of a hard time passing blind tests against some of the lossy codecs like those used for BT.
True playback of 24-bit dynamic range above the ambient noise level would be harmful to a human being; 32 bits might cause a geological event. The noise level in your kit is not compatible with 24 bits of DR, so 32 bits is not in the cards. And of course, where are you going to find recordings you want to listen to that even challenge 16-bit DR? Be mindful of the marketing departments, which have been taken over by the dark side of the Force.
24-bit does not mean better imaging.
That idea doesn't even make sense.
Thanks for your feedback, gents. Clearly I've got the wrong idea, or at best I've oversimplified things. I picture sound as a waveform that has to be digitally represented by "slices," and the more slices, the more accurately we reproduce that waveform.
So, without turning this into a futile attempt to convert me into a Sound Science groupie: what sampling rate and resolution does it take to avoid audible distortion/noise (quantization) while representing the full audible range, along with what some audiophiles denote as less tangible qualities like the "air" around the instruments? I'm just asking for a set of numbers, not the math behind the Nyquist-Shannon theorem.
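For what it's worth, the textbook numbers being asked for are straightforward to compute; a sketch (just Nyquist bandwidth and the ideal quantization noise floor, with no claims about "air"):

```python
def nyquist_khz(sample_rate_hz: int) -> float:
    # The highest frequency a sample rate can represent is half that rate.
    return sample_rate_hz / 2 / 1000

def quantization_snr_db(bits: int) -> float:
    # Noise floor of an ideal quantizer relative to a full-scale sine.
    return 6.02 * bits + 1.76

print(f"44.1 kHz sampling covers up to {nyquist_khz(44_100)} kHz")
print(f"16-bit puts quantization noise ~{quantization_snr_db(16):.0f} dB "
      f"below full scale")
```

In other words, 44.1 kHz/16-bit already spans the commonly quoted 20 Hz-20 kHz audible range, with a noise floor far below that of any real recording.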