
iPhone AAC vs. Aptx and Aptx-hd real world

Discussion in 'Sound Science' started by neil74, Oct 4, 2017.
  1. LajostheHun
    I don't think that aptX HD is fixed at 24/48, and you can also change that in the developer settings in real time while you're using BT.
     
  2. inspectah_deck
    Yes, I know.
    The question is: what does Android's audio mixer do in the middle?
    It has to mix the music from the app together with other apps and all system sounds (notifications, incoming calls, etc.) into one stream, which is then processed by aptX HD and pushed to the headphones.
    The chain should be: music player app -> Android sound mixer -> aptX HD.

    So when I change the sample rate to 44.1 kHz in the music app and in the Bluetooth dev options, but Android uses a native sample rate of 48 kHz (checkable with apps like Audio Buffer Size from the Play Store), it goes like this:
    44.1 -> 48 -> 44.1

    I don't know if it's possible to circumvent the Android audio mixer when using Bluetooth.
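As a side note on the chain above: the 44.1 -> 48 -> 44.1 kHz double conversion can be illustrated with a quick sketch (my own illustration in Python/NumPy, not from the thread). It uses naive linear interpolation, which is much cruder than a real polyphase resampler, but it shows the relevant point: the round trip through a second sample rate is not bit-exact.

```python
import numpy as np

# Crude stand-in for a resampler: linear interpolation between samples.
# Real resamplers use polyphase filters; this only demonstrates that
# 44.1 -> 48 -> 44.1 kHz is not a lossless round trip.
def resample_linear(x, sr_in, sr_out):
    n_out = int(len(x) * sr_out / sr_in)
    t_in = np.arange(len(x)) / sr_in
    t_out = np.arange(n_out) / sr_out
    return np.interp(t_out, t_in, x)

sr = 44100
t = np.arange(sr) / sr                    # 1 second of audio
x = np.sin(2 * np.pi * 1000 * t)          # 1 kHz test tone

# The chain described above: app (44.1) -> mixer (48) -> codec (44.1)
y = resample_linear(resample_linear(x, 44100, 48000), 48000, 44100)
err = float(np.max(np.abs(x - y)))
print(f"peak round-trip error: {err:.2e}")  # nonzero: the chain is lossy
```

With a proper windowed-sinc resampler the error would be far smaller, but still nonzero, which is why avoiding the extra conversion is preferable.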
     
  3. LajostheHun
    Not without rooting.
     
  4. inspectah_deck
    My phone is rooted, can you share a link?
     
  5. PiSkyHiFi
    aptX HD is capable of being switched down from 48/24 to 44.1/24. I know that much because Neutron player can target the output to aptX HD according to the source sample rate and send it compressed at 576 kbps. I see the Bluetooth sample rate as 44.1/24 most of the time, thanks to Radsone's ES100 app, which reports this for their device.

    AAC is a great codec, with a much better quality algorithm than aptX. Unfortunately, aptX has been overhyped; it is quite clearly a low-efficiency codec compared to AAC, because it is mostly designed for low latency and cheap hardware.

    That being said, aptX HD is a different kettle of fish. It is superior to Bluetooth AAC not because of the algorithms, but because of the 24-bit dynamic range in encoding/decoding and the 576 kbps data rate, which for a dynamic range that large ensures the quality improvement over aptX at 352 kbps is almost linear.
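For reference on the bit-depth side of this claim, the theoretical quantization dynamic range of N-bit PCM is roughly 6.02 x N + 1.76 dB (full-scale sine versus quantization noise). A quick sketch of the numbers (my own calculation, not from the post):

```python
# Theoretical dynamic range of N-bit PCM:
# DR ≈ 6.02 * N + 1.76 dB (full-scale sine vs. quantization noise floor)

def dynamic_range_db(bits: int) -> float:
    """Approximate dynamic range in dB for a given PCM bit depth."""
    return 6.02 * bits + 1.76

print(f"16-bit: {dynamic_range_db(16):.1f} dB")  # ~98.1 dB
print(f"24-bit: {dynamic_range_db(24):.1f} dB")  # ~146.2 dB
```

Whether that extra theoretical range matters after lossy encoding is exactly what the rest of this thread argues about.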

    That also puts it above 16-bit AAC at 256 kbps. I couldn't hear the difference on the ES100, but honestly I think that's down to the DAC and amp on that device being pretty good, yet not good enough to reveal the details necessary to pick the difference (it's a great portable device, really).

    Not many people here have mentioned the chain they're using to try to discern the differences between codecs. I think that would be essential, since I wasn't able to hear compression artifacts until my equipment chain was able to reveal them.

    The first time I heard the difference between AAC at 320 kbps fixed rate and FLAC was when I combined a decent DAC (ESS 9018K2M in an M8 desktop DAC) with a decent desktop amp (Xduoo TA-02 with S/N 110 dB) and a very good pair of cans (Beyerdynamic T1).
    It was extremely subtle, but the differences were in the interpretation of the shape of the atmosphere in the source: if the source could reveal a rough room size and shape, then the compression changed that feeling noticeably for me, in recordings that were good enough.
    Spatial positioning was also affected, but I wouldn't get this right every time; it was very subtle.

    I'm quite happy with aptX HD, as it finally has a high enough data rate that the only way I could possibly tell the difference between it and uncompressed would be if I spent $10,000 on equipment to reproduce an analog stage that would reveal such tiny differences, and even then, I might not pick it at all.

    AAC is a great codec, but even allowing for an efficiency improvement of perhaps 20% to 30% over aptX at the same data rate, aptX HD is maybe around 30% more accurate, and it's low latency as well.

    What happens to MP3s over Bluetooth? They get worse than they already are, of course, no matter which codec is used; another reason to favor a bit of headroom in lossy quality.
     
  6. Monstieur
    Dynamic range is not applicable to lossy codecs like AAC. The receiver can decode to whatever PCM bit depth it chooses, including 32-bit floating point, and for playback 16-bit is more than sufficient. AAC is transparent at 256 kbps, and no meaningful improvement can be made other than lowering the latency.

    Given the above, aptX HD is nothing but a gimmick to extract royalties on a solution to a problem that doesn't exist. It would have been an improvement if aptX HD mandated aptX Low Latency on devices, but it doesn't. It's still unusable for games / movies due to high latency.

    Your anecdotes strongly look like placebo, or your testing method was flawed.
     
    Last edited: Jun 17, 2018
  7. PiSkyHiFi
    If you're having difficulties with the math, I can help, just read my post again and do the math.
     
  8. Monstieur
    Your calculations aren't valid. 24-bit is not better than 16-bit for playback, and the concept is not even used in lossy compression, so the bit rate has no effect on it. Resampling 48 kHz to 44.1 kHz is also imperceptible, as both rates satisfy the Nyquist criterion for the audible range.
     
    Last edited: Jun 17, 2018
  9. PiSkyHiFi
    I think you just shot yourself in the foot. 24-bit isn't better than 16-bit?? Especially regarding lossy at these rates: the conversion of the source to the frequency domain with floating point, compression, then decompression back to discrete samples. This is just an absurd statement.

    Let me guess, you're scared of higher bandwidth too.
     
  10. Brooko Contributor
    Suggest you might want to do some reading about bit depth yourself - and I mean this trying to be helpful rather than condescending.

    This is a good starting point : https://www.head-fi.org/threads/24bit-vs-16bit-the-myth-exploded.415361/

    What you're essentially talking about is dynamic range - and 16-bit (for playback) already captures everything we can possibly hear. For recording it's a different story, and that is all about noise floor.

    As to your other comments earlier about compression: it doesn't affect the perception of soundstage (a common fallacy). It would produce audible artifacts instead, if it were audible at all.
     
  11. Monstieur
    There is no "bit-depth" in AAC or MP3. The source PCM is read at whatever bit-depth it is (16/24/32-bit) and compressed. Bit-depth is not applicable to the compressed data at all. The decoder can then decode it to whatever bit-depth it chooses (including 24/32-bit), but there is no sense in decoding to anything higher than 16-bit for playback.

    If you want to preserve the bit-depth of the output samples accurately, you should not use a lossy codec.
     
    Last edited: Jun 17, 2018
  12. PiSkyHiFi
    Completely condescending and inaccurate... even the stuff about dithering adding random noise is BS. Dithering can be done a number of ways; random is not the best algorithm, cautious error diffusion is better, and it should be the final step. So correct the opinion on Head-Fi to reflect the math, and then I'll pay attention to the rest of it.
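To make the dithering point above concrete, here is a minimal sketch (my own illustration in Python/NumPy, not from either poster) of quantizing a quiet signal to 16-bit with and without TPDF (triangular) dither, one common random-dither approach; error-diffusion / noise-shaped dither, as mentioned above, is a different family of techniques.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize16(x, dither=False):
    """Quantize a float signal in [-1, 1] to 16-bit steps."""
    scale = 2 ** 15
    if dither:
        # TPDF dither: sum of two uniform ±0.5 LSB sources, added
        # before rounding to decorrelate the error from the signal.
        x = x + (rng.uniform(-0.5, 0.5, x.shape) +
                 rng.uniform(-0.5, 0.5, x.shape)) / scale
    return np.round(x * scale) / scale

t = np.arange(44100) / 44100
x = 1e-4 * np.sin(2 * np.pi * 1000 * t)   # very quiet 1 kHz tone (~3 LSB)

undithered = quantize16(x) - x
dithered = quantize16(x, dither=True) - x
und_rms = float(np.sqrt(np.mean(undithered ** 2)))
dit_rms = float(np.sqrt(np.mean(dithered ** 2)))
print(f"undithered error RMS: {und_rms:.2e}")
print(f"dithered error RMS:   {dit_rms:.2e}")
```

The dithered version has slightly higher error RMS, but that error is uncorrelated noise rather than harmonic distortion of the signal, which is the whole point of dithering.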

    Can I hear the benefits of 24-bit over 16-bit? Not easily, and mostly no... but just as you said yourself that you use 24-bit when mixing, for the same reasons you should also use higher than 16-bit whenever conversion of any kind takes place, like in a lossy transmission protocol, for example (!?)

    I'm happy with Red Book 44.1/16 - I can't ABX anything better - but this is still lossy, and until Bluetooth bandwidth can accommodate lossless with room for signal-strength dips, aptX HD is most definitely a step up.

    Absolutely, like 1+1 is 2.
     
  13. Monstieur
    16-bit does not even require dithering - that's how high its dynamic range is.

    You keep going back to "bit-depth" and lossy compression. There is no bit-depth in lossy compression. Regardless of what the source PCM bit-depth was, the compressed data can be considered floating point. You can decompress it to whatever bit-depth you want. There is no sense in decompressing it back to the same bit-depth as the source because it's not lossless and the amplitude of the samples would have changed. The "accuracy" of the samples has already been lost due to lossy compression. As for dynamic range, 16-bit is sufficient for playback.

    You cannot preserve bit-depth before and after lossy compression. Even if the output is 32-bit floating point, the samples are less accurate than the input.
     
    Last edited: Jun 17, 2018
  14. Monstieur
    In these conversions (manipulating audio in a DAW), the goal is to preserve the accuracy of the samples. A higher bit depth increases the accuracy of the transformations.

    In lossy compression, you're throwing away the bit-depth completely. It's completely different from applying filters in a DAW.
     
  15. PiSkyHiFi
    Utterly absurd.
    Let's suppose you have a 44.1/16 signal to start with.
    You don't throw away anything by converting to FP and then doing a frequency analysis. If you stop right there, it's completely reversible (best use 64-bit FP to make sure). Math.
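The reversibility claim above is easy to check numerically. A sketch (my own, in Python/NumPy): a 64-bit-float FFT round trip reconstructs the signal to within floating-point rounding; the loss in a lossy codec comes from quantizing or discarding frequency-domain coefficients afterwards, not from the transform itself.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 4096)     # arbitrary full-scale signal

X = np.fft.rfft(x)                   # to the frequency domain (complex128)
y = np.fft.irfft(X, len(x))          # and back again

max_err = float(np.max(np.abs(x - y)))
print(f"max round-trip error: {max_err:.2e}")   # on the order of 1e-16
```

So the transform stage is effectively lossless; everything audible that a codec discards happens in the quantization step that follows it.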

    Once we apply compression, every slight increase in data rate improves the likelihood of representing the original data more faithfully, until it does represent the data exactly or reaches a codec limit. Math.

    I found this absurd point in another thread about bandwidth and representation accuracy.

    What should I do from here? Point out that you can't decrease the accuracy of a representation by increasing bandwidth?

    I honestly don't know why you can't see it.
     
