CHansen
New Head-Fier
- Joined
- Jul 15, 2010
- Posts
- 28
- Likes
- 16
Quote:
When using the foobar script [%__bitspersample% bits] and playing HDCD-encoded 16-bit files or the HDCD CD itself, 24 bits is displayed in the status bar (for me anyway). Do you think this is because of that 1 bit's worth of extra range, and foobar shows 24 bit because it is not capable of displaying 17 or 20, etc.?
Dear 51-50,
Yes, that is exactly the case. There was a period when playback DAC chips could only reach 16 bits. Then I believe it was Denon that released the first 18-bit chip, and that started a "numbers" race: within only a few years, EVERY DAC was claimed to be at least 24 bit, even the least expensive and lowest actual performance chips. The race itself lasted only a few years, and for the last ten years or so one cannot find an audio chip that is not rated at least at 24 bits of resolution.
Today a new "numbers" race is starting, and many DAC chips are claiming that they will produce "32-bit resolution". In one respect this is fairly silly: 32 bits of resolution would imply a dynamic range of over 192 dB, which is absolutely not possible. But on the other hand, it also means the chip can accept up to 32 bits of data from the oversampling digital filter, and it is possible that this would sound very slightly better than the way the same data is presented to a 24-bit DAC. When the data out of the filter is longer than the DAC's word length, it must be reduced to the correct length by truncating, rounding, or dithering. All of these have some audible disadvantages, so reducing the word length only to 32 bits may sound better than reducing it all the way down to 24 bits.
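The arithmetic behind those figures, and the word-length-reduction choices Hansen describes, can be sketched in a few lines of Python. This is purely an illustrative sketch (the function names and the use of 32-bit integer samples are my own assumptions, not anything from an actual DAC's filter):

```python
import math
import random

def dynamic_range_db(bits):
    # Theoretical dynamic range of an N-bit quantizer: 20*log10(2^N),
    # roughly 6.02 dB per bit -- so 32 bits works out to over 192 dB.
    return 20 * math.log10(2 ** bits)

def truncate(sample_32, out_bits):
    # Truncation: drop the low-order bits outright. Cheap, but the
    # discarded error is correlated with the signal, which is one of
    # the "audible disadvantages" mentioned above.
    shift = 32 - out_bits
    return (sample_32 >> shift) << shift

def dither_and_reduce(sample_32, out_bits, rng=random.random):
    # TPDF dither: add triangular noise spanning about +/-1 LSB of the
    # target depth before requantizing, trading correlated distortion
    # for a slightly higher but benign noise floor.
    lsb = 1 << (32 - out_bits)
    noise = int((rng() - rng()) * lsb)
    return ((sample_32 + noise) // lsb) * lsb
```

Running `dynamic_range_db(16)` gives about 96.3 dB, `dynamic_range_db(24)` about 144.5 dB, and `dynamic_range_db(32)` about 192.7 dB, matching the "over 192 dB" figure in the post.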
In addition, the processing inside the software player is done with bytes, and one byte equals 8 bits. So it requires a lot of extra work to display (or process) the bit depth in increments smaller than 8 bits.
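That byte-alignment point is easy to see in code: a 24-bit sample packs into exactly 3 bytes, whereas a depth like 17 or 20 bits does not fill a whole number of bytes, so players typically round up to the next byte-aligned size. A minimal sketch of little-endian signed 24-bit packing (hypothetical helper names, not any player's actual API):

```python
def pack_s24le(samples):
    # Each 24-bit sample fits byte-aligned storage exactly: 3 bytes.
    # Depths like 17 or 20 bits would leave unused bits in the last byte.
    out = bytearray()
    for s in samples:
        out += (s & 0xFFFFFF).to_bytes(3, "little")
    return bytes(out)

def unpack_s24le(data):
    samples = []
    for i in range(0, len(data), 3):
        v = int.from_bytes(data[i:i + 3], "little")
        if v & 0x800000:       # sign-extend from 24 bits to a Python int
            v -= 1 << 24
        samples.append(v)
    return samples
```

Round-tripping a list of samples through these two functions returns the original values, and five 24-bit samples occupy exactly 15 bytes.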
Hope this helps,
Charles Hansen
Ayre Acoustics, Inc.
www.ayre.com