Quote:
Originally Posted by HeadLover
Say, I have heard many times, even from some "Gurus" talking about the "Loudness War", that when we all move to 24/96 it won't be a problem, because there will be much more "space" for the dynamics, so even if the sound is more compressed, it will still be good.
What do you think about that claim?
I'm afraid you've got this the wrong way around. The loudness war is caused by over-compression or limiting. BTW, "compression" as an audio term is not related to data compression but to the reduction of dynamic range; limiting is just an extreme version of compression. Compression works by reducing the peaks in the waveform, allowing the overall level to be raised. In some genres of music the dynamic range is reduced to just a few decibels. As bit depth only encodes dynamic range, the more compressed the music, the fewer digital bits are required. So, if you've got a piece of music with quite a wide dynamic range, let's say 36dB, only 6 bits are required to encode it. This means that on a CD you would be getting 6 bits' worth of music and 10 bits' worth of noise. With 24bit you are still only going to get 6 bits of music, but now you have 18 bits of noise. In other words, using hi-rez 24/96 will not make the slightest difference whatsoever.
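The arithmetic above is easy to check for yourself: each bit of resolution buys roughly 6dB of dynamic range (20·log10(2) ≈ 6.02dB). A quick sketch in Python (the function name is mine, purely for illustration):

```python
import math

DB_PER_BIT = 20 * math.log10(2)  # ~6.02 dB of dynamic range per bit

def bits_needed(dynamic_range_db):
    """Smallest whole number of bits that can encode the given dynamic range."""
    return math.ceil(dynamic_range_db / DB_PER_BIT)

print(bits_needed(36))            # 6 bits covers 36 dB of programme dynamics
print(round(16 * DB_PER_BIT, 1))  # 96.3 dB: theoretical range of 16 bit
print(round(24 * DB_PER_BIT, 1))  # 144.5 dB: theoretical range of 24 bit
```

So whether the container is 16 or 24 bits, a heavily compressed track only ever occupies the top few bits; the rest just encodes the noise floor.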
As I mentioned in an earlier post, compression is an invaluable tool during production. It is the overuse of compression which is the problem. This loudness war is not related to the format. In fact, if digital had not been invented and we all still used vinyl, we would likely still be in the same position with the loudness war as we are now.
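To make the mechanism concrete, here is a minimal sketch of a static compression curve in plain Python: everything above a threshold is reduced by a ratio, then make-up gain raises the whole signal back to the original peak. The threshold, ratio, and toy signal are made-up illustration values, not any real mastering settings:

```python
import math

def compress(sample, threshold=0.25, ratio=4.0):
    """Static compression curve: level above the threshold is reduced by 'ratio'."""
    mag = abs(sample)
    if mag <= threshold:
        return sample
    return math.copysign(threshold + (mag - threshold) / ratio, sample)

# Toy signal: a quiet 440 Hz passage (peak 0.1) followed by a loud burst (peak 0.9)
quiet = [0.1 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(441)]
loud = [0.9 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(441)]
signal = quiet + loud

compressed = [compress(s) for s in signal]

# Make-up gain: bring the reduced peaks back up to the original peak level
makeup = max(abs(s) for s in signal) / max(abs(s) for s in compressed)
louder = [s * makeup for s in compressed]
```

With these numbers the quiet passage rises from a peak of 0.1 to roughly 0.22 while the loudest peak stays at 0.9: the track sounds louder overall, and its 19dB of dynamics has shrunk to about 12dB. That is the loudness war in two lists.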
Quote:
Originally Posted by leeperry
all they say is that higher fidelity can be achieved by increasing either the sampling frequency, the resolution, or both.
This statement is true, up to a point. 44.1kFs/s is definitely better than 22kFs/s and 16bit is definitely better than 8bit. But once we get to 16bit 44.1kFs/s, we have reached the limits of analogue equipment to reproduce it and the limits of a human being to hear it. If you see anyone (or any company) claiming that 24/96 is definitely an improvement in quality or fidelity, that should set your alarm bells ringing: the company either does not understand how digital audio works or is deliberately trying to mislead you. As I mentioned before, there is no reliable proof that anyone can tell the difference between 44.1/16 and 24/96 at anywhere near normal listening levels. So, at best, companies are making claims which they cannot prove.
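The sampling-rate half of this is simple Nyquist arithmetic. A rough check, assuming the usual ~20kHz upper limit of adult human hearing:

```python
HEARING_LIMIT_HZ = 20_000  # approximate upper limit of adult human hearing

for rate in (44_100, 96_000):
    nyquist = rate / 2  # Nyquist limit: highest frequency the rate can capture
    headroom = nyquist - HEARING_LIMIT_HZ
    print(f"{rate} samples/s -> {nyquist:.0f} Hz ({headroom:.0f} Hz above hearing)")
```

Both rates already cover the entire audible band; everything the move from 44.1k to 96k adds (22.05kHz up to 48kHz) is ultrasonic, which is why the extra samples buy no audible fidelity.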
You have to realise, for audio equipment manufacturers and retailers, 24/96 provides the best marketing opportunity for years. It is not easy to continue to sell the same old specification equipment. Products that have been on the market for a while eventually get discounted and the profit margins are lower. It's much better and easier if you've got a new standard and therefore a whole new range of equipment you can sell at a higher price with a bigger profit margin. The fact that this new standard does not provide the slightest improvement is not really important. The retailers just want to sell more units for higher prices and if they can entice existing customers (to the new standard) as well as new ones, how much better can it get? This cycle has pretty much been the trend over the whole history of consumer audio equipment. In some areas of consumer audio, escalating claims and prices have been going on for so long that the claims and cost of some equipment have completely lost touch with reality.
I only really see this problem getting worse. Digital audio technology has already reached (and exceeded) the limits of human perception, so all that is left for the manufacturers and retailers is to continue developing products which exceed these limits by ever more ridiculous and superfluous amounts.
G