Why 24 bit audio and anything over 48k is not only worthless, but bad for music.
Dec 3, 2015 at 9:41 AM Post #1,561 of 3,525
  (My personal quibble with their test is that the performance of DACs varies considerably, and has progressed since 2007, so I'm not convinced that the disc players they used as sources were "nominally good enough" to be ruled out as a limiting factor. Perhaps, if the content they used were played back on a higher-quality DAC, there would have been details present which would have been audibly lost later in the signal chain. Likewise, while the Quads and Snells are what I would consider to be "very good speakers", I can't rule out the possibility that some of the many other speakers out there might do a better job of making some difference audible. I'm also inclined to feel that, when listening for subtle details, headphones do a better job of revealing differences than loudspeakers - yet they failed to include any headphone listening. This seems like a significant omission - and one that would have been easy to remedy. It occurs to me that they - quite reasonably - were only trying to prove the "typical case" - which they did pretty well.)  
 

 
DACs used in consumer digital players have been tested with DBTs and shown to be sonically transparent ever since the second generation of CD players in the mid-1980s:
 
  "Masters, Ian G. and Clark, D. L., "Do All CD Players Sound the Same?", Stereo Review, pp.50-57 (January 1986)
 
Thus the argument that DAC technical progress since 2007 modifies the relevance of this article ("Audibility of a CD-Standard A/D/A Loop Inserted into High-Resolution Audio Playback", E. Brad Meyer and David R. Moran, JAES 55(9), September 2007) is false, and was probably made based on an incomplete understanding of the performance of modern digital audio gear in general.
 
Dec 3, 2015 at 9:45 AM Post #1,562 of 3,525
the comments section discussion of ABX in your link earlier was odd
 
while not having explored the experimental design literature to that level of detail, in my own use of the foobar2000 ABX plugin it seemed natural to switch A/B to try to learn to discriminate, then A/X, B/X trying to decide same/different, often returning to A/B over several cycles when the difference wasn't obvious
 
 
another fun point is how audiophile gurus disagree - Schiit's "megaburrito filter", from the few hints given, uses a narrower transition band, contrary to many others' recommendations - but we are assured that it is the latest, greatest
 
Dec 3, 2015 at 9:54 AM Post #1,563 of 3,525
  the comments section discussion of ABX in your link earlier was odd
 
while not having explored the experimental design literature to that level of detail, in my own use of the foobar2000 ABX plugin it seemed natural to switch A/B to try to learn to discriminate, then A/X, B/X trying to decide same/different, often returning to A/B over several cycles when the difference wasn't obvious
 
 
another fun point is how audiophile gurus disagree - Schiit's "megaburrito filter", from the few hints given, uses a narrower transition band, contrary to many others' recommendations - but we are assured that it is the latest, greatest

 
Without some more complete references, I can't tell what the above means, or even be sure that it was directed to me.
 
Dec 3, 2015 at 10:06 AM Post #1,564 of 3,525
sorry should have been more specific:
Quote:
...Fact is that people have tried to do DBTs illustrating that in accordance with their vision of science, and the results have fallen well short of the sort of technical success that a good commercial venture requires. Here is a recent example and the debate that it stimulated: The Meridian typical DAC boondoggle
 

I was reading the comments on the JAES article linked there.
 
Dec 3, 2015 at 10:10 AM Post #1,565 of 3,525
   
They really shouldn't be the future, because bits and samples aren't what's wrong with audio today. A return to reasonable mastering and a societal move back towards considering music (and performing arts other than movie acting) important would do more for sound than anything hi-res can offer.

 
I agree with you, but there are several "practical" reasons why it won't happen that way:
 
1) High-res files have a lot of "popular acceptance" - meaning that a lot of people are in fact willing to buy one more remaster of an album they already have if it happens to be in high-res.
 
2) At the very minimum, it will reach a point where stores will be proudly proclaiming that "we'll give you the high-res version for the same price as our competitor charges for an ordinary CD quality file"; and consumers always jump at the idea that they're getting something better for the same price - or even just a little bit more. (It's just another version of "new and improved".)
 
3) If it's from an old master, then re-mastering an album actually requires significant effort and cost; re-converting the same analog master tape at a higher sample rate and bit depth is much easier. And, if the master is digital, then it was almost certainly done at a higher resolution to begin with, so converting it to a higher rate for delivery, or even simply not converting it and selling copies of the master directly, is also trivial.
 
4) Many people maintain that the main target audience for music these days actually LIKES music that's overcompressed and sounds poorly mastered - because it sounds better on cheap $10 ear buds and car radios. Because of this, it's possible that we've lost all hope of "the regular version" of anything sounding good.... and, if so, the best we can hope for is that they'll sell us a separate (and more expensive) "audiophile copy" that sounds better.... and, if they do that, then you can bet that it will ALSO be high-res, because that counts as a selling point.
 
If you look at the high-res albums sold on any popular store - like HDTracks - you'll find that some of them are in fact totally remastered, and many of those sound very good. (I think the Grateful Dead Studio Album set sounds very good - and quite different from the original. There's also a long description of all the things that were done to "restore, repair, and remaster" the original mix tapes.)

What I would be interested to see would be a statistic showing whether albums that are in fact "seriously remastered", and so sound quite different, actually SELL proportionally more copies than ones where the high-res version is indistinguishable from the original. That would give a rough estimate of how many people actually buy the high-res version of an album they already have - because they at least expect an improvement; and how many are simply buying a new album, and choose the high-res version over the regular version because it only costs a little more, and it's "the premium version", but they're not specifically looking for an upgrade from a copy they already have.

(It's a well-known marketing fact, for everything from dish detergent to sports cars, that, if you sell a "regular" and an "extra strength" version of a product, or a "regular" and an "XL" model, many people will choose the more expensive "top" or "middle" version, even if they have no specific reason to think they need it or that it will work better for them..... just because it seems like it "must" be better than the "regular" version somehow. Therefore, it always makes sense to offer a "basic" version and a "premium" version.)
 
Dec 3, 2015 at 10:57 AM Post #1,566 of 3,525
Under what conditions?
 
That test showed that several source samples, played on certain disc players, through certain amplifiers and speakers, weren't audibly changed "when they were passed through an additional CD quality audio loop". The problem there is that you can't generalize those results to "everything, everywhere, for everyone". Perhaps there is some difference that's obvious on 10% of the speakers in the world, and totally inaudible on the other 90%, and none of the speakers they chose happen to fall in that critical 10%. Perhaps there's a difference that's easily audible, but only on an acoustic recording of some particular instrument, which wasn't included in their test sample. Or perhaps there's some specific sound that can occur naturally in music, and that some DACs can faithfully reproduce, but that the DACs in the specific disc player they chose to use cannot - in which case, if it's already missing or altered by their player, you aren't likely to notice additional alteration further down the signal chain.
 
My point is simply that testing a few dozen samples, on three or four players, with fifty or a hundred test subjects, isn't sufficient information to make a generalization. I also simply can't agree with you that "every DAC made since the 1980's is audibly transparent" - because I own quite a few DACs, and several of them sound significantly different than others. In fact, a few of them have multiple filter settings, and even those sound slightly different.
 
I personally suspect that many of the claims made for "things being inaudible" may be over-generalizations.
 
For example, I've heard, over and over, that "THD less than 0.5% is inaudible". However, one day I created a test tone by starting with a steady 50 Hz tone and adding to it a 2 kHz tone (the 40th harmonic of 50 Hz) switching on and off at quarter-second intervals. The level of the 2 kHz "beep" was reduced enough that it was 0.1% of the amplitude of the 50 Hz primary tone. Guess what..... In that specific case, the 0.1% THD was CLEARLY audible as a "beep beep beep" sound...... so I guess 0.1% THD isn't ALWAYS inaudible.
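If anyone wants to try something similar, here's a rough sketch of how such a test file could be generated in Python (numpy plus the standard wave module). The sample rate, length, and output name are arbitrary assumptions for illustration, not the exact settings of the original file:

```python
# Sketch of the test tone described above: a steady 50 Hz tone plus a 2 kHz
# component at 0.1% of its amplitude, gated on and off every quarter second.
import wave
import numpy as np

SR = 44100          # sample rate, Hz (assumed)
DUR = 5.0           # total length, seconds (assumed)
t = np.arange(int(SR * DUR)) / SR

primary = np.sin(2 * np.pi * 50 * t)           # 50 Hz primary tone
beep = 0.001 * np.sin(2 * np.pi * 2000 * t)    # 2 kHz tone at 0.1% amplitude
gate = (np.floor(t / 0.25) % 2 == 0)           # on for 0.25 s, off for 0.25 s
signal = primary + beep * gate

# Scale to 16-bit PCM, leaving a little headroom
pcm = np.int16(signal / np.max(np.abs(signal)) * 32000)

with wave.open("thd_test_tone.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)      # 16-bit samples
    f.setframerate(SR)
    f.writeframes(pcm.tobytes())
```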
 
(The statement that "if the THD of an amplifier is below about .1% it won't be audible" is probably a fair generalization if you limit your "world" to analog linear amplifiers, where it is most unlikely for an amplifier to produce random high-order harmonics without producing much higher levels of low-order harmonics. But it may not be equally valid with digital amplifiers, where some sort of "processing error" might in fact produce just the situation I created with my test file.)
 
As for the test we're discussing.... We KNOW that there will be measurable differences between SACD files and CD quality files (the SACD files will have a wider frequency response, with a noise floor that rises sharply at ultrasonic frequencies, and a different noise spectrum). So, to be reasonable, if you want to test whether those differences are audible, the first thing you must do is to confirm that your test equipment is capable of presenting them to the test subjects to be heard (or not). And, to put it bluntly, saying that "you used high-end consumer equipment with good specs" doesn't rise to the level of "validating your test equipment". You need to show both that the entire signal chain you used was able to reproduce those differences; and you ALSO need to show that the sample content you used contains them. (If you want to see how well test subjects can distinguish shades of red, then you need to use video equipment that can do so for the tests, and you need to use test patterns that contain the proper test signal.)
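As a concrete example of that last point, checking whether a test sample actually contains ultrasonic energy above the CD band is something that can be scripted. This is only a sketch, assuming Python with numpy and the third-party soundfile library; the file name and the 22.05 kHz cutoff are placeholders:

```python
# Sketch: estimate how much of a hi-res test file's energy lies above the
# CD-quality band (~22.05 kHz) before using it in a listening test.
import numpy as np
import soundfile as sf

data, sr = sf.read("hires_sample.flac")     # hypothetical test file
if data.ndim > 1:
    data = data.mean(axis=1)                # fold to mono for a quick check

spectrum = np.abs(np.fft.rfft(data))
freqs = np.fft.rfftfreq(len(data), d=1.0 / sr)

total = np.sum(spectrum ** 2)
ultra = np.sum(spectrum[freqs > 22050.0] ** 2)

print(f"Sample rate: {sr} Hz")
print(f"Fraction of energy above 22.05 kHz: {ultra / total:.2e}")
```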
 
 
Quote:
   
DACs used in consumer digital players have been tested with DBTs and shown to be sonically transparent ever since the second generation of CD players in the mid-1980s:
 
  "Masters, Ian G. and Clark, D. L., "Do All CD Players Sound the Same?", Stereo Review, pp.50-57 (January 1986)
 
Thus the argument that DAC technical progress since 2007 modifies the relevance of this article ("Audibility of a CD-Standard A/D/A Loop Inserted into High-Resolution Audio Playback", E. Brad Meyer and David R. Moran, JAES 55(9), September 2007) is false, and was probably made based on an incomplete understanding of the performance of modern digital audio gear in general.

 
Dec 3, 2015 at 12:11 PM Post #1,567 of 3,525
I've designed many test protocols for situations other than audio, and, especially if you're trying to determine the smallest difference or value the subject can distinguish (maximum sensitivity), it's almost always best to make the test as simple as possible - and to avoid all extraneous activity or thought. For example, if you want to test how well your subjects can distinguish differences in colors, you make colored tiles of different colors, hold them up in pairs next to each other, and simply ask the subject "Do they look like the same color?" (You don't hold up three tiles and ask them which ones look more like which other ones; that would be testing a more complex "function".)
 
With this in mind, if you're simply testing whether something is "audibly different or not", then any form of "full ABX testing" is needlessly complicated.
 
Here's how I would do the test.....
 
First, again to simplify, let's simply refer to our signals as "Reference" and "X".
The subject will have a simple way of selecting either the Reference signal or the X signal to listen to.
(It could be a toggle switch, labelled "Reference" and "X", or a pushbutton that toggles between the choices and an indicator light showing which is currently selected.)
 
The test run will consist of a series of individual tests.
For each test, the test set will be configured so that either X is a copy of the Reference signal, or X is the modified signal it's being compared to.
The test subject will then be allowed to play the test sample, switching between the Reference and X signals as quickly as they like, and as often as they like.
When they have decided whether they think that the X signal is or is not the same as the Reference signal they will report their choice.
(If they aren't sure they should be asked to guess. I'm pretty sure that most subjects will find guessing "yes or no" to be less stressful, and to require less thought, than guessing "which something is most like" if they're uncertain. In order to compare the results we get to those expected by simple guessing, we do require each test subject to complete all tests, and answer all of them, so we want to make that as easy as possible for the subject to do.)
 
By doing it this way we have minimized the requirement for any sort of memory, or for any "cognitive load" associated with deciding upon matches.
We have simply made the question "Did the sound change when you flipped the switch or not?"
 
Note that there SHOULD be a very slight audible tick or pause each time the switch is actuated (this will cover up any slight differences between switching between copies of the same sample, and switching between different samples, which might serve as conscious or unconscious cues as to which is occurring).
 
Obviously, if the signals are really audibly identical, then we would expect results consistent with the subject guessing.
And, if they statistically do better than we would expect from guessing, then that suggests that there are in fact audible differences.
 
Note that this test very specifically determines whether there is ANY audible difference between a Reference signal and a test signal.
It does NOT determine what the difference is, or which signal is better, and does NOT require the test subject to quantify what the difference is.
(This avoids the possibility that the test subject will think they hear a difference, but be "uncomfortable" reporting a difference that they can't quantify.)
(Interestingly, it also "covers" situations where the subject may not be consciously aware of the difference, but it may still bias their choice.)
 
Also note that we STILL need to perform the test with a lot of subjects and a variety of equipment. 
If the test shows that audible differences DO exist, then we have shown both that differences exist AND that our test equipment is able to demonstrate those differences.
However, if the test shows no audible difference, we still can't know for sure if the null result is due to limitations in our equipment, test samples, or even our test population.
Therefore, if we get a null result, the test should be repeated many times with different equipment and conditions to rule out that possibility.
(This could be done in the form of a "challenge", with some sort of prize offered as incentive for vendors or individuals to try it with their own chosen equipment and test samples.)
 
Also note that this protocol could be implemented with VERY primitive (and even passive) equipment.
As long as the levels are matched, it doesn't even require computer control or relays.
Whether the test signal for each test is the Reference signal or X could be set using a manual toggle switch.
(A simple computer program could print out a random list of the necessary settings for each individual test.)
(The results should be reasonably valid as long as the test subject can't see the position of the configuration switch.
 However, an automated system would be better, because it would rule out unconscious information leakage from the operator to the test subject.)
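For what it's worth, the bookkeeping for such a protocol could look something like the sketch below. Everything in it (the trial count, the seed, the example score) is an assumption for illustration, not a finished protocol:

```python
# Bare-bones sketch of the same/different protocol described above.
# For each trial the operator (or an automated switcher) sets X to be either
# a copy of the Reference or the modified signal; the subject answers
# "same" or "different"; at the end we ask how unlikely the score would be
# if the subject were simply guessing.
import random
from math import comb

NUM_TRIALS = 20      # assumed trial count

def make_trial_list(n=NUM_TRIALS, seed=None):
    """Random list of configurations: True = X is actually different."""
    rng = random.Random(seed)
    return [rng.choice([True, False]) for _ in range(n)]

def p_value_at_least(correct, n):
    """One-sided probability of getting >= `correct` of n trials right by guessing."""
    return sum(comb(n, k) for k in range(correct, n + 1)) / 2 ** n

# Example: the printed trial list handed to the operator, and a subject
# who answered 16 of 20 trials correctly.
trials = make_trial_list(seed=42)
print(trials)
print(f"Chance of guessing 16+/20 correctly: {p_value_at_least(16, 20):.4f}")  # ~0.006
```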
 
 
Quote:
   
Without some more complete references, I can't tell what the above means, or even be sure that it was directed to me.

 
  the comments section discussion of ABX in your link earlier was odd
 
while not having explored the experimental design literature to that level of detail, in my own use of the foobar2000 ABX plugin it seemed natural to switch A/B to try to learn to discriminate, then A/X, B/X trying to decide same/different, often returning to A/B over several cycles when the difference wasn't obvious
 
 
another fun point is how audiophile gurus disagree - Schiit's "megaburrito filter", from the few hints given, uses a narrower transition band, contrary to many others' recommendations - but we are assured that it is the latest, greatest

 
Dec 3, 2015 at 3:27 PM Post #1,568 of 3,525
Under what conditions? That test showed that several source samples, played on certain disc players, through certain amplifiers and speakers, weren't audibly changed "when they were passed through an additional CD quality audio loop". The problem there is that you can't generalize those results to "everything, everywhere, for everyone". Perhaps there is some difference that's obvious on 10% of the speakers in the world, and totally inaudible on the other 90%, and none of the speakers they chose happen to fall in that critical 10%. Perhaps there's a difference that's easily audible, but only on an acoustic recording of some particular instrument, which wasn't included in their test sample. Or perhaps there's some specific sound that can occur naturally in music, and that some DACs can faithfully reproduce, but that the DACs in the specific disc player they chose to use cannot - in which case, if it's already missing or altered by their player, you aren't likely to notice additional alteration further down the signal chain.
 

 
Under what conditions? Under every condition that we had the resources to test at the time of the review (1986). There was no cherry-picking of people, equipment, or media, except for maximum sensitivity to small differences, and players that might be expected to sound different or bad. The test conditions were the most sensitive to small differences that were known to be available at the time. These were rather obviously the same goals that were followed 19 years later by Meyer and Moran.
 
Let's compare that to the collected writings of hundreds and thousands of audiophiles, including the esteemed KeithEmo. BTW, is KeithEmo short for Keith@Emotiva (http://emotivalounge.proboards.com/board/57/keiths-corner)? Bias, anybody? :wink:
 
In their evaluations, there appear to be few if any digital music players that sound the same.  If almost all digital music players sound different then finding two players that sound different should be very easy. There should be no reason to test a lot of different players or a lot of listeners because just about every pair we would pick will sound different to just about everybody, if we believe that all those audiophile evaluations were valid.
 
The debating trick that is being employed is an old one - change the question so as to make things as unfairly difficult as possible for your opponent. You have a track record of claiming, without actual proof or evidence except your say-so, that every CD player can reasonably be expected to sound different to every reasonable audiophile, while demanding that I actually test every one, or a statistically significant sample of them, live and in person.
 
If your assertions are correct, then I should be able to do very little testing and, if I find no audible differences, demonstrate that your hypergenerality is seriously flawed and make my point. This has been done many times in private and no differences were found. It was felt that the well-publicized tests in 1986 and 2007 that showed the same results should suffice.
 
Dec 3, 2015 at 3:29 PM Post #1,569 of 3,525
   
I've designed many test protocols for situations other than audio, and, especially if you're trying to determine the smallest difference or value the subject can distinguish (maximum sensitivity), it's almost always best to make the test as simple as possible - and to avoid all extraneous activity or thought. For example, if you want to test how well your subjects can distinguish differences in colors, you make colored tiles of different colors, hold them up in pairs next to each other, and simply ask the subject "Do they look like the same color?" (You don't hold up three tiles and ask them which ones look more like which other ones; that would be testing a more complex "function".) With this in mind, if you're simply testing whether something is "audibly different or not", then any form of "full ABX testing" is needlessly complicated.

 
Unlike you, Keith, my experience designing test protocols, which is very extensive, includes audio testing. Furthermore, in the process of developing audio test protocols I devised ABX, which is surely the most discussed testing procedure in the history of audio.
 
The idea that if you're trying to determine the smallest difference or value the subject can distinguish (maximum sensitivity), it's almost always best to make the test as simple as possible - and to avoid all extraneous activity or thought, is exactly where we started over 30 years ago.
 
Only it wasn't just me; it was a team composed of a number of engineers and scientists, BSs and PhDs, including people who were professionally engaged in scientific research.
 
Our work was reviewed and enthusiastically approved by the well-known research team from the nearby University of Waterloo, Vanderkooy and Lipshitz of AES fame. They saw our ABX box and ran right out and built their own. The rest is history, whether you know it or not. :wink:
 
As you suggest, we started with the classic same/different test and then actually did what it took to make it effective for audio. This was no 2015 retroactive thought experiment simplified for a forum post; this was the real thing, done with real high-level techies, real equipment UUTs (units under test), and real audio systems (many more than just one of each!).
 
ABX was the simplest solution that worked with convincing levels of sensitivity.
 
Dec 3, 2015 at 5:24 PM Post #1,570 of 3,525
In order.....
 
1) They apparently chose three disc players for the test, then almost immediately disqualified one because it made an obviously odd noise at one point, and settled on one of the other two. The speakers they chose were two models that were accepted at the time by most people to be "high end audiophile models". I didn't read about any testing done to determine that they were in fact "the most sensitive equipment to small changes available at the time", nor have I seen any data to suggest either why they should "expect that the equipment shouldn't sound bad or different", or to back up that expectation with actual test data. They didn't test "every high-end disc player available" nor "every audiophile speaker". In short, I see a massive collection of assumptions, and an equally large collection of unknowns. If their goal was to prove that the difference was inaudible with "some typical audiophile equipment of the time", then they achieved their goal; but I don't know if there might have been an obvious difference if they'd used some other disc player, or a different speaker or amplifier, and neither do you. 
 
2) Yes, that's me... (my name on the Emo forums is actually KeithL) and it's hardly a secret.... However, since all of our current DACs play both high-res and standard files just fine, I don't see any particular reason why you would think I'd be biased one way or the other on that one. We don't sell high-res music, and, if you read my posts, you'll find that I'm the last person to suggest that anyone should buy a high-res album release for any reason other than because that particular release happens to sound better - for whatever reason.
3) As for generalities, the fact is that it's very difficult to make valid generalities - because there are so many variables. This is why most actual scientists avoid generalities unless they have a truly massive amount of consistent evidence to back them up. So, if the purpose of that test was to demonstrate that, with typical "audiophile quality equipment", most people couldn't reliably detect a difference between high-res files and CD quality ones, then I have no problem with that... and I agree that it produced results that tend to support that claim. I also have no problem if you want to state that nobody has proven conclusively that there is an audible difference with modern recordings and equipment. (However, neither those statements, nor the results of any test I've seen, proves conclusively and universally that no such difference, audible to any human, using any currently available equipment, exists.... and I can't even see how you could frame a test that would show the same for any possible equipment that might go on sale next year.)
 
4) I never claimed that "every CD player should sound different to every audiophile"; in fact, I've generally avoided making any general claims, because I usually don't have enough data to do so - and I hate to be proven wrong. All I said was that neither you nor anybody else has proven that "all disc players sound audibly identical", or even that most of them do. (I'm even inclined to agree that I personally believe that many of the claims of what people think they hear are based on expectation bias... but that's far from declaring that every single claim that disagrees with what I believe to be the truth "must be false". Even if 90% of them turn out to be wrong, that in no way "proves" that the other 10% aren't right.) You might as well generalize on "how well people can bowl" based on a test of all the members of the Franklin Bowling League, or claim that "humans never live past 110" because nobody you know has a relative that has done so.... which might be very surprising to the occupants of that small town in Russia where people routinely make it past 120.
 
Would the people who took that test in 1986 have heard obvious differences if they'd used Koss electrostatic headphones, or a different brand of speakers, or a different amplifier, or a different DAC? I don't know... and neither do you... because they didn't test those combinations. And, considering the relatively tiny test population of equipment they used I don't see anywhere near enough information to support reliable broad generalizations.
 
Yes, resources are almost always a limiting factor, but that just may mean that the limitation prevents you from collecting enough data to make a viable generalization.
 
And, to be very specific, if they'd first auditioned the top model disc player from the top 20 manufacturers at the time, using a dozen different amplifiers, and the top speaker model from the top twenty speaker vendors, and none of 500 test subjects could detect any difference between any combination of them with their high-res sample files, THEN I would be willing to consider it as a provisional "given" that "all disc players sound the same" and that it was reasonably safe to generalize the results obtained with one of them to the rest. Otherwise, while the test is certainly "interesting", and can reasonably be claimed to support their claims rather than to contradict them, it is hardly "conclusive".... sorry.
 
 
 
Quote:
   
Under what conditions? Under every condition that we had the resources to test at the time of the review (1986). There was no cherry-picking of people, equipment, or media, except for maximum sensitivity to small differences, and players that might be expected to sound different or bad. The test conditions were the most sensitive to small differences that were known to be available at the time. These were rather obviously the same goals that were followed 19 years later by Meyer and Moran.
 
Let's compare that to the collected writings of hundreds and thousands of audiophiles, including the esteemed KeithEmo. BTW, is KeithEmo short for Keith@Emotiva (http://emotivalounge.proboards.com/board/57/keiths-corner)? Bias, anybody? :wink:
 
In their evaluations, there appear to be few if any digital music players that sound the same.  If almost all digital music players sound different then finding two players that sound different should be very easy. There should be no reason to test a lot of different players or a lot of listeners because just about every pair we would pick will sound different to just about everybody, if we believe that all those audiophile evaluations were valid.
 
The debating trick that is being employed is an old one - change the question so as to make things as unfairly difficult as possible for your opponent. You have a track record of claiming, without actual proof or evidence except your say-so, that every CD player can reasonably be expected to sound different to every reasonable audiophile, while demanding that I actually test every one, or a statistically significant sample of them, live and in person.
 
If your assertions are correct, then I should be able to do very little testing and, if I find no audible differences, demonstrate that your hypergenerality is seriously flawed and make my point. This has been done many times in private and no differences were found. It was felt that the well-publicized tests in 1986 and 2007 that showed the same results should suffice.

 
Dec 4, 2015 at 1:13 AM Post #1,571 of 3,525
I did an interesting experiment the other day. While I normally do not get into iTunes, I heard a lot of good stories about the Mastered for iTunes albums. I also noticed that these often appear to be the same mastering as the hi-res HDTracks releases (often released on iTunes in the following week), but in lossy AAC and at a fraction of the cost. Anyway, a couple of mates and I did some ABX tests of the remastered Led Zep III, comparing the hi-res version with the Mastered for iTunes version. None of us got better than 52%. Make of that what you will...
 
Dec 4, 2015 at 9:37 AM Post #1,573 of 3,525
The limitation is in human ears. It doesn't matter how high a frequency you want your stereo to produce and how wide a dynamic range, it all comes down to whether human ears can hear it.

Audiophools love to spend lots of money pushing the decimal point further and further to the left and making the frequencies go higher and higher, but at a certain point, it all becomes moot because only bats can hear it.


It's not about needing more dynamic range or a lower noise floor. It's about capturing the audio. 24/96 captures more of the audio than 16/44.1. Even 24/44.1 captures more of the audio.

With 96 vs 44.1 you don't have to worry about missing samples at the edges of 44.1. You also do capture more of the sound with 24/96 than you do with 16/44.1. Where 24/96 is overkill is on these FM-radio-mastered CDs. Those are the CDs where the DR is compressed with the volume pushed to the point of distorting. With old recordings where the frequency range does not hit 20 kHz and might only hit 17 kHz, 24-bit is useful to get more resolution than 16-bit.

The way I see it, science can say that technically 16/44.1 is enough to capture all the sound. But it's not when you properly listen to a high-res recording.
 
Dec 4, 2015 at 9:54 AM Post #1,574 of 3,525
It's not about needing more dynamic range or a lower noise floor. It's about capturing the audio. 24/96 captures more of the audio than 16/44.1. Even 24/44.1 captures more of the audio.
The way I see it, science can say that technically 16/44.1 is enough to capture all the sound. But it's not when you properly listen to a high-res recording.

 
The only thing 24/44.1 captures more of than 16/44.1 is dynamic range. You do not, repeat not, get finer resolution or anything of the sort. Just plain higher dynamic range.
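To put rough numbers on that (ignoring dither and noise shaping), the theoretical dynamic range of ideal PCM works out to about 6.02 dB per bit, i.e. roughly 96 dB for 16-bit versus roughly 144 dB for 24-bit; a trivial sketch:

```python
# Theoretical dynamic range of ideal PCM: 20*log10(2^bits), ~6.02 dB per bit.
import math

for bits in (16, 24):
    dr = 20 * math.log10(2 ** bits)
    print(f"{bits}-bit: ~{dr:.1f} dB")   # 16-bit: ~96.3 dB, 24-bit: ~144.5 dB
```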
 
Also, if the difference between redbook and hi-res is so glaringly obvious, how come the difference seems to disappear whenever someone tries an actual blind test?
 
Dec 4, 2015 at 9:58 AM Post #1,575 of 3,525
It's not about needing more dynamic range or a lower noise floor. It's about capturing the audio. 24/96 captures more of the audio than 16/44.1. Even 24/44.1 captures more of the audio.

With 96 vs 44.1 you don't have to worry about missing samples at the edges of 44.1. You also do capture more of the sound with 24/96 than you do with 16/44.1. Where 24/96 is overkill is on these FM-radio-mastered CDs. Those are the CDs where the DR is compressed with the volume pushed to the point of distorting. With old recordings where the frequency range does not hit 20 kHz and might only hit 17 kHz, 24-bit is useful to get more resolution than 16-bit.

The way I see it, science can say that technically 16/44.1 is enough to capture all the sound. But it's not when you properly listen to a high-res recording.

 
What are these "missing samples at the edges of 44.1"?
 
The view of 24 bits as capturing more resolution is in fact entirely equivalent to its capturing more dynamic range. Any map of 16-bit values to 24-bit by simple integer multiplication will preserve relative volumes. If we use the mapping f(x) = x, then we end up with the viewpoint of 24 bits as having a lot of extra values above the max of 16-bit but with the same step size, allowing for more dynamic range. If we use the mapping f(x) = 256*x (the common way to convert to 24 bits from 16), then we end up with the viewpoint of 24 bits as having finer gradations between steps. The only difference between the two viewpoints (in a perfect world) is where you'd set your volume, or more accurately, where the robot is setting the volume so you can be far the hell away from this experiment.
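Here's a tiny sketch of that equivalence, with made-up sample values:

```python
# Scaling 16-bit samples into 24-bit containers (f(x) = 256*x) changes the
# absolute step size but preserves every relative level exactly, so no
# "finer resolution" appears from nowhere.
samples_16 = [1000, -2000, 32767, -32768]       # arbitrary 16-bit values
samples_24 = [256 * x for x in samples_16]      # common 16 -> 24 bit padding

ratios_16 = [x / samples_16[0] for x in samples_16]
ratios_24 = [x / samples_24[0] for x in samples_24]
print(ratios_16 == ratios_24)   # True: relative volumes are identical
```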
 
