24bit vs 16bit, the myth exploded!
Dec 17, 2014 at 7:38 PM Post #2,056 of 7,175
  If it's transparent at 44.1/16 it doesn't matter how high you go, it's still going to be transparent. No point reinventing wheels. Better to focus on things that actually make sound better.

I don't agree.  This discussion has already been had with tape: is Redbook enough for tape?  So let's say pro tape is your source at 70 dB and you loop in a Redbook codec at 96 dB.  Testing yields transparent.
 
Now you come along with some new system with a 130 dB source; maybe it becomes detectable? It's hard to know the end-to-end, ADC-to-DAC, full-system resolution of the SACDs/DVD-As played (in part because they weren't listed in the paper).  What if they were just upsampled Redbook in reality?
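For reference, the usual numbers fall straight out of the bit depth. A quick Python sketch, using the full-scale-sine convention for the 1.76 dB term; real converters land a few dB under these ideals:
Code:
import math

def ideal_snr_db(bits: int) -> float:
    # ideal quantizer SNR for a full-scale sine: ~6.02 dB per bit + 1.76 dB
    return 20 * math.log10(2 ** bits) + 1.76

for bits in (16, 20, 24):
    print(f"{bits}-bit: ~{ideal_snr_db(bits):.0f} dB")  # 16: ~98, 20: ~122, 24: ~146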
 
So you need to read the paper carefully, because they didn't make sweeping claims.  Their point was that the SACDs and DVD-As tested could just as well have been recorded on Redbook.  And as you can see, I granted their point in my first sentence.  And for that I think it's a good and useful test.
 
Someone simply needs to redo a similar test at 192/24 and publish it, with state-of-the-art recordings made with current best-in-class ADCs and DACs.  If nobody can find a recording that's detectable, that tells you something.  Pono is going to have some (or already has some) 24-bit material - try it.  I can't get at that site.
 
Dec 17, 2014 at 8:33 PM Post #2,057 of 7,175
The problem is that you are looking at numbers and specs as abstract things, not representations of *sound* that people can or can't hear. You are entirely focused on the specs of sound reproduction formats, but you haven't done any research into the specs of audibility thresholds for human hearing. Until you put the numbers into context with what your 100% human ears can actually hear, you will keep chasing down the rabbit hole of "bigger numbers equals better".
 
For instance, do you know the dynamic range that humans can hear *in music*?
 
Dec 17, 2014 at 9:00 PM Post #2,058 of 7,175
  The problem is that you are looking at numbers and specs as abstract things, not representations of *sound* that people can or can't hear. You are entirely focused on the specs of sound reproduction formats, but you haven't done any research into the specs of audibility thresholds for human hearing. Until you put the numbers into context with what your 100% human ears can actually hear, you will keep chasing down the rabbit hole of "bigger numbers equals better".
 
For instance, do you know the dynamic range that humans can hear *in music*?

Is this directed at me?  I have already posted at length that I am well aware that 100 dB SPL is the sound of a jackhammer at 1 m, and that 10 dB SPL is a mosquito in the corner of a sound-proofed room.  So yes, I am well aware that humans are not supposed to be able to hear more than 100 dB without pain, if that is your question. 
 
So if we feed a 130 dB source, and our ears limit us to 100 dB, and we pass it through a parallel 100 dB Redbook codec, the ABX should fail over a large sample and a large number of ear-pairs.  
 
So where is the link to that test?
 
If I could find a convincing study with updated equipment I would run away.  The only thing I have found so far was a guy on AnandTech who in the last couple of years (I think it was last year) did some ABXing of DACs.  It was a great test and I learned things from it, but it didn't fully answer the format question, and that wasn't what he was investigating.  I have seen a number of others, but they were all flawed for the question I was asking, even if they were perfect for the question they were asking.
 
I am mystified (and others are too) as to why this is so hard to answer.
 
Dec 17, 2014 at 9:36 PM Post #2,059 of 7,175
Our ears can only hear about 40-50dB of dynamic range at a time. Any more than that and our ears need a few minutes to adjust to the different volume level. That means that if you are listening to music with peaks normalized in redbook, the whole bottom 50dB or so is completely inaudible. Not only that, the room you are in right now has a noise floor of at least 30dB. Anything in the recording below that is going to be masked by the room tone anyway. Dynamic range in recordings extends *downwards* not upwards. Peak level is peak level no matter what format you are listening to. So if you are playing a high bit depth recording at a volume of 90dB, which is plenty loud, it is going to sound EXACTLY the same as a redbook recording at 90dB, because you are only hearing the top 40-50dB anyway.
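To put rough numbers on it (back of the envelope only; the exact figures depend on your room and your volume setting):
Code:
peak_spl = 90.0           # playback peak, dB SPL ("plenty loud")
room_noise = 30.0         # a quiet room's noise floor, dB SPL
cd_range = 96.0           # ~6 dB per bit x 16 bits

cd_floor = peak_spl - cd_range   # where the 16-bit noise floor lands
print(f"16-bit noise floor: {cd_floor:.0f} dB SPL "
      f"({room_noise - cd_floor:.0f} dB below the room noise)")
# 16-bit noise floor: -6 dB SPL (36 dB below the room noise)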
 
You can go ahead and call for tests, but I don't see why anyone should want to waste their time or money on conducting listening tests on things that are clearly inaudible. You might as well do tests to find out if people can see X rays or feel the earth rotating on its axis.
 
I'm not trying to be argumentative here. I'm just pointing out that human hearing has its limits, and unless you fully understand what humans can and can't hear, you are going to waste a whole lot of time chasing down things that don't make a bit of difference. If you know how hearing works and how sound reproduction works, you can focus on the things that actually make music sound better.
 
If you are just interested in theory, and not practice, feel free to ignore my comments. It's fine to play intellectual games if you enjoy it. But it won't make your stereo sound any better.
 
Dec 18, 2014 at 2:31 AM Post #2,061 of 7,175
Alright, the good and the ugly.  I skipped over bad completely.
 
Back to the clarinet: I marked just 3 seconds, up to that 6th high note.  I've had success there before. I skipped all other sections.
 
I took more time listening to ABXY until I felt sure. Volume was medium to low - I found higher volume fatiguing.  I would say it was equivalent to listening to a live clarinet played medium loud about 6-10 ft away. I'm listening for a certain roughness or natural distortion, like you get playing in a wood-floored room, right at the front end of that note. I'm NOT hearing or listening for noise.  Note the streak of 7 right out of the box.  Note that my other runs on this track are all way below 50%.  If I were really guessing I'd expect to see mean reversion, some time over 50%, and some long streaks of wrong guesses.
 
Cassandra, on the other hand, eluded me yesterday.  I did that one with "look at results" unchecked.  The runs before were all over the place, and whenever I hit 6% I swung back toward 75% pretty quickly.  I'm guessing on Cassandra; I've never really found a note to latch onto. 
 
For test192 (clarinet) the thing that impressed me is that I never had a losing streak of more than 2 (and only 3 of those). Yet I had a winning streak of 7, and the other day I think a 5 and some others.  I really should see losing streaks of 5+ after so many trials if I'm guessing - mean reversion. Fatigue is definitely a problem.  You hear it, then listen more, and your mind switches it around on you: it's there, then it's switched.  But this time I paused and took my time when the switch happened; the longer trials are a battle. I only committed when I felt sure.
 
Conclusion? Hmmm. I'm 80% sure I'm not guessing on Vivaldi and Clarinet.  The effect I hear is very subtle and susceptible to fatigue.  But there is something audible.
 
foo_abx 1.3.4 report
foobar2000 v1.3.6
2014/12/17 21:51:24
File A: C:\Users\Public\Music\HDtracks\Various Artists HDtracks Sampler\HDtracks 2014 Sampler\test192.flac
File B: C:\Users\Public\Music\HDtracks\Various Artists HDtracks Sampler\HDtracks 2014 Sampler\test192_16.flac
21:51:24 : Test started.
21:54:00 : 01/01  50.0%
21:54:44 : 02/02  25.0%
21:55:43 : 03/03  12.5%
21:56:20 : 04/04  6.3%
21:57:19 : 05/05  3.1%
21:58:47 : 06/06  1.6%
22:00:28 : 07/07  0.8%
22:02:40 : 07/08  3.5%
22:09:00 : 07/09  9.0%
22:09:51 : 08/10  5.5%
22:11:48 : 08/11  11.3%
22:14:34 : 09/12  7.3%
22:15:08 : 10/13  4.6%
22:16:53 : 10/14  9.0%
22:17:27 : 10/15  15.1%
22:19:06 : 11/16  10.5%
22:19:44 : 11/17  16.6%
22:21:06 : 12/18  11.9%
22:23:27 : 13/19  8.4%
22:24:13 : 13/20  13.2%
22:24:38 : 14/21  9.5%
22:25:24 : 15/22  6.7%
22:26:35 : 16/23  4.7%
22:28:02 : 16/24  7.6%
22:29:08 : 16/25  11.5%
22:29:35 : 17/26  8.4%
22:30:09 : 18/27  6.1%
22:30:57 : 18/28  9.2%
22:33:43 : 18/29  13.2%
22:34:14 : 19/30  10.0%
22:34:43 : 20/31  7.5%
22:35:29 : 20/32  10.8%
22:36:07 : Test finished.
 ---------- 
Total: 20/32 (10.8%)
 
foo_abx 1.3.4 report
foobar2000 v1.3.6
2014/12/16 21:05:07
File A: C:\Users\Public\Music\HDtracks\Various Artists HDtracks Sampler\HDtracks 2014 Sampler\08-Another Country.flac
File B: C:\Users\Public\Music\HDtracks\Various Artists HDtracks Sampler\HDtracks 2014 Sampler\cassandra16.flac
21:05:07 : Test started.
21:07:24 : 01/01  50.0%
21:08:32 : 01/02  75.0%
21:09:11 : 01/03  87.5%
21:09:50 : 02/04  68.8%
21:10:35 : 02/05  81.3%
21:11:15 : 03/06  65.6%
21:12:00 : 03/07  77.3%
21:12:42 : 04/08  63.7%
21:13:24 : 04/09  74.6%
21:14:14 : 05/10  62.3%
21:14:52 : 05/11  72.6%
21:15:32 : 06/12  61.3%
21:16:14 : 06/13  70.9%
21:16:52 : 07/14  60.5%
21:17:27 : 08/15  50.0%
21:18:02 : 08/16  59.8%
21:18:57 : 09/17  50.0%
21:19:33 : Test finished.
 ---------- 
Total: 9/17 (50.0%)
 
Dec 18, 2014 at 3:44 AM Post #2,063 of 7,175
  10% generally isn't considered conclusive, and HDTracks is known to have audibly different masters for the higher resolution files. Or did you resample it yourself?

Scroll up ... 15 or 20 pages haha.  I resampled the 24 using SoX -16.  Resampling is a must, I've seen a number of 16 v 24 ABXs but as soon as I poked my nose in to learn about the files it became clear they were different masters.  
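For anyone who wants to repeat the conversion a different way, here's a minimal Python sketch of the same idea (plain truncation, no dither; it assumes the soundfile package can read the 192k FLACs and reuses the file names from my ABX log):
Code:
import soundfile as sf

# 24-bit FLAC comes back as int32 with the low 8 bits zero-padded
data, rate = sf.read("test192.flac", dtype="int32")

# keep only the top 16 bits (truncation, no dither), write back at the same rate
data16 = (data >> 16).astype("int16")
sf.write("test192_16.flac", data16, rate, subtype="PCM_16")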
 
My HDtracks sampler didn't give me 16 bit versions. I guess I would have to try a paid version? Someone actually recommended Random Access Memories. 
 
The funny part would be if the HD Sampler is actually just upsampled 16 and that is why I'm having trouble ... or if I do hear something it is an artifact from the 44-192 conversion.  :)
 
But note today I'm having the most success with the Linn track.  I guess I could try their 16 vs my 16 for grins.
 
I agree 10% isn't conclusive, but the overall experience is hard to totally dismiss.  It's particularly the lack of mean reversion on that track that puzzles me.  If I saw 23/30 I would consider that conclusive, but I never have.
 
Dec 18, 2014 at 8:17 AM Post #2,064 of 7,175
  Scroll up ... 15 or 20 pages haha.  I resampled the 24 using SoX -16.  Resampling is a must, I've seen a number of 16 v 24 ABXs but as soon as I poked my nose in to learn about the files it became clear they were different masters.  
 
My HDtracks sampler didn't give me 16 bit versions. I guess I would have to try a paid version? Someone actually recommended Random Access Memories. 
 
The funny part would be if the HD Sampler is actually just upsampled 16 and that is why I'm having trouble ... or if I do hear something it is an artifact from the 44-192 conversion.  :)
 
But note today I'm having the most success with the Linn track.  I guess I could try their 16 vs my 16 for grins.
 
I agree 10% isn't conclusive, but the overall experience is hard to totally dismiss.  It's particularly the lack of mean reversion on that track that puzzles me.  If I saw 23/30 I would consider that conclusive, but I never have.

 
Look back at my spectrogram of the difference file from the clarinet track. With a proper conversion that is all you should be hearing different: noise at around -100dB. Perhaps you should look at such a difference for your files. Also, don't get too caught up in your runs. I made a random draw of 500 Bernoulli trials (p=0.5), and got the following runs:
Code:
        1  2  3  4  5  6  7  8 11
     0 55 34 15 12  4  2  1  0  0
     1 63 34 14  7  1  1  2  1  1
(columns are run lengths; the two rows count runs of 0s and runs of 1s)
 
So if you happen to get the 8 and 11 runs early on, you might think "hey why am I not reverting to the mean", but it's still really just luck.
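If you want to reproduce the draw, here's a minimal Python version (the exact counts will differ with the seed, but long runs keep showing up):
Code:
import random
from itertools import groupby
from collections import Counter

random.seed(1)  # arbitrary seed, just for reproducibility
draws = [random.randint(0, 1) for _ in range(500)]

# tally run lengths separately for 0s and 1s
runs = Counter((val, sum(1 for _ in grp)) for val, grp in groupby(draws))
for (val, length), count in sorted(runs.items()):
    print(f"{count} run(s) of {length} consecutive {val}s")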
 
Dec 18, 2014 at 8:42 AM Post #2,065 of 7,175
 
 
 
For 3), I marked a number of passages (breathiness of clarinet on first long note, quiet piano parts, fade out, etc.), and tried my best to hear a difference in a couple of marks for each sample. I really couldn't latch on to anything, and the difference spectrogram concurs: the only difference is some noise, and that's already at -100dB or so and that's in the noise-shaping region. I went ahead and downloaded Linn's recording of the Poulenc organ concerto, and it's the same thing: just a bit of noise is the difference.

Yup, I tried the "breathiness" of the first clarinet note as well.  At first my mind was sure it sounded more resolved or "real" on the 24, but when I tried to use that to ABX I did horribly.  So I abandoned that pretty quickly.  
 
I don't consider white noise (or even slightly pink noise) to be an issue with 16.  Humans have an amazing ability to tune out random backgrounds; go back and listen to tapes from the '80s compared to CDs.  I never hear or notice noise on Redbook, and I haven't heard it in any testing.  It is too far down.  So I don't consider it valid to find a short gap, mark that, and amplify the heck out of it until you can hear noise, and ABX that to pass.  Yes, I may be Kirk, but even he wouldn't do that.


but the noise floor is, or at least should be, the only difference from changing the bit depth. so by dismissing the noise floor difference, I see it as dismissing the one factual difference in order to look for something that shouldn't exist. I never said you were not hearing something, but you have to admit that your mindset for going at it is strange.
it's always possible that the conversion somehow changed something it shouldn't. then the abx plugin will turn the 2 files into 32-bit PCM tracks for the test, and those 2 PCM streams will then be turned down to whatever you have set on your computer's output (is foobar in 24-bit? are you using direct input? ...). it's all a matter of adding zeros and then cutting them out; it shouldn't change the values of the music samples themselves, as they are well above the 16th bit in value. but still, that's a lot of ups and downs for a file, and maybe somehow, somewhere, something goes wrong for one of them? but even if that happens, how can you attribute the difference to the track being 16-bit, or even to the first conversion to 16-bit?
by turning the 16-bit back into 24-bit you add yet another change, but at least you know the computer will treat both files the same way. that's why, even though it should not change a thing, most of us will do that and put the files we want to test back into a common resolution/file format - just to be sure we're testing the track and not the computer or the DAC.
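here's a tiny numpy sketch of the "adding zeros" point: pad 16-bit words up to 24 bits and back, and nothing in the audio bits changes (random made-up samples, obviously not your actual track):
Code:
import numpy as np

rng = np.random.default_rng(0)
pcm16 = rng.integers(-2**15, 2**15, size=1_000_000)  # stand-in for decoded 16-bit samples

pcm24 = pcm16 << 8   # pad the word with zeros: 16-bit values placed in 24-bit words
back = pcm24 >> 8    # strip the padding again
print(np.array_equal(pcm16, back))   # True: nothing in the top 16 bits changed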
but when we make suggestions, apparently if it's not RRod you decide you know better. (yeah, I'm jealous.)
same for dithering: if the noise floor isn't audible, why should adding dither matter? CDs are dithered, so it would seem like a more honest comparison.
 
as for us knowing the result in advance and trying to see you fail (or whatever): there have been quite a few tests done before you, a few AES papers, and of course tests we did for ourselves. this isn't really a cutting-edge experiment, and anybody can do it with free and easy-to-use tools. so yeah, we tend to feel like we already know the end of the movie. I really don't see what's wrong with that? people getting positive results are marginal, and usually, when they don't run away insulting us for doubting them, it turns out the files were different, one way or another. you can feel offended when we suspect something was done wrong in your trials if it pleases you, but it's not because we secretly hate you, or because we're all members of the 16-bit lobbying cult. it's because you offer us an unlikely result that goes against what is mostly recognized (by science and engineers, maybe not so much by "audiophiles") for abx at normal listening levels.
 
if you come telling me that you can abx a flac from an mp3 @96 kbps, I will not try to find a reason why you succeeded, because you are expected to. it has nothing to do with egos or knowing better; it's about you saying you can identify 16-bit from 24-bit when pretty much every controlled test has resulted in people being unable to tell 16/44 from any superior resolution, whatever the file format. DVD, DSD, PCM: they all failed to show audible differences, one after the other. and that's why we look for a reason, other than the track being 16-bit, for your better-than-guessing results. maybe RRod can send you his converted file, or you send him yours (a short sample, else the copyright police will strike us all dead), so we can start by making sure the conversion went ok? that would be one less possible bias in the way.

 
mea culpa. I had at least one other big-ass trial in my head (and a few small 2- or 3-person things) and was pretty confident; it so happens that it was the stuff about the Stradivarius that I seem to have mixed up somehow in my sad and poor memory... so there seems to be no other trial done with a big number of participants and exhaustive controls. sorry about that. I hate it when people claim false stuff, so I'm slightly mad at myself right now. but then again, I expect to be wrong from time to time, so I accept the idea readily.
now I still believe there is a general... maybe not consensus, because that never exists in the audiophile world, but close enough, about what bit depth and sample rate do and don't do to a track. and it's usually consistent with measurements of noise and distortion. so the mystery isn't really one.
 
@greenears, I also think that you're still dismissing the one and only logical reason for a difference. maybe I'm on a mistake streak, but to me, without dither, the only thing you do when changing bit depth (assuming it doesn't get low enough to crop the recorded audio signal) is add or remove zeroes at the end of the sample word. so I have a hard time seeing how a piece of software could do that wrong. and even if it did, that would still have zero impact on the first 16 bits of sound, meaning that you'd be noticing something at -96 dB or whatever error we can get from the LSB.
we're really, factually, only changing the voltage values/loudness of the quantization errors (unless the DAC actually uses a really different way to deal with 16 and 24-bit? but that isn't a concern with foobar's abx, as both files end up at the same bit depth). so the very first thing below -96 dB that you'll notice is, and will be, the noise you decided to dismiss (please, anybody, tell me if I'm saying something wrong).
what is your hypothesis, is what I'm asking, I guess?
 
that's only the third time I've mentioned it, but did you try with added dither, to see if it becomes less easy to abx or not? if the noise is involved, shaping it should have an impact (at the same loudness for the tests). there's a small sketch of what I mean at the end of these questions.
did you try turning the 16-bit back into 24-bit, to abx two tracks at the same resolution? that way we'd know for sure it's not the abx plugin failing at one of the simplest tasks ever when going to 32-bit PCM. I really don't believe it could be an issue, but if we're looking for the improbable, why not there?
for the tracks where you have better results, did you check whether the song peaks were far from 0 dB, effectively raising the noise floor? this was already suggested by others, with normalization. it's very obvious that a track with peak loudness at -20 dB will make the noise more likely to be heard than a track that often reaches 0 dB.
 
aside from repeating abx runs on the same basis, which can only really improve the statistical significance if you decide to pool all the results, are you willing to try zeroing in on the possible cause of the difference?
all in all, what are you looking for with those abx runs?
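since I keep bringing up dither, here's a toy numpy sketch of the kind of thing I mean (just a test tone and TPDF dither, nothing like a real mastering chain), to show what it does to the error floor:
Code:
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)   # float test tone, well below full scale

def quantize16(x, dither):
    y = x * 2**15
    if dither:
        # TPDF dither: two uniform draws, +/-1 LSB peak to peak
        y = y + rng.uniform(-0.5, 0.5, y.shape) + rng.uniform(-0.5, 0.5, y.shape)
    return np.round(y) / 2**15

for dither in (False, True):
    err = quantize16(x, dither) - x
    rms_db = 20 * np.log10(np.sqrt(np.mean(err**2)))
    print(f"dither={dither}: error floor ~ {rms_db:.0f} dBFS")
the dithered error sits a few dB higher, but it stops tracking the signal and behaves like plain noise, which is the whole point of dithering.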
 
 
 
  Our ears can only hear about 40-50dB of dynamic range at a time. 

Where is the study/paper that proves that by experiment? It's so hard to find this stuff.

Steve Eddie talked about something like a 100 dB range for sounds with a significant delay between listening to both extremes (don't know if it's, say, 20 and then 120 dB in a normal environment, or in an anechoic chamber), and 60 dB for instantaneous dynamics.
lossy formats considered "transparent" seem to have no differences above -60 dB.


I know I can pass mp3 @192 with only a little effort, so I find the 60 dB idea consistent (I fail mp3 and AAC @320). I also tried adding tones and music at different loudness inside other songs and checking whether I could hear them, and decided that -80 dB was a very, very safe limit for me to stop caring, at my listening volume (obviously the songs weren't all stuck at 0 dB).
 
then there is the fact that we can only go so loud before it's hurtful to us, and that the ambient noise in a room will probably be at least 30 or 40 dB. I don't remember the figure for our own body noise, but I think it was about 10 to 20 dB (really not sure about that one).
and all in all, albums rarely exceed 60 dB anyway (because it's our limit? IDK, chicken-or-egg kind of problem, probably).
 
Dec 18, 2014 at 1:21 PM Post #2,066 of 7,175
  Where is the study/paper that proves that by experiment? It's so hard to find this stuff.


It's easy to find. Just google "human hearing thresholds of perception" and add the particular aspect of sound that you are searching for.
 
Here is info on how the ear adjusts to louder sounds. http://hyperphysics.phy-astr.gsu.edu/hbase/sound/protect.html#c1
 
Dec 18, 2014 at 1:52 PM Post #2,067 of 7,175
@RRod
 
Chance of long runs of success:  
As I said a few pages back, I'm well familiar with the binomial distribution, and I know that with an increasing number of trials the chance of seeing any given run length increases asymptotically towards 100%.  But my total runs on test192 are maybe high double digits, and I gravitate towards 10%, hitting 5% and 1% in places.  And the better success (on both tracks) corresponds to tracks where I heard a specific note.  Remember there are other tracks where I couldn't find anything to latch onto, and I did much worse.  To say it could still be chance is always true - anything can always be chance.  But at this point I am starting to believe I may be hearing something.
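As a quick sanity check on the numbers foobar reports, the 20/32 figure drops straight out of the binomial distribution (scipy sketch, nothing more):
Code:
from scipy.stats import binom

# probability of getting at least 20 of 32 trials right by pure guessing (p = 0.5)
p = binom.sf(19, 32, 0.5)
print(f"P(>= 20/32 by chance) ~ {p:.3f}")   # ~0.108, matching foobar's 10.8%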
 
@castleofargh
 
Lack of previous tests:
I'm glad the error of your ways you have learned, my young padawan.  (insert maniacal laugh)  :) But seriously, THIS is the biggest problem.  I (and others) have posted similar questions on various forums of knowledgeable people.  Predictably the same thing always happens: the answers quickly sort themselves into the two camps.  The camps agree on absolutely nothing, EXCEPT that they are both so sure of their position that they consider testing a waste of time and any test contrary to their position must be flawed.  So this IS a Kobayashi Maru situation for the tester (that was the best reply to my whole thread, IMO).  It's exactly like the expectation effect in listening: people who read a test result a while back remember its conclusions the way they want.  If you go back and actually read the test results, they didn't actually test for that conclusion.  I have looked hard and challenged many, and I am coming to believe there is no 24 vs 16 test with proper results posted.  That is a problem for the industry.
 
No dither:
The reason for no dither is simply testing time. I wanted to pass the first one and then make it harder.  But the first one was hard enough. I'm going to take a two week hiatus from testing and posting, then I'll be back and probably try dither.  Frankly I don't think I stand a chance with dither added, but I need to buck up my courage.
 
Noise:
I have said repeatedly, but people refuse to believe me: I am not rejecting any of the noise theories.  I am well versed in the theory of noise; I know it sits down around -100 dB, etc.  I'm not willfully ignoring it - I just don't hear it.  I don't hear it on CDs, I don't hear it on MP3s, and certainly not on any of these HD tracks, even with the volume up.  I'm using headphones, and there is a limit to how loud I am comfortable going.  
 
So what am I hearing?
That's a 64-bit question.  I don't know; I think there is scant research on the very edges of audible quantization error.  I don't know why, but I suspect it's because in communications research on ADCs they experiment with shaping the noise in different ways to get a better bit error rate, which is relatively easy to measure.  Although a lot of the techniques developed for comms get used in audio, it's not the same end game.  For audio, my impression is that most of the research dollars in the 90s went into psychoacoustic models for compression.  To test those, you put 100 average subjects in average or typical consumer listening situations and fiddle with the model until they can't discern a difference.  It's not exactly about finding the boundary under the best uncompressed conditions.  As for the people making high-end equipment, they all need their secret voodoo sauce for marketing, and they have scant interest in funding tests that may prove there is no voodoo and no sauce.
 
So still what am I hearing?
My best guess lies in a misconception that many have about quantization noise.  Please open any standard EE textbook on signals and systems.  The first thing you will read is that quantization error is a non-linear process and cannot be completely analyzed mathematically.  The ~6 dB per bit idea (which is where you get your 100 dB and 60 dB numbers) is an approximation.  I'm not making this up; it says so right there in the textbook.  I saw a U of Waterloo paper that had a good intro summarizing quantization noise; google it.  The analogy is similar to FM and AM radio.  In EE you learn how to completely analyze AM using Fourier and Laplace transforms; every detail can be described by nice equations with precise answers.  Then the next thing you learn is that FM is a non-linear process with no equivalent closed-form equations.  But it sounds better.  This revelation is very frustrating to young padawans, but you get over it after a few weeks.  A few tricks, approximations, and maybe computer simulations are used to analyze FM to a sufficient extent to be able to use it.  Same with quantization noise - and actually it has some similarities to FM, with repeated short spikes throughout the spectrum.
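Here's a quick numpy check of the rule of thumb on a plain test tone (a hypothetical tone, not one of the ABX tracks); it shows where the approximation comes from, while the textbook caveat is about real, moving signals, where the undithered error correlates with the program:
Code:
import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 997 * t)   # full-scale sine; 997 Hz avoids lining up with the sample grid

for bits in (8, 12, 16):
    step = 2.0 ** (1 - bits)                # quantizer step for a [-1, 1] range
    q = np.round(x / step) * step           # uniform quantization, no dither
    snr = 10 * np.log10(np.mean(x**2) / np.mean((x - q)**2))
    print(f"{bits}-bit: measured {snr:.1f} dB vs rule-of-thumb {6.02*bits + 1.76:.1f} dB")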
 
FFT:
You also can't say conclusively that you looked at the FFT and didn't see anything.  While I agree you aren't going to miss some 50 dB spike, there are limitations to the FFT.  Signals move in time; an FFT is a slice in time.  To convert between domains you need a window like Hann or Blackman, and the windows have artifacts.  I think everyone who has worked with this stuff hands-on knows this.  
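If anyone wants to poke at it, here's a rough sketch of the difference-file check with a couple of different windows (same two files as in my ABX log, assuming they line up sample for sample; the absolute dB values depend on the STFT scaling, so only the comparison between windows means much):
Code:
import numpy as np
import soundfile as sf
from scipy.signal import stft

a, fs = sf.read("test192.flac")
b, _ = sf.read("test192_16.flac")
diff = (a - b)[:, 0] if a.ndim > 1 else (a - b)   # first channel of the null file

for window in ("hann", "blackman"):
    f, t, Z = stft(diff, fs=fs, window=window, nperseg=8192)
    peak = 20 * np.log10(np.abs(Z).max() + 1e-30)
    print(f"{window}: loudest bin in the difference ~ {peak:.0f} dB")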
 
Possible Theory:
So remember that a frequency shift and a phase shift are the same thing (while the phase is shifting).  My suspicion is that human hearing is incredibly attuned to minute frequency differences, which make up what we call "tone".  Note how well we pick out people's voices, or a Stradivarius, or a Gibson.  I'm sure you can pick out Mick Jagger or Bono or Bruce Springsteen in the first syllable.  No tones are pure; they all have distortion, and we can pick out the slight differences in the higher-order harmonics.  It may be that at some resonant frequencies the quantization introduces just enough frequency shift that you can detect it, or mucks with the relative amplitude of certain harmonics.  
 
Conclusions:
Well, as I said, I'm taking a 2-week hiatus, but I'll be back with my phasers fully energized to take one last dig at this. I only found one poster who did similar tests, and he also inspired me to try.  His results seem to match mine.  He also said it was very, very hard to discern; he could only detect it on one of his headphones, and on the other it was pure guessing. I may be limited by equipment.  I've also thought about other possibilities.  The first is that the sample tracks weren't recorded in 24-bit but were upsampled from 16. That would account for a fail but not a pass, unless I'm hearing artifacts of the upsampling.  Since I am well familiar with upsampling algorithms, I think that is even less likely to be audible than 16 bits, but not impossible. The other possibility is simply a minor bug or flaw in the 16-to-24 upsampling (if they did that) - also very remotely unlikely, but probably more likely than the other theories.  Or a bug or resonance point in the multi-segment converter in the DAC that is triggered by either the 16 or the 24 - probably more likely than the rest, but still very unlikely to be audible. 
 
I would very much like to conclude that 24-bit is hooey and move on. However, four big things give me pause:  (a) the lack of solid published test data on this problem, and other anecdotes similar to mine; (b) the engineers and architects at TI, Cirrus/ESS, and Wolfson are not idiots. I've met some of these people; they are very serious, ridiculously smart and educated people.  Why would they all invest so heavily in 24-bit architectures for the last N years and push performance another 30 dB past 100 if there were absolutely no use for it whatsoever?  It could be marketing, I know, but I still pause.  I've read their papers; they seem to be seriously trying to make the DAC and ADC better.  And they are amazing already.  (c) People inside Dolby in the 90s, who had no dog in this particular hunt at the time, told me the be-all and end-all was somewhere in the 18-20 bit range.  This was when Dolby AC3 was being commercialized for DVDs.  Dolby AC3 on DVD is 16-bit, but you may not know there were 18- and 20-bit AC3 options that were only available on the pro equipment Dolby made for theaters and studios.  A friend of mine left Dolby and joined me at another company, and when we talked about it later it seemed they were doing serious science; it wasn't fluff. (d) Why would anyone bother with dither if 96 dB was 30 dB too much?  This one doesn't seem to be marketing, because there is no "dither sticker" on the label of a CD.
 
I also note that apparently Neil Young posted on the Pono website in the last few days that Warner Brothers has a huge catalog of 24 bit material and he is pushing them to release it.  I don't know anything more than that, but if true, 24 bit mania may be upon us soon.  It would be nice to know the answer :)
 
I take the hiatus with the tentative guesstimate that the be-all and end-all is somewhere in the 18-bit ENOB area, give or take a bit.  Well-implemented dither may get you some of the way there, and may get you to the point where detection requires such an exotic combination of gear and ears that maybe 1 in 1000 people can manage it, so it probably becomes moot.  Certainly a good recording trumps all of this by 100 country miles.  
 
Dec 18, 2014 at 2:22 PM Post #2,068 of 7,175
I'm beginning to think that you guys enjoy scientific testing more than listening to music.
 
Dec 18, 2014 at 2:40 PM Post #2,070 of 7,175
  @RRod
 
Chance of long runs of success:  
As I said a few pages back, I'm well familiar with the binomial distribution, and I know that with an increasing number of trials the chance of seeing any given run length increases asymptotically towards 100%.  But my total runs on test192 are maybe high double digits, and I gravitate towards 10%, hitting 5% and 1% in places.  And the better success (on both tracks) corresponds to tracks where I heard a specific note.  Remember there are other tracks where I couldn't find anything to latch onto, and I did much worse.  To say it could still be chance is always true - anything can always be chance.  But at this point I am starting to believe I may be hearing something.
 

 
We'll just see how things go with your reboot. Kudos for putting so much time into it. My only suggestion is to come back with a fixed statistical goal in mind, and just go with it.
 
As far as the FFT goes, yes, it is limited, but I'm willing to bet I could futz around with all the window and length options in SoX and still not get anything to pop out (I'll give it a try later). As far as the industry goes: it's always best for the people at the start of the chain to have the best specs, because anything they introduce is carried along the whole chain and can build up into actual audible error; that doesn't mean the end-user needs the same precision.
 
TBH, I'd be fine if we just accepted 24/176.4 or whatever as a new standard. The problem is that this wouldn't solve anything, because there will always be people who believe in "more is always better" with money to spend. Where would it stop? The second we standardized 32/352.8, some corporate-lackey researcher would come out with something about how MHz signals cause a little signal to pop up on a brain scan, and then the horses are off again: "We need 64/5.6448MHz and a flux capacitor to enjoy our rock master-tapes made in 1964!" Meanwhile, guys like bigshot are sitting like the Maxell man in front of properly set-up 5.1 x 16/44.1 systems and having a grand old time, instead of wondering if they accidentally just sat on their clip-on headphone pico-tweeter. (rant over)
 
