Why 24 bit audio and anything over 48k is not only worthless, but bad for music.
Nov 3, 2017 at 10:34 AM Post #2,446 of 3,525
I agree with you..... however this thread is not a discussion about enjoying music.
It is about some very specific assertions about what is and is not "humanly perceptible".

1)
No.
16/44.1 audio can do 5 µs timing differences easily ON CONTINUOUS SINE WAVES.
(and not so well under some other conditions and with some other waveforms.)

2)
I agree with you.
However, this thread is NOT about "whether high-res is worthwhile"...
It makes a very specific assertion (and it asserts, not that the difference is "nearly impossible to detect" but rather that the difference is "IMPOSSIBLE to detect").

In fact, the title of the thread actually makes two assertions that directly contradict each other (as does the original paper).
The original Xaph Audio paper actually asserts that the high-res version will sound audibly WORSE because of interactions between equipment.
(Which claim must be based on the idea that there will be an audible difference after all.)

Doesn't matter, because 16/44.1 audio can do 5 µs timing differences with complete ease.



Have you detected something? I haven't, but maybe that's because I listen to CDs for the music, not to detect limitations of 16/44.1 digital audio which I know are nearly impossible to detect at best.
 
Nov 3, 2017 at 11:19 AM Post #2,447 of 3,525
I should note something here......

When I'm talking about those "5 µs timing differences" I am NOT talking about jitter or timing errors between channels.

What I'm talking about is having the same sound recorded in both channels, with a time delay being added to one of the channels.

The "overall time resolution" of any digital recording of a continuous sine wave is essentially infinite.
If I take a 500 hz sine wave, and record it on a stereo CD, after delaying one channel by 5 uS, you will be able to easily resolve the 5 uS difference (on an oscilloscope).
The reason this works is because we can accurately reconstruct the two 500 Hz sine waves (in the two channels), and compare them.
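As an illustration, here's a minimal sketch (my own, in Python/NumPy, purely illustrative and not from the thread or the paper) that recovers a 5 µs inter-channel delay of a 500 Hz tone from 16-bit, 44.1 kHz samples, simply by comparing the phase of the tone in the two channels:

```python
# A minimal sketch (mine, purely illustrative): recover a 5 us inter-channel
# delay of a 500 Hz tone from 16-bit / 44.1 kHz samples.
import numpy as np

fs = 44100            # CD sample rate, Hz
f = 500               # test tone, Hz
delay = 5e-6          # 5 microseconds applied to the right channel
t = np.arange(fs) / fs                     # one second of samples

def to_cd(x):
    """Crude 16-bit quantisation (with a little dither) to model CD storage."""
    return np.round(x * 32767 + np.random.uniform(-0.5, 0.5, x.size)) / 32767

left = to_cd(np.sin(2 * np.pi * f * t))
right = to_cd(np.sin(2 * np.pi * f * (t - delay)))

# Read each channel's phase at 500 Hz off its FFT, then convert the
# phase difference back into a time difference.
k = int(round(f))                          # 1 Hz bin spacing for a 1 s window
phase_left = np.angle(np.fft.rfft(left)[k])
phase_right = np.angle(np.fft.rfft(right)[k])
recovered = (phase_left - phase_right) / (2 * np.pi * f)

print(f"recovered delay: {recovered * 1e6:.2f} us")   # prints ~5.00 us
```

The 16-bit quantisation barely disturbs the estimate, because the phase measurement effectively averages over tens of thousands of samples.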

HOWEVER, our brain uses differences in arrival time to "calculate" location.
Assuming I start with a sound located equally in both channels, I can "move" its apparent location from left to right by adding delay to one channel or the other.
(Our brain calculates that the source is closer to one ear or the other by comparing differences in arrival times at each ear.)
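For scale (my own numbers, using the classic Woodworth spherical-head approximation with an assumed ~8.75 cm head radius), the interaural time differences the brain works with run from a few microseconds near the median plane up to several hundred microseconds for a source off to one side:

```python
# Context for the microsecond numbers (mine): the Woodworth approximation
# ITD = (a / c) * (theta + sin(theta)), assuming an ~8.75 cm spherical head.
# Purely illustrative, not from the thread.
import math

a, c = 0.0875, 343.0                 # head radius (m), speed of sound (m/s)

for deg in (1, 5, 30, 90):
    theta = math.radians(deg)
    itd = (a / c) * (theta + math.sin(theta))
    print(f"{deg:3d} deg off-centre -> ITD ~ {itd * 1e6:6.1f} us")
# ~9 us at 1 degree, rising to ~650 us for a source fully to one side.
```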

Now let's assume that I start with an unreasonably abrupt impulse (let's call it 5 µs).
(And, yes, I can easily generate a 5 µs sound pulse using various methods.)
Even though most of the energy in that impulse will be at inaudible frequencies, enough will extend into the audible range that we will hear it as a click.
And, if I delay that click in one channel or the other, it will seem to shift locations - between left and right.
However, because that click actually falls between two samples at 44k, we will NOT be able to precisely reconstruct its time from our 44k sample rate recording.
When we apply our band limiting, that impulse will indeed be spread out into a longer waveform that extends over multiple samples.
And, by looking carefully at that new waveform, we will be able to infer where the original impulse occurred in time.

HOWEVER:
1) in order to do so we will have to make certain assumptions about the filter we used
(we'll assume that, if there is equal amplitude in two samples, then the pulse was equidistant in time between them - but this assumption relies on our filter spreading the energy symmetrically in time)
2) the new waveform will be very different from the original
3) more importantly, unlike the original, our new waveform will have a much more gradual envelope
3a) as a result, mechanisms that rely on sensing abrupt edges of waveforms will be less able to accurately "find" the beginning edge of the impulse
3b) current research seems to strongly suggest that our brains do in fact look for those "edges"
3c) this in turn suggests that turning a sharp impulse into a more gradual band limited waveform may compromise the accuracy with which our brains can determine its exact beginning
3d) and this, in turn, suggests that doing so may reduce the accuracy with which our brains are able to utilize this particular location cue
(if the starting time of the impulse cannot be determined distinctly, then we will have the equivalent of a blurry image when we attempt to compare them)
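To make that concrete, here is a small illustrative sketch of my own (assumptions: SciPy's resample_poly stands in for the band-limiting filters, and the "5 µs impulse" is modelled as a single sample at 192 kHz). It shows both halves of the argument above: the impulse's timing can still be inferred from the 44.1 kHz data, but the reconstructed pulse is smeared out over tens of microseconds.

```python
# An illustrative sketch (mine): a ~5 us impulse, band-limited to CD rate.
import numpy as np
from scipy.signal import resample_poly

fs_hi = 192000
x = np.zeros(fs_hi)                    # one second at 192 kHz
n0 = 96000 + 3                         # impulse placed off the 44.1 kHz grid
x[n0] = 1.0
true_time = n0 / fs_hi

cd = resample_poly(x, 147, 640)        # band-limit / decimate to 44.1 kHz
up = resample_poly(cd, 640, 147)       # interpolate back to 192 kHz

est_time = np.argmax(np.abs(up)) / fs_hi
width = np.count_nonzero(np.abs(up) > 0.5 * np.abs(up).max()) / fs_hi

print(f"true impulse time      : {true_time * 1e6:.2f} us")
print(f"inferred from 44.1k data: {est_time * 1e6:.2f} us")   # within a sample or two
print(f"pulse width at half-max : {width * 1e6:.1f} us")      # tens of us vs ~5 us
```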

The result of all this MAY be that our brains end up being less accurate in their estimation of where sound objects are located in space.
The result could be that objects seem to be in different locations, or that we perceive the location of individual instruments as being less distinct.
(A similar effect occurs with "stereoscopic 3D video"; when the various depth cues conflict, even slightly, which they often do, the image seems "less distinct and less real".)

Note that, if you accept the current "spectrum analyzer" model of the human ear (with a bunch of "detector hairs", each of which is "tuned" to a distinct frequency), then the frequency which the "top hair" responds to will determine the highest frequency we can detect as a continuous sine wave.
However, that number says nothing about the TIME RESOLUTION (how quickly, and how accurately, our brain can respond to WHEN a particular hair was excited).

Please note that I'm not specifically suggesting that this will turn out to be true...... however I don't assume that it isn't true either.
Recent research into how the brain calculates eye movement has shown that the mechanism is quite different from what we previously thought... and not especially intuitive to most people.
Therefore, I prefer NOT to make claims based on inferences drawn from old or incomplete information...

 
Nov 3, 2017 at 11:30 AM Post #2,448 of 3,525
...
2. Rather ironically, the 1997 Theiss study I mentioned previously (Phantom source perception in 24bit @ 96kHz digital audio) set out to test exactly what you are suggesting. As I mentioned, there was a supplemental test on general perceived sound quality performed under less formal circumstances and it's this test which is frequently quoted by those who have a hi-res agenda but the main experiments were formal DBX tests designed specifically to test localisation and resulted in the conclusion that: "Analyses of the data showed that the hypothesis that localization accuracy improves with higher sampling rates above the professional 48kHz standard has to be rejected". (I linked to the paper above so you can read the details for yourself.)
...

Looking at that paper, it is pretty obvious that this does not generate universal knowledge. The gear and setup are highly in question, particularly the use of speakers. Also, they do not describe all factors, particularly the interconnects between gear and the handling of digital noise. The results are only valid for this single experiment, and only using this particular setup. The sample is also rather small, particularly for a positivistic study.

As a study, it does not prove what is claimed at all. It violates the norms of great research:

"Analyses of the data showed that the hypothesis that
localization accuracy improves with higher sampling rates above
the professional 48 kHz standard has to be rejected."

To arrive at that conclusion, they need to prove that this is the case. Which they do not. The setup is flawed, at best, and the sample is not all humans ever and all humans to come. At best, this study would indicate that, for the setup at hand, used in this way, for the people in the sample, going above the 48 kHz standard reaps little benefit. Using closed headphones, you obtain a better "laboratory" than this, which says an awful lot.

If the hypothesis was that this was valid for all humans, they would only need to prove it once. But how? They would need to prove that the reproduction was perfect, which they cannot.

Also, the results are not repeatable, given all the basic flaws in this paper.

Why anyone would refer to this as a great piece of research is beyond me. It simply is not. To me, this is not valid research. "Research" conducted this sloppily is simply not valid.
 
Nov 3, 2017 at 12:07 PM Post #2,449 of 3,525
You'll get no disagreement there from me.

My point was simply that, from a scientific point of view, if I'm trying to prove whether a difference is audible or not, I am in fact going to do my best to construct a test signal where it will be audible.... because, if it is audible under ANY conditions, then I have proven the assertion that "it is audible".
If I fail entirely, after making a reasonably thorough and competent attempt to prove my assertion, then we can reasonably conclude that it ISN'T audible under any test conditions we could currently devise. And, if I succeed, and it does turn out to be audible with some specialized test signal, then we can move on to determine whether it is audible under "reasonable and practical" conditions, and how much that should concern the average consumer.

I also agree that it should be possible to construct a test that is more sensitive to any difference that actually exists than any sort of listening under "normal conditions" - because the whole point of a test protocol is to maximize your chances of a definite result. (And, while I've heard a few valid points, I tend to agree that most claims to the contrary are simply ways of rationalizing why they didn't get the "obvious positive result" they expected.)

My honest assessment of the current status of this argument is this..... A significant number of audiophiles are convinced that the difference is so obvious that it should be easily audible. Based on this assertion, several studies have been performed, most of which have so far failed to produce any positive results. (But, of course, a lot of people who are simply "believers" aren't going to believe any results that conflict with their beliefs anyway.) However, because all of the studies I've read about have also been deeply flawed, I do not consider their results to be conclusive. If and when a properly designed and executed test shows positive results, then we can move on to wondering about whether the results are meaningful with normal music, in normal listening conditions. And, if a properly designed and executed test FAILS to produce positive results, then obviously there will be no next stage.
Scientific research involving human perception rarely returns such highly polarized black/white results.
Historically, however, I remember a time when many people insisted that a good quality cassette recording was "indistinguishable from the original" - which I don't think most people would claim today. (Remember "Is it real or is it Memorex?").
Yes, but that was entirely marketing hype. At their absolute best cassettes were always distinguishable from a master quality input signal. Of course they rarely ever received that kind of signal, yet even using the absolute best equipment, the results were audibly flawed. Please realize that professional tape was and is also distinguishable from its input signal, and cassettes are far, far below that in terms of performance.
I also remember when MP3's were touted as "being indistinguishable from the original" - because "the psychoacoustic research has all shown that nothing audible is being omitted from them".
And they are today, but MP3 cannot be discussed as if it were a single fixed-parameter format. Early MP3 usage was deliberately at very low bit rates, of necessity, dictated by the available bandwidth at the time. While MP3 may not be the most efficient lossy codec, when sufficiently high bit rates are used, indistinguishability can be achieved.
Few thought earlier low rate files in common use then were transparent.
However, the technology changes, the quality of the master content we have available continues to improve - at least sometimes, and our expectations change. (Perhaps a good quality cassette recording was able to match the quality of a master tape; but that doesn't mean it can match the quality of a good quality modern digital master.)
(Cassettes at their best never came close to matching a master of any kind)
However, based on history, I'm not convinced that "there's no possible difference with high-def content, so we shouldn't even wonder". Personally, I would very much like to see results that can reasonably be considered to be conclusive - one way or the other - from a well designed and properly run test. However, I don't think we've reached that point yet... and I don't see any real movement in that direction.
I actually agree with this. If there is any possible difference we should know what it is. But we must first start with determining if there is a difference and under what conditions. That has not been done well. But when contrasted to all previous recording methods, all of which were easily and immediately distinguished from a clean input signal, after about two decades of use there remains no overwhelming statistical fallout of clear differentiation. That fact alone is a strong indicator that differences, if any, are not significant.
As I've mentioned before, I don't think the sellers of high-res content will ever sponsor those tests - because the value of being proven right is outweighed by the risk of being proven wrong (and even being proven right - but by a narrow margin - would probably do more harm than good to their sales). Likewise, nobody has a vested interest in proving that high-res files aren't better (because nobody makes money by convincing you NOT to bother to buy that next remaster).
Yes, of course HR sellers and manufacturers won't test this. They're operating on expectation, which sadly is a far more powerful influence than fact.
 
Nov 3, 2017 at 12:19 PM Post #2,450 of 3,525
There is an interesting argumentative formula being used here. Perhaps even more interesting than the line by line pick apart format...

I don't disagree with you. In fact I agree with you.

But... (then another long repetition of all the misconceptions that have already been answered and corrected several times before in the thread)

You guys have more patience with people who are only interested in talking for their own benefit than I do. Whenever someone says, "I'm not talking about practical application, I'm talking in theory here." I know we're in for a lot of repetitive back and forth. Pure theory doesn't need any anchor in reality. It doesn't need facts either. It just needs a whole lot of words. There are debates that are worthy of simply dismissing with the back of your hand. Circular arguments made by people who aren't listening and processing what people are saying in response aren't worth spending a whole lot of time over.
 
Nov 3, 2017 at 12:37 PM Post #2,451 of 3,525
I read through that study (well, I scanned it)...... and I agree with you that it was in general sloppy and had lots of procedural flaws.
It was also specifically intended to identify whether there was a correlation between sample rate and spatial resolution.
(And the results, as credible - or not - as we may consider them, at least suggested that there was no strong correlation to be found there.)

However, check out the section entitled "additional experiment into overall sound quality" (bottom of page 13).
In that section, they presented a very limited number of test subjects (4) with samples of music at 48k and 96k.
After being presented with samples at both sample rates, the subjects were presented with unknown samples, and asked to identify what sample rate they were listening to.
This was a somewhat modified ABX sort of test.

The results of that test, and the conclusions based on them, were interesting....
One subject was correct 16 times out of 17 trials.
And two of the subjects were correct 61% and 68% of the time respectively.

In fact, while the presenters of the test concluded that they had failed to demonstrate any difference in perceived spatial positioning accuracy.....
They also concluded that the differences between 96k and 48k were "clearly audible".... to at least some listeners.

From their conclusions:
"While there is little doubt that subject 1 reliably heard a difference between HDDA and 48 kHz reproduction, results of subject 2 and 3 need
closer evaluation. The probability that the results obtained from both subjects were randomly guessed is 6%. It is reasonable to conclude that
even in the 96 kHz to 48 kHz sampling rate comparison there was a perceivable difference."

I would have to say that using that test as "proof" that there is no significant difference between 48k and 96k in terms of spatial cues, while ignoring the fact that it also concluded that there were other "clearly audible differences", would be a sort of cherry-picking.

(Bear in mind that, in order to establish that "humanly audible differences exist", we only have to produce a single test subject for which this is provably true.)
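For what it's worth, the quoted statistics are easy to sanity-check with a one-tailed binomial calculation. The sketch below is mine, not the paper's; the trial counts for subjects 2 and 3 aren't given above, so the second figure uses a purely hypothetical 50-trial run at 68% correct.

```python
# A quick sanity check (mine, not the paper's) of the quoted statistics.
from math import comb

def p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of guessing k or more right."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(p_at_least(16, 17))   # ~0.00014 -- 16 of 17 is far beyond lucky guessing
print(p_at_least(34, 50))   # ~0.008 for a hypothetical 34-of-50 (68%) result
```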

 
Nov 3, 2017 at 12:47 PM Post #2,452 of 3,525
I would want to add one thing to what you've said......

Pure theory does not in fact require "an anchor in reality".
(Although it's usually a waste if not at least based on reality. After all, we're not discussing what unicorns prefer for dinner.)
However, if you want to present a theory as FACT, then it does require some solid references to reality.

I agree that the assertion that "nobody can hear any difference" is at the level of a theory...
As is the assertion that "at least some people can hear differences"...
And, at the level of theory, we can freely discuss our reasons for believing that either theoretical claim will turn out to be factually true...
However, NEITHER rises to the level of "fact" without proof.

I would also point out that this entire thread is mostly about theory.....
(For most of us, in practical terms, both the difference in price, and the difference in storage space and bandwidth, between regular and high res files are really inconsequential.)

 
Nov 3, 2017 at 1:26 PM Post #2,453 of 3,525

I find this sort of experiment addendum amazing in how suspicious it is: putting it there to hang at the end of the paper for the lolz. "oh BTW we could have proved something important, but sorry, research those days only gets funds for stuff where we debunk ourselves, discoveries and positive results are so overrated". ^_^
I don't know if they had the most unfortunate circumstances, or if a second follow-up paper was planned but they found some flaw in their testing method or never got funded. But I would have lost my mind if I had been with them at the time.
 
Nov 3, 2017 at 1:28 PM Post #2,454 of 3,525
I was using the term with the casual definition, but if you want to get technical about it, Google is your friend... "difference between hypothesis and theory"

In scientific terms: a hypothesis is either a suggested explanation for an observable phenomenon, or a reasoned prediction of a possible causal correlation among multiple phenomena. In science, a theory is a tested, well-substantiated, unifying explanation for a set of verified, proven factors.

http://www.oakton.edu/user/4/billtong/eas100/scientificmethod.htm
Note number 4

The belief that things that have been tested and determined to be inaudible actually are inaudible is a theory. Making up reasons how you might possibly be able to hear something below that threshold is a hypothesis. Raising that to the level of a theory would require a whole lot of testing that validates it. So far, no tests I know of support that idea- only cherry picking and sales pitch.
 
Nov 3, 2017 at 1:29 PM Post #2,455 of 3,525
[QUOTE="KeithEmo, post: 13825887, member: 403988"

1)
No.
16/44.1 audio can do 5 uS timing differences easily ON CONTINUOUS SINE WAVES.
(and not so well under some other conditions and with some other waveforms.)
[/QUOTE]

I admit I don't understand this. Why does it only work on continuous sine waves? What goes wrong with other signals?

I tested this in Audacity. I created pink noise at 96 kHz. Then I duplicated it and delayed the duplicate by 1 sample, 10.4 µs. Then I downsampled the original and duplicated noises to 44.1 kHz. Now, of course the waveform of the delayed noise differs from the original, but that's deceiving and doesn't correspond to the analog signal after the DAC. Next, I upsampled them back to 96 kHz and the waveforms look identical again, only bandlimited. The delayed one is clearly one sample behind the original, as expected. So, I delay the original by one sample and invert it. I then sum the signals to get the difference, which is the error signal. There is some quiet noise with most of its energy in the ultrasonic range. I downsample the error signal and its level is about -88 dBFS. While very quiet, the error should be zero. I don't know what is going on. What if I have been wrong all this time about digital audio and ALL my 1500 CDs actually sound very bad!! HORRIBLE!!! DO SOMETHING!!! Is even 192 kHz enough??!!?
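For anyone who wants to repeat that experiment outside Audacity, here is a rough script of my own (SciPy's resample_poly stands in for Audacity's resampler, so the exact residual level will differ). The small residual it reports mostly reflects the resampling filters' imperfections near the band edge, rather than any loss of the 10.4 µs timing information:

```python
# A rough re-creation (mine) of the pink-noise / 1-sample-delay round trip.
import numpy as np
from scipy.signal import resample_poly

fs_hi = 96000
rng = np.random.default_rng(0)

# Approximate pink noise: weight a white spectrum by 1/sqrt(f).
spec = np.fft.rfft(rng.standard_normal(fs_hi))
freqs = np.fft.rfftfreq(fs_hi, 1 / fs_hi)
spec[1:] /= np.sqrt(freqs[1:])
pink = np.fft.irfft(spec)
pink /= np.abs(pink).max()

delayed = np.concatenate(([0.0], pink[:-1]))     # one 96 kHz sample = ~10.4 us

def round_trip(x):
    """96 kHz -> 44.1 kHz -> 96 kHz (44100/96000 = 147/320)."""
    return resample_poly(resample_poly(x, 147, 320), 320, 147)

a, b = round_trip(pink), round_trip(delayed)

# b should simply be a delayed by one sample; realign and measure what's left,
# trimming the ends to avoid filter edge effects.
res = a[1000:-1001] - b[1001:-1000]
ref = a[1000:-1001]
print(f"residual: {20 * np.log10(np.std(res) / np.std(ref)):.1f} dB re. signal")
```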
 
Nov 3, 2017 at 1:49 PM Post #2,456 of 3,525
I would want to add one thing to what you've said......

Pure theory does not in fact require "an anchor in reality".
(Although it's usually a waste if not at least based on reality. After all, we're not discussing what unicorns prefer for dinner.)
However, if you want to present a theory as FACT, then it does require some solid references to reality.

I agree that the assertion that "nobody can hear any difference" is at the level of a theory...
As is the assertion that "at least some people can hear differences"...
And, at the level of theory, we can freely discuss our reasons for believing that either theoretical claim will turn out to be factually true...
However, NEITHER rises to the level of "fact" without proof.
And both statements are far too polarized and without situational qualification....so....
I would also point out that this entire thread is mostly about theory.....
Perhaps for some, not all. Like I said, 20 years and still no hard proof....
(For most of us, in practical terms, both the difference in price, and the difference in storage space and bandwidth, between regular and high res files are really inconsequential.)


But the actual handling and playing of it...not quite so inconsequential. For just one example, the largest base of installed home sound systems today includes an AVR. Even many here listen to music material only via an AVR. Most AVRs handle volume control, and certainly calibration EQ, at 24/48, meaning pretty much everything you hear is resampled (rate-up, rate-down, bit depth) except for film soundtracks. How's that getting 24/96 or higher out to a transducer...that can't reproduce it anyway? If everything you hear is funneled to 24/48, why bother with anything higher unless you like the remastering job?

Oh... sorry... practicality raises its ugly head again. There are many other examples of the problem.

And...just because I'd expect somebody to ding me on it...yes, there are 24/96 and higher processors in the world, e.g. the Trinnov Altitude32 with internal 64-bit FP @ 192 kHz. Got a spare $30K?
 
Nov 3, 2017 at 2:08 PM Post #2,457 of 3,525
File size isn't inconsequential. I have a very large music library and carry a big chunk of it around with me on a 256 GB micro SD card. That's the biggest one they make, but it still only can hold a small fraction of the music on my music server in AAC 256 format. If I carried around 24/96 files, I would have to be constantly updating the card and shifting files back and forth.
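For a rough sense of the gap (my own back-of-the-envelope figures, assuming uncompressed 24-bit/96 kHz stereo PCM; lossless compression such as FLAC would roughly halve the 24/96 number):

```python
# Back-of-the-envelope storage figures (mine) for a 256 GB card:
# 256 kb/s AAC versus uncompressed 24-bit / 96 kHz stereo PCM.
card_bytes = 256e9

aac_rate = 256e3 / 8               # 256 kb/s  -> 32,000 bytes per second
pcm_rate = (24 / 8) * 96000 * 2    # 24-bit, 96 kHz, 2 channels -> 576,000 B/s

print(f"AAC 256 : {card_bytes / aac_rate / 3600:6.0f} hours")   # ~2,200 hours
print(f"24/96   : {card_bytes / pcm_rate / 3600:6.0f} hours")   # ~  120 hours
```

Roughly a factor of eighteen, before any lossless compression.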
 
Nov 3, 2017 at 2:16 PM Post #2,458 of 3,525
You bring up a very good point.... which is that the relevance of a lot of this depends on the market you're talking about. (And the marketing folks who are working to sell this stuff often try to ignore or cause their customers to ignore these questions.)

First, for many of those people, the difference between standard and high-res files probably isn't going to make any immediate difference.
However, as you alluded to, re-masters often sound better for other reasons, which are audible even on mediocre equipment.
Therefore, we might at least hope that an interest in "high-quality remasters" might encourage better re-masters in general.
(If you want to pursue that point, the majority of listeners probably listen to their music playing from their phone on $20 ear buds.)

Second, many of us actually purchase our music, and keep it in our collection.
Therefore, many of those people who currently own a mediocre sounding AVR may someday own better equipment, on which the difference will be audible.
I'm sure glad I have 2000 CDs instead of 2000 albums from iTunes..... even if the lossy compression used by iTunes might have sounded OK on the equipment I had twenty years ago.
Therefore it does make sense to avoid investing a lot of money on something that you'll end up having to buy again later... or to buy the better version as "insurance".
(This equation will be very different for people who use a streaming service rather than actually own their music.)

Third, you're really asking about the difference between "better for everyone" and "better for a few select audiophiles"?
(Of course, from a sales point of view, the sellers would like everyone to assume they'll hear a difference.)


 
Nov 3, 2017 at 2:19 PM Post #2,459 of 3,525
Obviously that's all relative.

My main music library is currently on a 6 TB drive... and I don't carry it all around with me.
(At most, I may make a portable copy of a few hundred albums to take with me when I travel.)
In terms of cost, a 6 TB drive currently costs about $150....
Clearly we each have very different priorities.... perhaps FOR YOU AAC does make more sense.

I would also point out that I can always convert my 24/96k files into AAC if I need a small portable copy.
However, if my "master copies" were in lossy AAC, the reverse would not be true.

 
Nov 3, 2017 at 2:21 PM Post #2,460 of 3,525
...256 GB micro SD card. That's the biggest one they make...

I'm sorry I'm always being a pedant in response to your posts, but...

...negative ghostrider, you can upgrade that now: https://www.amazon.com/Sandisk-Ultr...pID=41MQ9ndxA7L&preST=_SX300_QL70_&dpSrc=srch

And if you subscribe to Play Music All Access, and use that on an Android phone, you can upload another 50,000 songs from your collection on top of the stuff you can stream. I think you're an AAC guy? You could get a whole lot of 256k AAC onto a 400 GB card, maybe 60,000 tracks, then you've got your 50,000 uploaded (I know you're a classical guy, but the GPM limit is number of tracks, so you could strategically upload your biggest stuff), plus all of that you can stream? There would be no end to the music you've got with you at that point.
 
