Testing audiophile claims and myths
Feb 1, 2019 at 9:38 AM Post #12,406 of 18,241
Absolutely true.....

However, part of what created the "imbalance of perception" is the difficulty with producing vinyl recordings (for end users).

It's simple for most of us to make a digital copy of a vinyl album - and note the tiny differences between the recording and the original. However, it's not practical for most end users to make a vinyl copy of their favorite CD, and note the much larger differences. Therefore, because we can't make vinyl recordings ourselves, it's impossible to compare the differences that result from both processes... and many people come to think of vinyl albums as some sort of "reference" or "master" - rather than as just another copy (which they have no practical way to compare to the original).

This also brings up another interesting point. When the early tests were performed to determine whether "the CD format was audibly transparent".... they "inserted a CD quality A/D and D/A loop into the signal chain" to see if anyone could detect whether it caused audible degradation of the signal. However, was their signal source a direct feed from a high quality set of microphones and a mixing console, or was their source an ANALOG MASTER TAPE? (If their source was an analog master tape, then all they could really determine was whether the "CD quality signal loop" introduced WORSE signal degradation than that already being introduced by their tape equipment... and we already know that analog tape has many flaws and limitations. And, even if a direct feed from a mixing console, with live music, was used... the quality of their test signal was still limited by the quality of the console and other equipment they used. And their results were limited by the quality of the playback equipment they used... which presumes that whatever speakers and playback electronics they had available in the 1970s were also "audibly perfect".)

In simplest terms, all they could conceivably prove was that "the Red Book CD format was audibly transparent when reproducing the sample content they had available, and actually used, when they tested it". In other words, if you wish to claim that the tests that proved that the Red Book CD format was audibly transparent are still relevant, you must base that claim on the assumption that there is no content available today that is of audibly better quality than what was used when they ran the tests, and that there is no playback equipment available today that is audibly better than what they used. (If all they did was to prove that Red Book CD was audibly transparent when used to reproduce analog master tapes, which were themselves NOT audibly transparent, then you have not proven the wider case to be true.)

There is not and never has been a lossless analog recording medium. When recording analog you had to spend time deciding on your best compromise of trade-offs for the recording you made. There is not a single piece of analog tape or vinyl that sounds anything like the signal from the microphone(s). That doesn't mean it could not be manipulated into something pleasant, but it was never accurate.
 
Last edited:
Feb 1, 2019 at 9:40 AM Post #12,407 of 18,241
[1] There is a technical detail about the sample rate chosen for the Red Book CD standard that needs to be mentioned to put the choices made into historical context.
[2] HOWEVER, oversampling technology was NOT yet developed when the Red Book standard was created.
[2a] And, without oversampling, the design criteria for the proper filter are so extreme that, as a result, most early equipment performed quite poorly (and equipment that performed well was extremely expensive).
Without oversampling, given the requirements for encoding and decoding signals "right up to the Nyquist frequency", there is a tradeoff:
- either use a somewhat gradual filter, and accept a high-frequency roll off that starts well below 20 kHz, as well as significant high frequency phase shift and significant aliasing distortion
- design a very complex filter, which is difficult and expensive to produce, and still introduces excessive phase ripple and other problems.
[3] Oversampling has essentially eliminated this issue entirely...

1. Unfortunately, yet another typical KeithEmo post. Yes, the CD standard does need to be put into historical context but despite your statement, you have not put it into historical context, you've created a historical context that never actually existed in order to push your "filters" agenda again!

2. No CD technology was yet developed when the Red Book standard was created because you obviously can't have a CD or CD player before you've created a standard that defines what a CD actually is! HOWEVER, oversampling as a technology certainly was known about and its use was envisaged.
2a. These statements are all true BUT ENTIRELY IRRELEVANT because despite your misinformation, oversampling was developed and ubiquitously employed by CD players by the time that CDs were launched to consumers. As far as I'm aware, ALL CD players, from launch day onwards, employed at least 2 times oversampling.

3. Great, now we're getting somewhere. According to the actual history (that ALL CD players had oversampling) and YOUR statement that "oversampling essentially eliminates this issue entirely", then logically you must agree that the "issue" you've raised never actually existed by the time CD was launched to the public (1983) and is therefore irrelevant!
Analog reel-to-reel tape recorders with the best tapes achieve S/N around 80 dB... - add to that the 20-25 dB of noise reduction offered by Telcom C4 and you end up with 100-105 dB dynamic range.
This statement is false. 80dB SNR + 20-25dB Noise reduction does NOT result in a dynamic range of 100-105dB!! This is a classic case of "cherry picking"; of only listing those facts which support an agenda while omitting the other pertinent facts which contradict it. The reality is that the original recording session tapes (with up to 80dB SNR) were, OF COURSE, NEVER RELEASED TO THE PUBLIC. What was released was a copy several generations old: The recording session tapes would have to be edited, mixed (EQ, compression, etc.) and commonly "bounced down" (recording several tracks to 1 or 2 tracks), and each of these mix processes adds noise. When the mix is complete it's bounced down to another (final mix) tape for transfer to the mastering engineer. The mastering engineer applies analogue processing, which adds more noise, often bounces down during the mastering process, and then, when the mastering is complete, bounces down the completed master to another (master) tape. Each of these bounce downs (generations/copies) doubles the amount of tape noise and there would have been an absolute minimum of 2 generations but probably 4 or more. Then, the master tape was copied to a production master and finally the distribution media (cassette or vinyl) was copied from the production master, so another two generations. That's a bucket load of noise that's been added between the original session recording tapes and the final media the consumer buys, so what that 20-25dB noise reduction (or more like 15dB in the more common NR types) actually achieves is some restoration of the 80dB SNR we may have started with.

In the best theoretical case, if we were just making a test tape, we could record a test signal to tape (with say 80dB SNR), apply say 25dB noise reduction and bounce the result down to another tape. That's 1 generation of SNR loss and therefore: 80dB SNR - approx 6dB generational loss + 25dB NR = a theoretical max of roughly 100dB DR. Of course though, we end up with just one test tape! We can only theoretically achieve this DR figure by eliminating all the editing, mixing, mastering, the creation of a production master and its duplication to create the actual consumer product. In the real/practical world of commercial consumer audio recordings the actual equation is more like: 80dB SNR - approx 35-45dB generational loss and analogue processing noise + 15-25dB NR = a theoretical max of roughly 55-65dB DR, which is roughly 100 times less than analogsurvivor is claiming and why all his conclusions/assertions are complete nonsense! And of course, we're only considering noise and ignoring all the other non-linearities and distortions of analogue.
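To make the arithmetic above concrete, here is a small back-of-envelope sketch in Python (an illustration added here, not taken from any of the posts): uncorrelated noise floors add as powers, so every copy or bounce raises the combined floor, and noise reduction can only claw some of it back at the end. The per-stage figures are invented round numbers, and the model ignores the console and outboard processing noise that the real-world estimate above also includes, so its results land between the two equations given.

```python
import math

def add_noise_floors(floors_db):
    """Combine uncorrelated noise floors (in dB relative to the signal level)
    by summing their powers; returns the combined floor in dB."""
    total_power = sum(10 ** (f / 10) for f in floors_db)
    return 10 * math.log10(total_power)

# Best case from the post above: one copy of an 80 dB SNR tape onto another
# tape of similar quality, then ~25 dB of noise reduction applied.
test_tape_floor = add_noise_floors([-80, -80])
print(round(-test_tape_floor + 25, 1))   # ~102 dB, close to the "roughly 100dB DR" above

# A production-style chain: the session tape plus half a dozen bounces/copies,
# each stage's tape noise floor assumed (purely for illustration) at -70 dB,
# with a more typical ~15 dB of noise reduction at the end.
chain_floor = add_noise_floors([-80] + [-70] * 6)
print(round(-chain_floor + 15, 1))   # ~77 dB from tape noise alone, before any
                                     # mixing/mastering processing noise is counted
```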

In this sub-forum we tend to focus on the details of digital theory and its implementation, however in the world of commercial recording studios and those who actually make the music products, the single greatest benefit and selling point of digital audio over analogue (which blew all other considerations out of the water) was the elimination of generational loss!

I recall Ludwig commenting about that. He was not overly impressed with his first foray into digital mastering until the Apogee filters came out around 1984/85. He then described it as sounding exactly the same as the sound he heard in the studio.

I'm not sure about the context of that quote. There really wasn't any digital mastering in 1985, it didn't become even a practical possibility until a decade later and it was almost another decade before the mastering tools had improved to the point that mastering in the digital domain became a viable alternative. Remember that contrary to popular belief, the SPARS code (AAD, DDD, etc.) did not refer to the domain of the procedures but the domain of what those procedures were recorded to. For example, if we record the musicians to digital recording media, mix it in the analogue domain then record that final (analogue) mix to digital, then master in the analogue domain and record the completed master to digital, the SPARS code would be "DDD" (even though it's been both mixed and mastered in the analogue domain). If we're talking about the actual processes, then with the exception of a very few classical recordings (a couple of labels had proprietary digital systems and minimal mixing and mastering), pretty much all recordings up to the mid/late 1990's should have been labelled DAA, then gradually DDA and finally, DDD would have started appearing in the early 2000's.

G
 
Last edited:
Feb 1, 2019 at 10:58 AM Post #12,408 of 18,241
According to Philips, they were the only company to INITIALLY use oversampling in their CD players.
However, as per their description, it was quickly adopted by everyone else.
It was never part of the standard or specified by the standard.
So, at best, by choosing a sample rate of 44.1k, they created a practical design problem for which they already had a solution in mind.
(This isn't especially terrible... and one might even suggest that Philips strategically wanted everyone else to be "playing catch-up".)

However, my point remains....
It is generally not a good idea to create a standard in such a way that it is likely to be implemented poorly in commercial products designed using current technology.
It's a sort of "recipe for disaster" if a bunch of commercial products are released that claim to support your standard but don't actually work very well.
(You simply end up with a public perception that your standard doesn't work very well.... note how many audiophiles complained about "the poor sound quality of early CDs".)

To quote Philips:
"However Philips’ oversampling technology, originally born out of the necessity to use the early 14 bit D/A converters, and dismissed as a 'technical joke’ by other manufacturers who believed that a true 16 bit D/A converter followed by a steep analogue filter was the only way to go, was quickly embraced by most manufacturers of CD players. Because it meant there was no need to use highly complex analogue filters, while at the same time it allowed the often serious non-linearities of the D/A converters that were available at the time to be concealed."

( https://www.philips.com/a-w/research/technologies/cd/technology.html ).

I would also be curious to know how many of the early studio A/D converters included oversampling.
(I have no familiarity with any of the early ones.)

 
Feb 1, 2019 at 1:21 PM Post #12,409 of 18,241
There is a technical detail about the sample rate chosen for the Red Book CD standard that needs to be mentioned to put the choices made into historical context.

As has already been mentioned, in order to encode an analog signal without serious distortion, the analog signal MUST BE bandwidth limited to remove everything above the Nyquist frequency.
So, for example, if you're encoding audio at a 44.1k sample rate to put on a CD, you MUST pass that signal through a sharp low pass filter that eliminates all content above 22 kHz.
Likewise, when the signal is reconstructed, you MUST again pass the output through a sharp low pass filter that eliminates all aliases above 22 kHz.
If you wish to maintain a flat frequency response, and minimal phase shift and distortion below 20 kHz, this calls for a filter that is flat up to 20 kHz, but has 70 - 80 dB of attenuation at 22 kHz and above.
This poses a serious technical problem... because any filter with performance even approaching these requirements is very complex to design.
Even worse, in order to build such a filter, you must use components that are precisely the correct value, and some of them are very expensive.
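As a rough illustration of just how extreme that requirement is, here is a back-of-envelope sketch (added here for illustration, not from the original post) using the standard textbook formula for the minimum Butterworth low-pass order. The 0.5 dB passband and 80 dB stopband targets are assumptions chosen to match the "70 - 80 dB of attenuation" figure above.

```python
import math

def butterworth_order(f_pass_hz, f_stop_hz, max_passband_loss_db, min_stopband_atten_db):
    """Minimum Butterworth order giving at most max_passband_loss_db of loss at
    f_pass_hz and at least min_stopband_atten_db of attenuation at f_stop_hz."""
    ratio = (10 ** (min_stopband_atten_db / 10) - 1) / (10 ** (max_passband_loss_db / 10) - 1)
    return math.ceil(math.log10(ratio) / (2 * math.log10(f_stop_hz / f_pass_hz)))

# A "brick wall" analogue filter for 44.1 kHz sampling with no oversampling:
# flat within 0.5 dB to 20 kHz, at least 80 dB down by the 22.05 kHz Nyquist frequency.
print(butterworth_order(20_000, 22_050, 0.5, 80))   # 106 - hopeless as a simple analogue filter
```

In practice the early "brick wall" filters were typically high-order elliptic designs rather than Butterworth ones, which need far fewer sections but bring exactly the ripple and phase behaviour described below.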

Virtually all modern ADCs and DACs use oversampling...
Oversampling essentially uses a 'trick' to allow the use of a filter that is far more gradual.
(This simplifies the process of designing a filter that is flat to 20 kHz, yet still provides excellent attenuation of aliases, and can be produced for a reasonable cost.)
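Using the same back-of-envelope formula as the sketch above, here is why the "trick" helps so much: with, say, 4x oversampling the sharp cutoff is handled by a digital filter, and the analogue filter after the converter only has to be fully down by the first image region, near 176.4 kHz minus 20 kHz. The 4x figure and the 0.5 dB / 80 dB targets are again just assumptions for illustration.

```python
import math

def butterworth_order(f_pass_hz, f_stop_hz, max_passband_loss_db, min_stopband_atten_db):
    """Same textbook minimum-order formula as in the previous sketch."""
    ratio = (10 ** (min_stopband_atten_db / 10) - 1) / (10 ** (max_passband_loss_db / 10) - 1)
    return math.ceil(math.log10(ratio) / (2 * math.log10(f_stop_hz / f_pass_hz)))

# With 4x oversampling (176.4 kHz), the analogue reconstruction filter only needs
# its full attenuation by roughly 176.4 kHz - 20 kHz = 156.4 kHz:
print(butterworth_order(20_000, 156_400, 0.5, 80))   # 5 - a simple, cheap, gentle filter
```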

HOWEVER, oversampling technology was NOT yet developed when the Red Book standard was created.
And, without oversampling, the design criteria for the proper filter are so extreme that, as a result, most early equipment performed quite poorly (and equipment that performed well was extremely expensive).

Without oversampling, given the requirements for encoding and decoding signals "right up to the Nyquist frequency", there is a tradeoff:
- either use a somewhat gradual filter, and accept a high-frequency roll off that starts well below 20 kHz, as well as significant high frequency phase shift and significant aliasing distortion
- design a very complex filter, which is difficult and expensive to produce, and still introduces excessive phase ripple and other problems

Oversampling has essentially eliminated this issue entirely... which is why it is so widely used.
However, since oversampling wasn't available when the standard was created, it was a bad idea to set requirements for the standard that impose such a serious compromise.
(Even raising the sample rate from 44.1k to 48k, as was recommended by some engineers at the time, would have significantly relaxed the tradeoff between cost, complexity, and performance.)
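For completeness, the 48k alternative mentioned above works out (with the same assumed 0.5 dB / 80 dB targets, again purely back-of-envelope numbers added here for illustration) to roughly half the filter order of the 44.1k case - a genuine relaxation, though still a demanding analogue filter without oversampling.

```python
import math

def butterworth_order(f_pass_hz, f_stop_hz, max_passband_loss_db, min_stopband_atten_db):
    """Same textbook minimum-order formula as in the sketches above."""
    ratio = (10 ** (min_stopband_atten_db / 10) - 1) / (10 ** (max_passband_loss_db / 10) - 1)
    return math.ceil(math.log10(ratio) / (2 * math.log10(f_stop_hz / f_pass_hz)))

print(butterworth_order(20_000, 22_050, 0.5, 80))   # 44.1k sampling, no oversampling: 106
print(butterworth_order(20_000, 24_000, 0.5, 80))   # 48k sampling, no oversampling:   57
```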

I knew all of this. I just don't worry anymore about problems from the year 1983, when I was 12 years old and knew nothing about oversampling and aliasing. I was building a large E.T. figure from Legos. I bought my first CD and CD player in 1990. How much did you suffer from 44.1 kHz sampling in 1983? Nobody suffered. CD was a new and exciting music format that took transparency to a new level in home audio.
 
Feb 1, 2019 at 2:37 PM Post #12,410 of 18,241
I don't know about suffering.....

But I've had CD players and DACs since the 1980's....

And, back in those days, they most certainly did NOT all sound the same.

I knew all of this. I just don't worry anymore about problems from the year 1983, when I was 12 years old and knew nothing about oversampling and aliasing. I was building a large E.T. figure from Legos. I bought my first CD and CD player in 1990. How much did you suffer from 44.1 kHz sampling in 1983? Nobody suffered. CD was a new and exciting music format that took transparency to a new level in home audio.
 
Feb 1, 2019 at 4:05 PM Post #12,411 of 18,241
But he's never done a controlled listening test to verify that subjective impression, because... (insert a million excuses based on extreme situations that no one would run across in the real world, doubts about the validity of scientific testing procedures, irrelevant analogies and declarations of ennui).
 
Last edited:
Feb 1, 2019 at 5:06 PM Post #12,412 of 18,241
I also never did a controlled listening test to confirm that my first $29 cassette player sounded audibly different than a CD...
(The hiss and lack of high frequencies just seemed sort of obvious.)

However, as I recall, I did have someone do some double-blind switching for me between my first external DAC and my then-current Rotel CD player.
The DAC was rather more expensive - and the difference was actually rather subtle.
(But, since I can't recall the exact model of the DAC, and I'm not at all sure what the brand was either, those results are somewhat moot. :beyersmile: )

But he's never done a controlled listening test to verify that subjective impression, because... (insert a million excuses based on extreme situations that no one would run across in the real world, doubts about the validity of scientific testing procedures, irrelevant analogies and declarations of ennui).
 
Feb 1, 2019 at 8:46 PM Post #12,413 of 18,241
I don't know about suffering.....

But I've had CD players and DACs since the 1980's....

And, back in those days, they most certainly did NOT all sound the same.

I believe you. DACs have come a long way since those days. They were 14 bit, jitter was probably HUGE, etc. Also, the analog section of a DAC can add its own flavor. The question is: did those early CD players and DACs give you more transparent sound than, for example, vinyl or not?
 
Feb 1, 2019 at 8:57 PM Post #12,414 of 18,241
(You simply end up with a public perception that your standard doesn't work very well.... note how many audiophiles complained about "the poor sound quality of early CDs".)
How many audiophiles? Some maybe, but they were on the outer. The vast majority of audiophiles were waxing lyrical about how much better CDs sounded compared to all that went before it.

I know because I was there in that era and right into hi fi. I went to many hi fi exhibitions where CD players were wowing the audiences while the turntables and cassette decks of the day were relegated to the side. Hi Fi mags were also waxing lyrical, apart from one or two writers like the crank Fremer.

So sorry, there were not "many" audiophiles complaining about the poor sound quality of CDs, quite the opposite. The complaints about sound quality of CDs really started from the mid to late 90s, when digital consoles were introduced into studios which enabled producers to crank up the loudness and compression.
 
Feb 1, 2019 at 8:59 PM Post #12,415 of 18,241
I'm not sure about the context of that quote. There really wasn't any digital mastering in 1985, it didn't become even a practical possibility until a decade later and it was almost another decade before the mastering tools had improved to the point that mastering in the digital domain became a viable alternative. Remember that contrary to popular belief, the SPARS code (AAD, DDD, etc.) did not refer to the domain of the procedures but the domain of what those procedures were recorded to. For example, if we record the musicians to digital recording media, mix it in the analogue domain then record that final (analogue) mix to digital, then master in the analogue domain and record the completed master to digital, the SPARS code would be "DDD" (even though it's been both mixed and mastered in the analogue domain). If we're talking about the actual processes, then with the exception of a very few classical recordings (a couple of labels had proprietary digital systems and minimal mixing and mastering), pretty much all recordings up to the mid/late 1990's should have been labelled DAA, then gradually DDA and finally, DDD would have started appearing in the early 2000's.
G

Yes you are right, the quote was from Bob Clearmountain, not Ludwig - and he wasn't specifically referring to mixing or mastering. The quote is in the article below.

https://www.laweekly.com/music/why-cds-may-actually-sound-better-than-vinyl-5352162
 
Feb 2, 2019 at 1:38 AM Post #12,416 of 18,241
How many audiophiles? Some maybe, but they were on the outer. The vast majority of audiophiles were waxing lyrical about how much better CDs sounded compared to all that went before it.

I know because I was there in that era and right into hi fi. I went to many hi fi exhibitions where CD players were wowing the audiences while the turntables and cassette decks of the day were relegated to the side. Hi Fi mags were also waxing lyrical, apart from one or two writers like the crank Fremer.

So sorry, there were not "many" audiophiles complaining about the poor sound quality of CDs, quite the opposite. The complaints about sound quality of CDs really started from the mid to late 90s, when digital consoles were introduced into studios which enabled producers to crank up the loudness and compression.

Sorry, there may have been only a few of us who reacted negatively to the sound of "perfect sound forever" initially - but WE DEFINITELY EXISTED !!!!

I never even heard of Fremer until, say, a decade or so later...

My first encounter with what proved to be the dreaded corruption of music called CD ( or, in those early days, DAD ) was at what was then our yearly consumer electronics show - Sejem Elektronike. It might have been 1981 or 1982 - only the first Philips and Hitachi players were on active demo display; no other manufacturer had one actually in production, let alone ready to be shown in public.

Since I NEVER trust sonic impressions at fairs ( poor room acoustics, corny music, etc ), I did all I could in order to eliminate the former - and certainly, I did not have any CDs of my own at the time. So, Audio Technica ATH-7 electret headphones plus a Van Alstine 120 C MOSFET power amp to power them were with me at both demos.

I can CLEARLY remember to this day the words one member of Emona Commerce, the importer and distributor for Hitachi in Yugoslavia at the time, said to another member of the staff while handing him the ATH-7s so he could listen to the Hitachi CD player through the Van Alstine/ATH-7: "Come... listen to the dreams...!"

Well, MAYBE the sound he heard was "dreams" to him - because he clearly was not familiar with anything even remotely of similar quality. My phono cartridge at the time was a Supex SD-900 Super - or possibly already a Grado G1+; both ran rings around CD, and particularly around the first CD machines.

This happened about an hour or two after a similar thing occurred at the Philips booth - with Philips themselves doing the demo. The Philips guy, after hearing for himself his demo piece exposed for what it really was, made sure I left ASAP ...

For these demos, one particular recording/album needs to be stressed: Dire Straits - Love over Gold. For the initial release, Philips made every effort imaginable to make BOTH the new CD and the LP version sound the best they could. It was clearly aimed at proving that the CD eclipses the LP. This strategy backfired - royally so. In fact, the best turntables of the day ( remember, the Dynavector Karat Diamond cartridge was three years old at the time... ) made such a mockery out of ANY then-available CD that it was laughable... Philips won't admit to having ORDERED BACK any copy of the original LP release of Love over Gold they could possibly still get back from the vendors - in order to replace it in the market with the "doctored" "remaster or whatever" - which finally "proved" that CD sounds better than LP.

Yeah, RIGHT - but only IF you are big enough to have the power to pull off such a fast one - and get away with it.
 
Feb 2, 2019 at 3:09 AM Post #12,418 of 18,241
Every setting and calibration in analog recording changes the sound; there is no setting that is transparent. 30 IPS gives better SN at the loss of low frequency response, 15 IPS gives better lows but you lower SN. That is just one of the dozens of tradeoffs in analog recording. I can't even tell you how many thousands of hours I spent calibrating tape machines. It is the first thing you do every session, then again when you change reels.
You have wow and flutter, tape compression, print-through, oxide loss, generational loss, poor crosstalk, and many other losses and distortions. Vinyl adds a whole other layer of losses and distortion on top of that.

High Com was on Akai decks in the '80s. I played with it a little. By played with, I mean I tested multiple tape types, brands and recording levels, A/B'd to the source. Like all companding systems it had trade-offs. No matter what noise reduction system you use, you cannot get around tape compression and all the mechanical issues. It was a consumer system anyway, so not of any use in a studio. Noise reduction systems peaked with Dolby SR, which never gained acceptance in music production but was popular in film production. SN is more important than fidelity in film.

Plug mic preamps directly into any Studer, Ampex, Stevens, Otari or 3M tape machine, listen to what goes in, listen to what comes out, and if you are lucky you get 80% of what went in. Lifeless, compressed, soft crap, a faint shadow of the performance. The artifacts are unbearable.
 
Feb 2, 2019 at 4:14 AM Post #12,419 of 18,241
Agreed. Some limitations cited are real.

Some apply only if you are strictly a USER - and use commercially available equipment in stock form.

You would have been shocked to have been given a demo of even a stock, properly functioning Technics RS-AZ7 cassette deck - NO commercially available R2R used in studios had amorphous / magnetoresistive heads - hence the poor bass and channel separation/high crosstalk of studio machines. No stock equipment I know of had its MPX filters TRULY out of the circuit when set to the "MPX filter off" position - hence poor extension above 20 kHz and unnecessary phase errors in the treble, etc, etc. These errors ARE audible.

No commercially available High Com had its most glaring shortcomings removed ... you had to do it by yourself.

It was, basically, NOT a limitation of the technology itself; it was more the inability to utilize the capabilities of the technology to the fullest.
 
Feb 2, 2019 at 5:01 AM Post #12,420 of 18,241
Even if the best analog system can meet or surpass the capabilities of digital :thinking:, I'm failing to understand the point/practicality of it.

To my understanding, this technology has gone the way of the Betamax...it's not used by the industry and, therefore, has no real value to someone that is interested in listening to a broad selection of music.

While I'm a fan of Knopfler / Dire Straits, I'm not sure I want to listen to 'Love over Gold' on repeat for eternity :wink:

Feel free to educate me if I'm missing something...
 
