Why 24 bit audio and anything over 48k is not only worthless, but bad for music.
Nov 20, 2017 at 3:01 PM Post #2,671 of 3,525
I was speaking in regard to visually captured frequencies, I know that many kinds of waves can interfere with audio recording. Is it actually the mic's transducer picking up the radio waves or is it creating noise inside the circuitry?

In regard to the visual stuff, the light sensors in some cameras can actually pick up quite a bit more than just visible light. I know of a few that are very sensitive into the infrared; with a few hacks to remove the IR filter over the sensor, and a visible-light-blocking filter over the lens (some allow the cutoff wavelength to be adjusted on a ring), they can create beautiful photos such as this:

P5242330_edit2_red.jpg
 
Nov 20, 2017 at 3:35 PM Post #2,672 of 3,525
Again, obviously we look at things differently......

For example..... yes, camera sensors, and lenses, impose their own limitations.
Therefore, the best possible version of the information I can get from my camera in digital form is that RAW file.
(I can obsess at another level how much I want to spend on my camera and lenses.)
Therefore, MY only choice is to either keep EVERYTHING the camera lets me have, decide myself to discard some of it, or let someone or something else make that decision for me.
Given those choices, keeping everything seems like the safest choice.
(Deciding with absolute certainty what belongs there and what is an artifact of the camera is beyond my current capabilities.)

Likewise, if I hear hiss in a recording, I can't know FOR SURE if it's hiss from the microphone preamp, a noisy steam radiator, or the air hissing in a pump-powered organ.
If it's microphone preamp hiss - then it's probably "an artifact".
But, if it's the hiss of the air in the organ pipes, then it actually belongs there - as part of the original experience.
And, if it's the radiator, then only the recording engineer knows whether he intended it to be there or not (maybe it's part of his "artistic vision").
Therefore, I'd rather keep it - at least in my master copy - than ASSUME I know it is an artifact.

I tend to divide the world into "things I have control over" and "things I don't".
In the case of music, everything up until the output of the mixing console is "whatever the mixing engineer says it should be".
(So, if he didn't remove that hiss, then I guess maybe he wants it there.... or maybe not.... but I'll never know for sure.)
It's part of the PRODUCTION rather than of the REPRODUCTION.
But, once the music has been PRODUCED, then the goal is to REPRODUCE it.
To me it seems obvious that part of the process should start with an exact copy of what the mixing engineer intended me to have.
I may CHOOSE to discard parts of it, or alter parts of it, but I don't want equipment that limits my choices by being UNABLE to reproduce all of it, or that eliminates parts of it without asking me.

I agree that stereo is a compromise....... but I guess I've just grown accustomed to those particular compromises.
(Surround sound also causes compromises of a different sort.)
We've also got the situation that the mix engineer has made that choice for me........ (either it was recorded and mixed in surround or not - so that's outside of my control).

I want to "over-capture" because it saves the most information.....
And, yes, sometimes there is a situation where the information "grows" because of our inability to retrieve it optimally.

To use the photo example......
I may take a photo at a certain resolution...
And then have that photo printed as a dot-lithograph... (let's assume we print it at 200 dpi).
And, by an unfortunate circumstance, that lithograph may be the only copy I have (someone's dog ate the negatives).
Now I'm stuck with the limitation of that lithograph.
So, what resolution do I scan it at to do the best possible job of making a "new digital negative" of my image?
Some people might suggest that 200 dpi is plenty...
But it's not.
Because the dots on my scanner won't line up perfectly with the litho dots...
In fact, the best chance I'll have for the best possible quality will be to scan it at WAY over 200 dpi...
I'll scan it at 2400 dpi..... so I can actually see the shape of the dots from the lithograph...
That way each dot can be represented by dozens of scanned pixels, and the color proportions will come out just right.
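The oversampling argument can be sketched numerically. A toy model (all numbers illustrative: a 12x factor matching a 2400 dpi scan of a 200 dpi litho, and an idealized round ink dot) shows that averaging the oversampled scan recovers the original tone, while sampling at the litho's own pitch can land between dots and miss the image entirely:

```python
import numpy as np

g = 0.30      # true gray level of the original photo (30% ink coverage)
over = 12     # 2400 dpi scan / 200 dpi litho = 12x oversampling

# Build one halftone cell at scan resolution: a round ink dot whose
# area fraction equals the gray level g.
yy, xx = np.mgrid[:over, :over]
c = (over - 1) / 2
r2 = g * over * over / np.pi                  # radius^2 giving area g
dot = ((xx - c) ** 2 + (yy - c) ** 2) < r2    # True where there is ink

page = np.tile(dot, (50, 50)).astype(float)   # the scanned lithograph

# Averaging the 12x12 block of scan pixels per dot recovers the tone...
recovered = page.mean()                       # ~0.30

# ...while a "200 dpi scan" (one sample per litho dot, misaligned with
# the dot centers) falls between the dots and reads no ink at all here.
naive = page[::over, ::over].mean()           # 0.0
```

The point carries over: the scan grid never aligns with the litho dots, so the safe move is to resolve the dots themselves and let averaging reconstruct the tone.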

My point about tape (and even cassette) is that there were people who insisted that both were good enough.
I knew one person who had a very expensive Nakamichi cassette recorder... he swore that it could make cassettes that were "indistinguishable from the original".
My real point is that there is a long history of people claiming that something was "good enough" and being proven wrong.
I'd rather spend a little extra time and money rather than RISK making that mistake again.
(again, within what's within my ability to control....... )

And, no, that doesn't mean that I don't enjoy a poor quality recording of a great performance....
It just means that I'd enjoy it MORE if the quality was better.
(And, yes, listening to a poor quality recording, and wondering if the other version I didn't buy sounds better, would make me unhappy.....)

The world we live in is not binary. A binary thinker that tries to deal with that world will always be compromising his goals. I choose to be happy with grays, because they might just be the exact grays the world has in it.
But you actually aren't looking at the real original! That's an acoustic event. Your "perfect" 24/96 copy isn't the original either, it's been through at least two transducers that have radically modified it, then it's been digitized and reconstructed. You've lost far more audible information in all of that than you would with 320kbps AAC (sorry, mp3 is so yesterday). Change a mic, change a speaker, change the room...radical and unmistakable difference. Compare your 24/96 to the 320k AAC...not so much, in fact, none at all. Pick your means of loss, obsess on the tiny ones if you like, the huge ones are far more in your life.
Not binary at all. Camera sensors generate noise that's not part of the image. They add something you really don't want. There's no reason to keep what you don't want and wasn't part of the original.
Yes, but it's a question of degree, and end use. Nobody who shoots film presents his negatives in a display, because they are not usable that way. They must be converted to positive images first, and that process discards information. RAW files cannot be displayed; they must be converted first, and that process discards information. Always. It's just a question of what information you choose to discard. It's subjective.
Totally agree, because I have gone back to RAW files and reprocessed them for alternate goals. No argument, but also, no parallel either.
Oh sure, that's true. But I don't shoot RAW+JPG. I shoot RAW only, create the jpg if needed using my own judgement.
You might give it a try. Stereo is fatally flawed in many ways that multi-channel is not. That's been known since the original Bell Labs experiments, but we got stuck with two channels for economic/practical reasons. 3 was the absolute minimum Bell arrived at. You might not like the "in the band" perspective, I usually don't. But as far as immersing yourself in an event, give it a try. Stereo is NOT more pure, it's loaded with compromise.
Ok, that's fine. But I said 23kHz, and "time smear" audibility is unproven. When it is proven, I'll be on that train.
I'm not, but if you buy music someone else has recorded, you're stuck with whatever they did, and if it's not using mics flat to over 30kHz and released unfiltered at 24/96 you won't be happy. I differ. I listen to their art for the enjoyment, and I'm happy with many different sample rates.
I must take exception here. As an audio professional for over 45 years, I NEVER considered cassettes "good enough". They were the best most consumers could access. I NEVER considered open reel tape "good enough", and I worked with some of the best pro reel machines ever made, and the best noise reduction systems, and the best mics, mixers...everything. Tape was NEVER good enough, it was at one time the best we had. JPG images when introduced were never "good enough", they too were the best digital camera images we had. At that time 35mm film was much, much better...still not "good enough". I saw an exhibit of Ansel Adams original prints...huge ones...and even they showed that 8x10 negatives were not quite "good enough". So what? I thoroughly enjoyed them, and images and recordings in all of those formats.
But what if that bat is the only sound over 23kHz, the rest of that spectrum being random noise? You still want to over-capture without understanding the original signal.
Part of the master, yes. Part of the original, no. And if it's part of the master in an area where none of the original information exists, it's a defect, a distortion. The only way you're removing it is to limit bandwidth, there are not a "number of ways" to eliminate ultrasonic noise.
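Limiting bandwidth is straightforward to sketch. Assuming a 96 kHz file and a plain windowed-sinc FIR (an illustrative textbook design, not any particular product's filter), a low-pass around 22 kHz removes ultrasonic content while leaving the audio band essentially untouched:

```python
import numpy as np

fs = 96_000                        # 24/96-style sample rate
t = np.arange(fs) / fs             # one second of signal
audio = np.sin(2 * np.pi * 1_000 * t)           # in-band content (1 kHz)
ultra = 0.5 * np.sin(2 * np.pi * 30_000 * t)    # ultrasonic noise (30 kHz)
x = audio + ultra

# Windowed-sinc low-pass FIR with ~22 kHz cutoff (illustrative design).
n_taps = 255
fc = 22_000
k = np.arange(n_taps) - (n_taps - 1) / 2
taps = np.sinc(2 * fc / fs * k) * np.hamming(n_taps)
taps /= taps.sum()                 # unity gain at DC

y = np.convolve(x, taps, mode="same")

# With a 1-second signal, rfft bin n corresponds to n Hz.
X = np.abs(np.fft.rfft(x))
Y = np.abs(np.fft.rfft(y))
inband_ratio = Y[1_000] / X[1_000]     # close to 1: the tone survives
ultra_ratio = Y[30_000] / X[30_000]    # tiny: the ultrasonic noise is gone
```

This is the sense in which there is only one way to eliminate ultrasonic noise: some filter, somewhere, removes everything above the chosen cutoff.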
 
Nov 20, 2017 at 3:43 PM Post #2,673 of 3,525
As far as I know, while certain really obscure microphones can detect audio (vibration) up into the low RF frequencies, maybe as high as 2 MHz, they are few and far between, and absurdly expensive. What you normally encounter is that something in the preamp, usually a transistor junction, is "detecting" the radio signals (extracting the audio frequency from the carrier), the audio portion then leaks into the audio circuitry, and that's what you're hearing. If you have one of those old transistor radio crystal earphones, you can sometimes hear AM radio simply by touching the wire to a big piece of metal (try a chain link fence). The piezoelectric "crystal" element in the earpiece acts as both the detector and the speaker.
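The "transistor junction as detector" effect is just envelope detection: any nonlinearity (modeled here as an ideal half-wave rectifier) followed by a little low-pass smoothing recovers the audio from the AM carrier. A scaled-down numerical sketch, with made-up frequencies:

```python
import numpy as np

fs = 1_000_000                    # 1 MHz sampling (scaled-down model)
t = np.arange(10_000) / fs        # 10 ms
audio = np.sin(2 * np.pi * 1_000 * t)        # 1 kHz "program" material
carrier = np.sin(2 * np.pi * 100_000 * t)    # 100 kHz "station" carrier
am = (1 + 0.5 * audio) * carrier             # amplitude modulation

# A diode-like junction passes only one polarity (the nonlinearity)...
rectified = np.maximum(am, 0.0)

# ...and any following low-pass (here a 50 us moving average) strips
# the carrier, leaving the audio envelope behind.
kernel = np.ones(50) / 50
detected = np.convolve(rectified, kernel, mode="same")
detected -= detected.mean()

# The detected signal tracks the original audio closely.
corr = np.corrcoef(detected, audio)[0, 1]
```

That is the whole crystal-radio earphone trick: the piezo element supplies both the nonlinearity and the transducer.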

Near IR photos can be lovely.

Actually, as far as I know, all current camera sensors are fully sensitive to near-IR.
On the better cameras, there is a filter to block those frequencies because they interfere with the normal light image.
(If you add the image below to a full color version of the same image the IR information will make the trees look washed out and milky.)
Most cameras can have the filter removed - although it can be complicated - and you risk ruining an expensive camera (you need to add a piece of clear glass to avoid compromising the autofocus capabilities).
LifePixel is one company that sells commercially modified cameras - and does the modifications (I have a Nikon D40x they modified).
(they also have a big gallery of pictures on their website)

It would be really nice if the camera could actually record all the wavelengths - so you could choose the ones you want to use afterwards.
Unfortunately, cameras only "see" in three colors, so the IR is seen mostly by the red sensor.... and there's no way to separate them.
(You block the visible red with an IR filter - and so get a picture of just the IR....... but there's no way to photograph both, then later choose which one alone you want.)

Nov 20, 2017 at 4:13 PM Post #2,674 of 3,525
I see no purpose to this end of the conversation.
 
Nov 20, 2017 at 4:21 PM Post #2,675 of 3,525
Again, obviously we look at things differently......

For example..... yes, camera sensors, and lenses, impose their own limitations.
Therefore, the best possible version of the information I can get from my camera in digital form is that RAW file.
(I can obsess at another level how much I want to spend on my camera and lenses.)
Therefore, MY only choice is to either keep EVERYTHING the camera lets me have, decide myself to discard some of it, or let someone or something else make that decision for me.
Except that's not how this works. Your RAW is not displayable at all. Just to see it and make your judgement of what you think you want to discard you have to throw information out.
Given those choices, keeping everything seems like the safest choice.
I've never once argued against keeping the RAW image. It's what I do.
(Deciding with absolute certainty what belongs there and what is an artifact of the camera is beyond my current capabilities.)
But it's actually not. You just need to learn what to look for. What you're doing when you process a RAW file for display is making that choice, either you or your computer, but it's already being made, and quite intelligently.
Likewise, if I hear hiss in a recording, I can't know FOR SURE if it's hiss from the microphone preamp, a noisy steam radiator, or the air hissing in a pump-powered organ.
If it's microphone preamp hiss - then it's probably "an artifact".
But, if it's the hiss of the air in the organ pipes, then it actually belongs there - as part of the original experience.
And, if it's the radiator, then only the recording engineer knows whether he intended it to be there or not (maybe it's part of his "artistic vision").
Therefore, I'd rather keep it - at least in my master copy - than ASSUME I know it is an artifact.
You've once again missed the point entirely. If the noise is from an instrument, it's probably not an artifact, it's made by the instrument. If the noise is from electronics, it's not part of the acoustic event. If the noise is audible, it's a flaw, a defect, but likely nothing we can do anything about. If the noise is ultrasonic, it's not part of the original event AND something we can eliminate.
I tend to divide the world into "things I have control over" and "things I don't".
In the case of music, everything up until the output of the mixing console is "whatever the mixing engineer says it should be".
(So, if he didn't remove that hiss, then I guess maybe he wants it there.... or maybe not.... but I'll never know for sure.)
It's part of the PRODUCTION rather than of the REPRODUCTION.
But, once the music has been PRODUCED, then the goal is to REPRODUCE it.
To me it seems obvious that part of the process should start with an exact copy of what the mixing engineer intended me to have.
I may CHOOSE to discard parts of it, or alter parts of it, but I don't want equipment that limits my choices by being UNABLE to reproduce all of it, or that eliminates parts of it without asking me.
That's fine. As when I have my engineer hat on, I make those decisions all the time, and you already know what I'm going to do.
I agree that stereo is a compromise....... but I guess I've just grown accustomed to those particular compromises.
(Surround sound also causes compromises of a different sort.)
We've also got the situation that the mix engineer has made that choice for me........ (either it was recorded and mixed in surround or not - so that's outside of my control).
You need to study this a bit more. You're implying that an engineer's surround mix is somehow wrong, and that's incorrect. Surround sound includes a rather well standardized speaker layout that even you can follow. And standard calibration. And even known play levels. With those tools alone you can stand a far, far better chance of hearing what the engineer heard, and that is, in fact, the goal of some higher end home AV systems. That goal does not, and cannot exist in two channel stereo. And we're ignoring all the rather important issues of phantom imaging (or the lack of). I'm surprised that you don't understand all of this given your connection with your employer.
I want to "over-capture" because it saves the most information.....
And, yes, sometimes there is a situation where the information "grows" because of our inability to retrieve it optimally.
And I want to over capture too, to preserve as much of the information of the original event as possible. I don't want to capture more of what wasn't there in the first place.
To use the photo example......
I may take a photo at a certain resolution...
And then have that photo printed as a dot-lithograph... (let's assume we print it at 200 dpi).
And, by an unfortunate circumstance, that lithograph may be the only copy I have (someone's dog ate the negatives).
Now I'm stuck with the limitation of that lithograph.
So, what resolution do I scan it at to do the best possible job of making a "new digital negative" of my image?
Some people might suggest that 200 dpi is plenty...
But it's not.
Because the dots on my scanner won't line up perfectly with the litho dots...
In fact, the best chance I'll have for the best possible quality will be to scan it at WAY over 200 dpi...
I'll scan it at 2400 dpi..... so I can actually see the shape of the dots from the lithograph...
That way each dot can be represented by dozens of scanned pixels, and the color proportions will come out just right.
You might consider not doing any more imaging analogies, you don't seem to understand printing or image processing. I get what you're trying to say, but your analogy is horrendously out of touch with reality.
My point about tape (and even cassette) is that there were people who insisted that both were good enough.
I knew one person who had a very expensive Nakamichi cassette recorder... he swore that it could make cassettes that were "indistinguishable from the original".
My real point is that there is a long history of people claiming that something was "good enough" and being proven wrong.
I'd rather spend a little extra time and money rather than RISK making that mistake again.
(again, within what's within my ability to control....... )
Here's where we differ a lot. You don't seem to feel comfortable with your understanding of what is and is not audible. In the tape days it was easily provable at any point in time that the system was not perfect or even adequate for capturing the original. Anyone making the claim above was deluded (though clearly very happy with his gear). The medium was vastly worse than its input signal. Today we can still easily prove what's audible and what's not. Our digital medium is also measurable, provable, and verifiable as to its efficacy and deficiencies. You seem to think that knowledge doesn't exist. So you slap more resolution on it as a solution, when it solves nothing, and creates further potential problems.

You admit that the production process is out of your control. So, then, why are we even arguing? You'll take what we give you and like it...or hate it (I'm sure there's no neutral).
And, no, that doesn't mean that I don't enjoy a poor quality recording of a great performance....
It just means that I'd enjoy it MORE if the quality was better.
(And, yes, listening to a poor quality recording, and wondering if the other version I didn't buy sounds better, would make me unhappy.....)
Pretty sure we've both bought multiple versions of material hoping for something better or different. That goes back to the tube/vinyl days, nothing new there. One of my favorite pieces of music was recorded in the 1950s, and the original RCA pressings were only fair. I bought several pressings; none were good. Decades later I got the CD. Guess what? The same distortion I didn't like on vinyl was perfectly preserved in bits! They pushed the record level into tape saturation.

Ultimately you have two choices: take it or leave it. Hey! That's a binary decision! You should have no trouble with it.
 
Nov 20, 2017 at 4:24 PM Post #2,676 of 3,525
Again, obviously we look at things differently......

For example..... yes, camera sensors, and lenses, impose their own limitations.
Therefore, the best possible version of the information I can get from my camera in digital form is that RAW file.
(I can obsess at another level how much I want to spend on my camera and lenses.)
Therefore, MY only choice is to either keep EVERYTHING the camera lets me have, decide myself to discard some of it, or let someone or something else make that decision for me.
Given those choices, keeping everything seems like the safest choice.
(Deciding with absolute certainty what belongs there and what is an artifact of the camera is beyond my current capabilities.)

..........

I want to "over-capture" because it saves the most information.....
And, yes, sometimes there is a situation where the information "grows" because of our inability to retrieve it optimally.

To use the photo example......
I may take a photo at a certain resolution...
And then have that photo printed as a dot-lithograph... (let's assume we print it at 200 dpi).
And, by an unfortunate circumstance, that lithograph may be the only copy I have (someone's dog ate the negatives).
Now I'm stuck with the limitation of that lithograph.
So, what resolution do I scan it at to do the best possible job of making a "new digital negative" of my image?
Some people might suggest that 200 dpi is plenty...
But it's not.
Because the dots on my scanner won't line up perfectly with the litho dots...
In fact, the best chance I'll have for the best possible quality will be to scan it at WAY over 200 dpi...
I'll scan it at 2400 dpi..... so I can actually see the shape of the dots from the lithograph...
That way each dot can be represented by dozens of scanned pixels, and the color proportions will come out just right.

My point about tape (and even cassette) is that there were people who insisted that both were good enough.
I knew one person who had a very expensive Nakamichi cassette recorder... he swore that it could make cassettes that were "indistinguishable from the original".
My real point is that there is a long history of people claiming that something was "good enough" and being proven wrong.
I'd rather spend a little extra time and money rather than RISK making that mistake again.
(again, within what's within my ability to control....... )

And, no, that doesn't mean that I don't enjoy a poor quality recording of a great performance....
It just means that I'd enjoy it MORE if the quality was better.
(And, yes, listening to a poor quality recording, and wondering if the other version I didn't buy sounds better, would make me unhappy.....)

You are conflating the demands of an acquisition or editing format with those of a delivery format. In terms of this ongoing (and often irrelevant) comparison to film, imagine going down to Best Buy, picking up a copy of your favorite film, and finding that they delivered the movie to you on a reel of undeveloped negatives. All of the data is right there! Completely intact! Not a single missed color, not a compression artifact in sight… but completely useless to an end consumer.

The need to preserve data through generational loss (which really isn’t a problem with digital audio anyway) isn’t a consumer issue, it’s a production issue. Retaining data to make editing decisions later. Consumers don’t need to make those decisions, don’t need to second guess the colorist, don’t need to wonder if a scene should have a lower gamma level. For a delivery codec, you find out what the perceptual limits are for 99.9% of people, you find out what your storage or delivery limit is, and you hit that target.

You are constantly trying to make your point via allegory, and those allegories are false and misleading. Like when they tell you the protective coating is an invisible bubble of protection around your vehicle's paint. It's not a bubble. It's not protective. It's not even there. It's storytelling.
 
Nov 20, 2017 at 4:33 PM Post #2,677 of 3,525
As far as I know, while certain really obscure microphones can detect audio (vibration) up into the low RF frequencies, maybe as high as 2 MHz, they are few and far between, and absurdly expensive. What you normally encounter is that something in the preamp, usually a transistor junction, is "detecting" the radio signals (extracting the audio frequency from the carrier), the audio portion then leaks into the audio circuitry, and that's what you're hearing. If you have one of those old transistor radio crystal earphones, you can sometimes hear AM radio simply by touching the wire to a big piece of metal (try a chain link fence). The piezoelectric "crystal" element in the earpiece acts as both the detector and the speaker.
No point here. We have transducers that exceed human capabilities. Have had for a very long time. We don't use them to record music.
Near IR photos can be lovely.
They are artistic, using different tools.
Actually, as far as I know, all current camera sensors are fully sensitive to near-IR.
On the better cameras, there is a filter to block those frequencies because they interfere with the normal light image.
No, ALL color cameras must have IR filters.
(If you add the image below to a full color version of the same image the IR information will make the trees look washed out and milky.)
Most cameras can have the filter removed - although it can be complicated - and you risk ruining an expensive camera (you need to add a piece of clear glass to avoid compromising the autofocus capabilities).
LifePixel is one company that sells commercially modified cameras - and does the modifications (I have a Nikon D40x they modified).
(they also have a big gallery of pictures on their website)
That's art, not accurate reproduction. You certainly must know the difference.
It would be really nice if the camera could actually record all the wavelengths - so you could choose the ones you want to use afterwards.
A camera that records all the wavelengths wouldn't reproduce a visible picture. What you actually want is a camera sensor that behaves like a perfect retina. Any more is just art.
Unfortunately, cameras only "see" in three colors, so the IR is seen mostly by the red sensor.... and there's no way to separate them.
(You block the visible red with an IR filter - and so get a picture of just the IR....... but there's no way to photograph both, then later choose which one alone you want.)
A camera with its IR filter removed and no visible-block filter photographs both visible and IR.

Each RGB color channel can be easily separated in software.
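Separating stored RGB channels really is trivial. With an image held as an H x W x 3 array (the common numpy convention; the tiny stand-in image below is made up for illustration), each channel is just a slice:

```python
import numpy as np

# A tiny stand-in image: height x width x 3 (R, G, B), values 0-255.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[..., 0] = 200     # red channel
img[..., 1] = 50      # green channel
img[..., 2] = 10      # blue channel

# Each channel separates with a simple slice...
r, g, b = img[..., 0], img[..., 1], img[..., 2]

# ...and to view one channel alone, zero the others.
red_only = img.copy()
red_only[..., 1:] = 0
```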
 
Nov 20, 2017 at 6:03 PM Post #2,678 of 3,525
1)
Actually, no, in the past, many cameras omitted the filters.
Around the time commercial hand-held cameras passed the 1 MP point, most of them started including the filter.
Up until last year, the cameras in many phones omitted them (that's why you can test your IR remote control by pointing it at the camera on your phone).
In the last two or three years most of the better phones include the filter.
(Many security cameras specifically are still intended to work in B&W with an IR light source at night.)

2)
IR images are art... obviously.

3)
Your retina is able to "see" all visible frequencies by being sensitive to three specific bands.
It does not detect three specific frequencies, but detects three rather wide bands of frequencies, each CENTERED around red, green, and blue respectively.... and your brain applies a sort of image analysis to figure out what's going on.
With cameras, it was determined that, by using sensors that detect those same three frequency ranges, and phosphors that emit them, a facsimile could be reproduced that would fool our eyes.
However, the image itself is far from accurate.

If we had both a camera that stored all visible frequencies accurately, and a monitor that reproduced them, then it would work perfectly - and would look exactly like the original.
You would also get the same results as from the original if you used IR or UV filters.
And, since the image would contain all frequencies there in the original:
a) it would look exactly like the original when viewed by a human retina (or any other sensor)
b) unlike current photographs, you would also be able to decide whether to view red, green, blue, infrared, or any combination of light colors in the resulting image
c) you would also get a proper rainbow when you passed white light from it through a prism (when you pass white light from a current image through a prism you get three stripes - one in each primary color - which is INCORRECT)

The current tri-stimulus system is a very inaccurate compromise - but it was designed to work very well when used ONLY with the human retina as a sensor.
(Arguably it is a very good example of perceptual encoding.)

You can separate the R, G, and B ...... but you cannot separate the near IR - because it's using the red sensor.
(In a "full spectrum system" you would be able to filter any color or range of colors you wanted to.)

 
Nov 20, 2017 at 7:11 PM Post #2,679 of 3,525
1)
Actually, no, in the past, many cameras omitted the filters.
Around the time commercial hand-held cameras passed the 1 MP point, most of them started including the filter.
Up until last year, the cameras in many phones omitted them (that's why you can test your IR remote control by pointing it at the camera on your phone).
In the last two or three years most of the better phones include the filter.
(Many security cameras specifically are still intended to work in B&W with an IR light source at night.)

2)
IR images are art... obviously.

3)
Your retina is able to "see" all visible frequencies by being sensitive to three specific bands.
It does not detect three specific frequencies, but detects three rather wide bands of frequencies, each CENTERED around red, green, and blue respectively.... and your brain applies a sort of image analysis to figure out what's going on.
With cameras, it was determined that, by using sensors that detect those same three frequency ranges, and phosphors that emit them, a facsimile could be reproduced that would fool our eyes.
However, the image itself is far from accurate.

If we had both a camera that stored all visible frequencies accurately, and a monitor that reproduced them, then it would work perfectly - and would look exactly like the original.
You would also get the same results as from the original if you used IR or UV filters.
And, since the image would contain all frequencies there in the original:
a) it would look exactly like the original when viewed by a human retina (or any other sensor)
b) unlike current photographs, you would also be able to decide whether to view red, green, blue, infrared, or any combination of light colors in the resulting image
c) you would also get a proper rainbow when you passed white light from it through a prism (when you pass white light from a current image through a prism you get three stripes - one in each primary color - which is INCORRECT)

The current tri-stimulus system is a very inaccurate compromise - but it was designed to work very well when used ONLY with the human retina as a sensor.
(Arguably it is a very good example of perceptual encoding.)

You can separate the R, G, and B ...... but you cannot separate the near IR - because it's using the red sensor.
(In a "full spectrum system" you would be able to filter any color or range of colors you wanted to.)
I am, and I suspect just about everyone else is, now lost on what point you are trying to make. None of the above has anything to do with the topic, "Why 24 bit audio and anything over 48k is not only worthless, but bad for music."

I've attempted to clarify the principles of engineering, science and application. You have been, and keep on, posting inapplicable analogy after inapplicable analogy, with wild pseudoscientific concepts.

I must honestly thank you for the entirely new (and rather opposite of my former) view I now have of the company you represent, Emotiva. I formerly thought of it as a company grounded in practical applications here on Earth. I admit to being disappointed, but oh well, live and learn to recognize futility. I believe I've done both.
 
Nov 20, 2017 at 8:44 PM Post #2,680 of 3,525
I really can't believe how irrelevant this thread has gotten. What do cameras and gamma rays and all this stuff have to do with 24 bit audio being overkill? I'm getting to the point where there is nothing in here that is worth reading any more. As for the subject title, there's a link in my sig file that covers that completely.
 
Nov 21, 2017 at 10:33 AM Post #2,681 of 3,525
I'm not sure exactly what you mean. I have several image editing programs that will happily display the RAW images from all of my cameras.... as will most of my image viewers.
And, while my Nikons have a pretty good dynamic range, a good modern TV can match it (DSLRs don't tend to have an excessively wide dynamic range - although some of the new video cameras are much better).
But, yes, I always convert them to something else "for distribution".... because a lot of devices can't display RAW.... and, playing the part of "the recording engineer", it is my decision to make.
(And there are also a few situations where no current camera can handle the dynamic range - then we use HDR.)

And, again, we're back to my original point....
How do you KNOW which noise was made by the instrument and which was an artifact?
As a recording engineer, MAKING music, you can easily decide if you LIKE a particular sound or not.
However, as someone arranging to REPRODUCE it, I (or the customer) shouldn't be making that decision.

I have one recording that, in one part, has an odd little noise that sounds very much like a midrange with a jangled voice coil.
However, it plays the same on multiple speakers, so it's really part of the recording.
In fact, when you turn it up, and listen carefully enough, you can tell it's something vibrating on the drum kit.
I'm not sure why the recording engineer left it there - but I'm glad that it didn't mysteriously disappear when it passed through my equipment - because then my reproduction of that recording would be incorrect.
I may DECIDE to make a copy with that noise edited out - but I want to make that decision consciously if at all.

I never implied that any given surround sound mix is wrong.
In fact, by definition, WHATEVER the engineer did is "right" (which doesn't mean that I have to like it).
However, strictly speaking, other than possibly binaural, both stereo and surround sound fall short of being an accurate reproduction of the original.
The original was a bunch of sounds, produced by a variety of different instruments, each of which interacts differently with the room.
Neither a stereo pair of speakers nor a set of surround sound speakers can reproduce the complex radiation and reflection patterns accurately.
Therefore, both are a compromise, where we do our best to do what we can accurately, correct for or null out the obvious discrepancies, and hope for the best.
But, no, I have never heard a system, at any price, or with any level of detailed setup, where I could honestly say:
"If I dragged the conductor of that performance into my listening room he would not be able to tell whether this was a recording or a live performance."
I would consider both the stereo and surround sound mixes to be "artistic renditions of the original" - so the best we can do is to reproduce the mixing engineer's intent there.

We seem to be agreeing that no recording is perfect...
Therefore, it seems to follow logically that "there's room for improvement"...
In fact, I suspect that I more or less agree with your priorities (except that I prefer stereo to surround).
However, I'm also not prepared to declare that any area is "so perfect there is no possible room for improvement".

I think our biggest point of disagreement may be in your "faith" about modern equipment.
I've owned a lot of DACs..... many of them sound very similar... and many sound very obviously different.
In some cases, the differences in sound can be clearly traced to specific obvious differences in specifications, but in others it seems less so.
For example, MOST of the units I've owned that used the Sabre DAC chip have had a distinctive sound....
(Since the company who designed the chip originally claimed that they'd chosen their filter characteristics based on "what people liked in focus groups rather than what was the most numerically accurate", I see no surprise there.)
My problem is that, of the two dozen or so DACs I've owned, about half of them had a distinctive sound signature of one sort or another.
And, while I think it might be interesting to do the research to figure out where those differences come from, I'm too lazy to do so.
However, from my experience, it would be untrue to say that "all modern DACs sound the same" or even "MOST modern DACs sound the same".
I'm much more comfortable saying that "some modern DACs sound the same, and many sound quite similar, but you shouldn't ASSUME that a given one does without finding out for yourself".
(Sadly, I'm forced to suggest that people find out for themselves, even though we all know people have all sorts of biases and odd notions, because reviewers seem even more prone to odd opinions, and the commonly available specs don't seem to tell us quite enough - at least not yet.)
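One place where such differences really are concrete and specifiable is the reconstruction (anti-imaging) filter. A minimal sketch, assuming nothing about any vendor's actual designs: two windowed-sinc low-pass filters for 44.1k playback, one long and steep, one short and gentle, compared just above the audio band. The tap counts and cutoff are illustrative choices, not any real product's coefficients.

```python
import numpy as np

FS = 44_100

def windowed_sinc_lowpass(numtaps, cutoff_hz):
    # Classic Hamming-windowed sinc FIR, normalised to unity DC gain
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = np.sinc(2 * cutoff_hz / FS * n) * np.hamming(numtaps)
    return h / h.sum()

def gain_db(taps, f_hz):
    # Magnitude response at one frequency (direct DTFT evaluation)
    n = np.arange(len(taps))
    resp = np.sum(taps * np.exp(-2j * np.pi * f_hz / FS * n))
    return 20 * np.log10(abs(resp))

sharp = windowed_sinc_lowpass(255, 20_000)   # long filter, steep roll-off
slow = windowed_sinc_lowpass(15, 20_000)     # short filter, gentle roll-off

# Just above the audio band, the gentle filter leaves far more of the
# ultrasonic image energy behind -- a measurable, specifiable difference.
print(gain_db(sharp, 21_000), gain_db(slow, 21_000))
```

Whether that residual image energy is audible downstream is a separate question, but the difference itself is plain measurement, not mystery.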

Except that's not how this works. Your RAW is not displayable at all. Just to see it and judge what you think you want to discard, you have to throw information out.
I've never once argued against keeping the RAW image. It's what I do.
But it's actually not beyond you. You just need to learn what to look for. What you're doing when you process a RAW file for display is making that choice, either you or your computer, but it's already being made, and quite intelligently.
You've once again missed the point entirely. If the noise is from an instrument, it's probably not an artifact, it's made by the instrument. If the noise is from electronics, it's not part of the acoustic event. If the noise is audible, it's a flaw, a defect, but likely nothing we can do anything about. If the noise is ultrasonic, it's not part of the original event AND something we can eliminate.
That's fine. As when I have my engineer hat on, I make those decisions all the time, and you already know what I'm going to do.
You need to study this a bit more. You're implying that an engineer's surround mix is somehow wrong, and that's incorrect. Surround sound includes a rather well standardized speaker layout that even you can follow. And standard calibration. And even known play levels. With those tools alone you can stand a far, far better chance of hearing what the engineer heard, and that is, in fact, the goal of some higher end home AV systems. That goal does not, and cannot exist in two channel stereo. And we're ignoring all the rather important issues of phantom imaging (or the lack of). I'm surprised that you don't understand all of this given your connection with your employer.
And I want to over capture too, to preserve as much of the information of the original event as possible. I don't want to capture more of what wasn't there in the first place.
You might consider not doing any more imaging analogies, you don't seem to understand printing or image processing. I get what you're trying to say, but your analogy is horrendously out of touch with reality.
Here's where we differ a lot. You don't seem to feel comfortable with your understanding of what is and is not audible. In the tape days it was easily provable at any point in time that the system was not perfect or even adequate for capturing the original. Anyone making the claim above was deluded (though clearly very happy with his gear). The medium was vastly worse than its input signal. Today we can still easily prove what's audible and what's not. Our digital medium is also measurable, provable, and verifiable as to its efficacy and deficiencies. You seem to think that knowledge doesn't exist. So you slap more resolution on it as a solution, when it solves nothing, and creates further potential problems.

You admit that the production process is out of your control. So, then, why are we even arguing? You'll take what we give you and like it...or hate it (I'm sure there's no neutral).

Pretty sure we've both bought multiple versions of material hoping for something better or different. That goes back to the tube/vinyl days, nothing new there. One of my favorite pieces of music was recorded in the 1950s, and the original RCA pressings were only fair. I bought several pressings, none were good. Decades later I got the CD. Guess what? The same distortion I didn't like on vinyl was perfectly preserved in bits! They pushed the record level into tape saturation.

Ultimately you have two choices: take it or leave it. Hey! That's a binary decision! You should have no trouble with it.
 
Nov 21, 2017 at 11:31 AM Post #2,682 of 3,525
I'm not sure exactly what you mean. I have several image editing programs that will happily display the RAW images from all of my cameras.... as will most of my image viewers.
And, while my Nikons have a pretty good dynamic range, a good modern TV can match it (DSLRs don't tend to have an excessively wide dynamic range - although some of the new video cameras are much better).
But, yes, I always convert them to something else "for distribution".... because a lot of devices can't display RAW.... and, playing the part of "the recording engineer", it is my decision to make.
(And there are also a few situations where no current camera can handle the dynamic range - then we use HDR.)

What he is trying to explain is that there is a background process that is converting your RAWs for display. Those image editing programs are displaying a conversion of the image that they've generated when you imported them. At no point can you see a RAW file, because all monitors on cameras, image editors, and even the drivers you can get to just view them in Windows, are performing a conversion and showing you a limited amount of the data contained in the RAW image.
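For anyone curious what that background conversion minimally involves, here is a sketch on synthetic data. The Bayer layout, 14-bit values, and white-balance gains are assumptions for illustration; real converters (in-camera JPEG engines, Lightroom, dcraw and friends) are vastly more sophisticated, but every one of them performs steps like these before you ever "see" the file.

```python
import numpy as np

# Synthetic 14-bit RGGB sensor data stands in for a real RAW mosaic
rng = np.random.default_rng(0)
raw = rng.integers(0, 2**14, size=(4, 4)).astype(float)

def bin_demosaic(raw):
    # Collapse each 2x2 RGGB block into one RGB pixel (half resolution):
    # the crudest possible demosaic, good enough to show the principle.
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2
    b = raw[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)

rgb = bin_demosaic(raw) * np.array([2.0, 1.0, 1.5])  # illustrative WB gains
rgb = np.clip(rgb / rgb.max(), 0.0, 1.0)             # map into display range

# Standard sRGB transfer curve: the step that finally makes the data
# "viewable" -- by this point plenty of decisions have been baked in.
srgb = np.where(rgb <= 0.0031308,
                12.92 * rgb,
                1.055 * rgb ** (1 / 2.4) - 0.055)
print(srgb.shape)   # (2, 2, 3): a displayable half-resolution image
```

Every one of those steps (demosaic strategy, white balance, normalisation, tone curve) is a choice someone or something made before the image hit the screen.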

With regard to Nikon, what are you even talking about? No, a TV cannot match a D850, D750, D810, D500, D7500, D7200, or even a D3300 level of dynamic range. Frankly, you wouldn't want them to. The contrast level would be too low; the color grading and contrast decisions that go into image and video processing make the video a lot more appealing to actually look at.

In regards to video cameras, the Red Helium 8k has 15.2 stops of dynamic range at base ISO, which bests any current Nikon (by less than half a stop of dynamic range at base ISO - 14.8 for the D850), but that is basically the king of dynamic range. The Red Epic ties the D850, no better. I don't think anything from any company has better DR than the Helium 8k - though there are other things that make some cameras perhaps more desirable to some, the Helium is a monster for pure image quality. I mean, once you get all the necessary stuff to use it, you're north of $100,000, but you gotta pay to play!
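(For reference, "stops" here just means log2 of the usable signal range. A toy calculation with hypothetical sensor figures - round numbers, not measured specs for any camera named above:)

```python
import math

# Engineering dynamic range in "stops" = log2(largest recordable signal
# / noise floor). Both electron counts below are hypothetical examples.
full_well_electrons = 80_000    # photosite saturation capacity
read_noise_electrons = 2.5      # noise floor at base ISO

stops = math.log2(full_well_electrons / read_noise_electrons)
print(round(stops, 1))   # -> 15.0 stops for these example figures
```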

But eyes and ears aren't comparable, and analogies between them always have a tendency to confuse the issue, not clarify it, when it comes to hifi.
 
Nov 21, 2017 at 12:01 PM Post #2,683 of 3,525
Digital audio is a lot like ventriloquism...

Big long irrelevant spiel about whether lighter composition dummies are better than heavier ones carved of wood "is it the same for headphone construction?", throwing your voice and perceived directionality of sound, moving your lips as an analogy for excursion of transducers, the philosophy of illusion and how it relates to subjective perception of sound and the placebo effect, audibility of artifacts and substituted consonants like B, P and V and F, speed and pacing of patter and voice switching and how it compares to micro timing in sampling rates, open and closed mouths are they like open and closed headphones? etc.

Someone replies explaining some misconception about ventriloquism without any reference to audio.

More irrelevant analogies and sidetracks into arcane ventriloquilia.

More explaining

Rinse and repeat 5X.

See! I can do it too!

But eyes and ears aren't comparable

They are both cute!

[attached: two photos of big-eared animals, including a little fox]
 
Nov 21, 2017 at 12:05 PM Post #2,684 of 3,525
I'm sorry, but my point there was simply to point out that your assertion that "all cameras have filters" was in fact simply incorrect.

However, since you bring it up, I will take this opportunity to clarify Emotiva's "company line" on this subject.
(I should also mention that, since we really don't have a company line on the subject, what I've been posting here are mainly my opinions on the subject.)

We here at Emotiva don't produce or sell music... we sell hardware that reproduces music.
Therefore, we're just as happy if you buy the latest high-res remasters, or keep playing your old CDs, or subscribe to Tidal instead.
And we design the hardware we sell based on BOTH sound engineering principles AND the needs and desires of our customers.
Therefore, since many of our customers want to play high-res files and downloads, we design our equipment to do so...... and do it very well.
We're not going to try very hard to convince our customers that they will or will not hear a difference; whatever files they choose to listen to will sound good on our DACs.
You may even get a different opinion depending on who you talk to..... but we all agree that our goal is to make WHATEVER music you decide to play sound as good as it possibly can.
And you will find plenty of specs to back up the claim that the performance of our hardware is very good.

Individually, we have a wide variety of tastes and preferences, and that extends to music formats.
Some of us find high-res files to sound better often enough that we buy them; others are satisfied to stream music using Spotify; and yet others still like physical CDs; (a few even still like vinyl).
(So we're not really here to convince anyone either way.)

However, I do want to stress one point....... which seems to get lost in this discussion.
Our customers are NOT "paying extra for something they don't need".
You may pay extra when you purchase a high-res download.... but you're NOT paying extra to own a DAC that can play it.
All of the current high-quality DAC chips support 24/192k; and that includes the ones we use in our current products (and now most of the latest chips support 384k).
However, you're not paying extra for that high-res capability... it's simply "standard" on this year's good DAC chips.
(There are some really low-end DAC chips that don't support over 48k, but most of them are undesirable for other, and far more compelling, reasons.... and the cost difference is on the order of $5.)
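(For context on what the bits themselves buy, the standard quantisation-noise result for an ideal N-bit converter, SNR ≈ 6.02·N + 1.76 dB, is easy to evaluate:)

```python
# Best-case SNR of an ideal N-bit converter, from the standard
# quantisation-noise formula SNR = 6.02*N + 1.76 dB. Real converters
# always land somewhat below these theoretical ceilings.
def ideal_snr_db(bits):
    return 6.02 * bits + 1.76

print(round(ideal_snr_db(16), 2))   # 98.08 dB
print(round(ideal_snr_db(24), 2))   # 146.24 dB
```

Whether that extra ~48 dB of theoretical headroom is usable on playback is, of course, exactly what this thread is arguing about.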

 
Nov 21, 2017 at 12:08 PM Post #2,685 of 3,525
What he is trying to explain is that there is a background process that is converting your RAWs for display. Those image editing programs are displaying a conversion of the image that they've generated when you imported them. At no point can you see a RAW file, because all monitors on cameras, image editors, and even the drivers you can get to just view them in Windows, are performing a conversion and showing you a limited amount of the data contained in the RAW image.

With regard to Nikon, what are you even talking about? No, a TV cannot match a D850, D750, D810, D500, D7500, D7200, or even a D3300 level of dynamic range. Frankly, you wouldn't want them to. The contrast level would be too low; the color grading and contrast decisions that go into image and video processing make the video a lot more appealing to actually look at.

In regards to video cameras, the Red Helium 8k has 15.2 stops of dynamic range at base ISO, which bests any current Nikon (by less than half a stop of dynamic range at base ISO - 14.8 for the D850), but that is basically the king of dynamic range. The Red Epic ties the D850, no better. I don't think anything from any company has better DR than the Helium 8k - though there are other things that make some cameras perhaps more desirable to some, the Helium is a monster for pure image quality. I mean, once you get all the necessary stuff to use it, you're north of $100,000, but you gotta pay to play!

But eyes and ears aren't comparable, and analogies between them always have a tendency to confuse the issue, not clarify it, when it comes to hifi.
That's also my understanding. Cameras have been challenging the eye's dynamic range ability for a few years now (also with a linearity film couldn't reach), while TVs are still struggling to produce a dark black and a white that doesn't bleed over 10 pixels.
Also true about RAW: it cannot be displayed without a color profile being applied first. Usually what we see is what the camera would pick for a JPG version of it (or whatever setting we have in our app). There is a good deal more data available for manipulation, and that's the greatness of RAW images, but the display is what it is.
 
