24bit vs 16bit, the myth exploded!
Dec 9, 2019 at 7:33 PM Post #5,356 of 7,175
1. Yes, it means how much louder a piece of music designed for consumer consumption can be made before distortion becomes excessive. But you apparently don't know what it means, because ...
1a. You are describing what YOU are "able to do" with YOUR equipment/volume knob, not what all consumers can do with their equipment.

Thanks for proving my point but enough is enough and it's OFF-TOPIC!!!!

G

You just love to twist words. Your reputation as a mind-gamer and manipulator of words is already all over the internet, don't worry.

I'll get Barry, Bob, and Ethan back on here, and straighten all of you out!
 
Dec 9, 2019 at 7:35 PM Post #5,357 of 7,175
The best sounding Queen releases I've heard are the Night At The Opera SACD and the Queen Video Hits 1 & 2 DVDs. Have you heard them? I'm told there is a Japanese 15 album SACD series that is better than the CDs on some albums. I don't have that myself though. The Video Hits DVDs are a revelation. Fantastic multichannel mix. Couldn't sound better.


Of course you hawk the latest versions of Queen's albums and other material - you're a salesman!

Anyone who proclaims the original to sound better than the remaster is 'wrong' in yours and Gregorio's estimation.
 
Dec 9, 2019 at 7:51 PM Post #5,358 of 7,175
The Queen Video Hits 1 & 2 DVDs were released in 2002. The SACD was released in Japan originally around 2010. These were fresh remixes done for the audiophile market. The first CD release of Night At The Opera was in 1989. It isn't the best sounding version by a long shot. Until the SACD came out, the MFSL CD released in 1992 was the best. This album has been remastered for just about every medium and market multiple times over the years. I'm citing the versions I have heard that sound the best. I happen to know a little bit about the various releases of this music. I have the original LP release, the MFSL LP, the original CD release, the MFSL CD and the SACD of Night At The Opera. Each one has been an improvement on the one before.

You've got the idea that the first release of everything was the best sounding and remasters are all bad. You flat out don't know what you're talking about. It's a case by case basis depending on the album. You can't generalize like that.

Also, I wasn't talking to you. I was talking to the poster who was interested in the best sounding releases of Queen.
 
Dec 10, 2019 at 3:01 AM Post #5,359 of 7,175
[1] You just love to twist words.
[2] Your reputation as a mind-gamer and manipulator of words is already all over the internet, don't worry.
[3] I'll get Barry, Bob, and Ethan back on here, and straighten all of you out!
[4] Of course you hawk the latest versions of Queen's albums and other material - you're a salesman!

1. How does being a hypocrite help your case, especially in this subforum? You then wrote: "Anyone who proclaims the original to sound better than the remaster is 'wrong' in yours and Gregorio's estimation." - Which isn't just twisting words, it's an outright lie!
2. Sure it is.
3. Of course you will.
4. Of course he is, bigshot is obviously a salesman for Queen's record label .... and apparently also a salesman for all the other record labels of all the other remasters he's mentioned!

Why do you always choose this route? Thread-cr@p any post/thread you think you can bend to your hobby horse of remasters, regardless of how off-topic. Base your arguments on ignorance and falsehoods, while dismissing the actual facts, regardless of the number of times and different ways they're explained to you. Dismiss those who know the facts as "salesmen" for every label ever to have issued a remaster. Resort to insults, hypocrisy or outright lies, and progressively make yourself appear more and more foolish. And you don't just do that here; according to you, you've done exactly the same thing on a number of other (pro audio) sites, with unsurprisingly exactly the same results! Are you doomed to forever making yourself appear ignorant and foolish, effectively being a thread cr@pper/troll? Will you never learn, or at the very least change your approach, so you don't keep getting this exact same result?

For your own benefit, as well as the benefit of everyone else and this thread: ENOUGH, post on-topic or don't post at all!!!

G
 
Dec 10, 2019 at 10:06 AM Post #5,360 of 7,175
modo violence:
It's been going on for 2 pages; the OP asked several times to get back on topic and should be somewhat in charge of his own thread (although @gregorio trying to have the last word on the off-topic is obviously going to trigger people to reply. As the philosopher said, "let it go! let it go!").
Maybe there should be a limit for off-topics: after a given limit, like 2 or 3 pages, I just mindlessly delete stuff and kick people out if they keep on feeding the same off-topic?

Anyway. Back to bit depth please.
 
Dec 10, 2019 at 6:34 PM Post #5,361 of 7,175
I deleted 11 posts.
 
Dec 10, 2019 at 11:55 PM Post #5,363 of 7,175
Also, I wasn't talking to you. I was talking to the poster who was interested in the best sounding releases of Queen.

Let's also not forget the soundtrack to Bohemian Rhapsody. I bought it on streaming UHD, and the iTunes version is Dolby Atmos. The Live Aid scenes are really something to behold; there's a great sense of ambiance, with panning around in "3D" (audience and acoustics coming from around and above). It makes a great comparison to a Queen BD I have of "Rock Montreal", where the main feature was filmed on 35mm (I'm not sure what the source audio was). The visual and audio dynamics of the main feature are great. It also includes a recording of Live Aid, where the video is pretty lacking and looks to be upscaled analog video... but the audio is still good, and it's worth it for any Queen fan to compare that original recording to the modern recreation.

When it comes to the whole topic of this thread, I think it's another academic subject that has no final answer. My favorite ever Sony Discman was 1bit processing, and I have also heard DACs I like that are 24bit upscaling: there are all different methods of processing, and you can have great sound at 1bit, 16bit, 24bit, or 32bit. I'm more of an authority on visual computer graphics, and I see that the premise of this thread was that it's easier to see bit depth with images. That's pretty simplistic: I think that reference was to early computers that would display monochrome (1bit) vs 4bit on up to what jpeg uses (8bit per channel). Now in this age, people are starting to understand HDR (much to my enjoyment as a photographer). Consumers are now arguing about 10bit per channel HDR vs Dolby Vision's 12bit per channel color depth (where there is "tone mapping": adjusting contrast to the native range), so visuals rightly need precedence. However, with my background in photography, I do like a full contrast range, and I find that some color grading for HDR tends to be too high contrast (perhaps something analogous to compressed audio).
 
Dec 11, 2019 at 12:49 AM Post #5,364 of 7,175
Let's also not forget the soundtrack to Bohemian Rhapsody.

thanks for the tip. I haven’t seen that yet. I’ll check it out!
 
Dec 11, 2019 at 9:41 AM Post #5,365 of 7,175
[1] When it comes to the whole topic of this thread, I think it's another academic subject that has no final answer.
[2] My favorite ever Sony Discman was 1bit processing, and I have also heard DACs I like that are 24bit upscaling: there are all different methods of processing, and you can have great sound at 1bit, 16bit, 24bit, or 32bit.
[3] I'm more of an authority on visual computer graphics, and I see that the premise of this thread was that it's easier to see bit depth with images. That's pretty simplistic: I think that reference was to early computers that would display monochrome (1bit) vs 4bit on up to what jpeg uses (8bit per channel). Now in this age, people are starting to understand HDR ...

1. You're of course free to think whatever you want but the actual reality/facts prove that there IS a "final answer". I can understand how/why you could think there isn't though, which is why I'll respond to point 3 before point 2 ...

3. There are many similarities between digital photography and digital audio, as well as many similarities between how we see/perceive images and how we hear/perceive sound and therefore, we can potentially have many valid analogies between the two. However, there are also many differences between the two (some of which are quite profound) and therefore, potentially many analogies between them that are only partially valid and some that are quite profoundly invalid! I believe this is the trap you may have fallen into, which explains your conclusion of there being "no final answer". Unfortunately, I am NOT an authority with digital photography/computer graphics, so my terminology and description of digital imaging may not be entirely correct but I'm going to try to give a couple of examples:

One of the biggest differences is what we're actually converting into digital data to start with. With photography our source "format" is light, which consists of waves/packets (photons) of electromagnetic energy that we can convert into digital data with sensors. With audio, our source "format" is sound pressure waves, which are mechanical/kinetic energy travelling through a medium, BUT we can only convert electromagnetic energy into digital data with sensors, not mechanical energy, so we CANNOT convert sound into digital data directly! The solution is simple in theory and didn't need discovering because it already existed nearly 150 years ago, 50 years before digital audio was first conceived: We first convert this mechanical energy into electromagnetic energy (specifically electricity), a process called transduction, then we can convert this electrical signal to digital data, and of course do the reverse conversion and transduction to reproduce the sound waves. However, this has consequences/limitations compared to digital imaging (which doesn't involve transduction), because transduction is highly inefficient (due to the laws of motion/kinetics) and therefore requires relatively massive amounts of amplification, which in turn causes even more limitations (due to other laws of physics, such as thermal noise).

Another major difference is the response of our eyes versus our ears, for example: Our ears have a frequency response range of about 20 kHz and can resolve that range into about 10,000 different pitches. Our eyes have a frequency response range of about 320 THz and can resolve that range into about 10,000,000 different colours. So 16bit, which can represent ~65,000 different colours, is about 150 times fewer colours than the human eye can differentiate but about 6.5 times more pitches than the human ear can differentiate. So 16bit is definitely "low-res" for the human eye, while 24bit, with ~16,000,000 colours, is about 1.6 times greater than required for visual "hi-res". However, 16bit for the human ear is already 6.5 times greater than required for audio "hi-res"! This isn't an entirely fair comparison though, because we do not use bits to directly represent the frequencies (pitches/colours) in digital audio but to represent the amplitude of the transduced electrical voltage (from which frequency is derived), which is another of the differences between digital audio and digital imaging. So, as mentioned above, this would effectively represent a "partially valid" analogy and demonstrates the dangers of digital visual and audio analogies! :)
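A quick back-of-envelope check of those ratios (this is just my illustration; the ~10,000 pitches and ~10,000,000 colours are the assumed figures from the paragraph above, not measured constants):

```python
# Hypothetical arithmetic check, using the post's own assumed figures
pitches_ear = 10_000        # distinguishable pitches claimed for the ear
colours_eye = 10_000_000    # distinguishable colours claimed for the eye

print(colours_eye / 2**16)  # ~152.6 -> "about 150 times fewer" than the eye needs
print(2**16 / pitches_ear)  # ~6.55  -> ~6.5x more steps than the ear needs
print(2**24 / colours_eye)  # ~1.68  -> 24bit slightly exceeds the eye's colour count
```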

2. There is no 24bit upscaling in digital audio! Maybe in digital imaging there is, maybe you can interpolate colours between the ~65,000 values available and write those colour values to the 16,000,000 (24bit) available values, but digital audio doesn't work that way. If you "upscale" 16bit audio to 24bit, nothing changes; there is no "upscaling", you just get 16bit audio in a 24bit container, with the extra 8 bits (LSBs) simply padded with zeros.
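That zero-padding point can be sketched in a few lines of Python (my illustration, not code from any real converter; the function name is mine):

```python
# Placing a 16-bit PCM sample into a 24-bit container: shift it up 8 bits.
# The 8 new least-significant bits are zeros, so no information is added --
# it is still exactly the same 16-bit audio.

def to_24bit_container(sample_16: int) -> int:
    """Put a signed 16-bit sample into the top 16 bits of a 24-bit word."""
    assert -32768 <= sample_16 <= 32767
    return sample_16 << 8  # 8 zero-padded LSBs

sample = 12345                       # some 16-bit sample value
padded = to_24bit_container(sample)  # its 24-bit representation
assert padded & 0xFF == 0            # the extra LSBs are all zero
assert padded >> 8 == sample         # shifting back recovers the original exactly
```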

G
 
Dec 11, 2019 at 9:43 AM Post #5,366 of 7,175
Let's also not forget the soundtrack to Bohemian Rhapsody. I bought it on streaming UHD, and the iTunes version is Dolby Atmos. The Live Aid scenes are really something to behold; there's a great sense of ambiance, with panning around in "3D" (audience and acoustics coming from around and above). It makes a great comparison to a Queen BD I have of "Rock Montreal", where the main feature was filmed on 35mm (I'm not sure what the source audio was). The visual and audio dynamics of the main feature are great. It also includes a recording of Live Aid, where the video is pretty lacking and looks to be upscaled analog video... but the audio is still good, and it's worth it for any Queen fan to compare that original recording to the modern recreation.

When it comes to the whole topic of this thread, I think it's another academic subject that has no final answer. My favorite ever Sony Discman was 1bit processing, and I have also heard DACs I like that are 24bit upscaling: there are all different methods of processing, and you can have great sound at 1bit, 16bit, 24bit, or 32bit. I'm more of an authority on visual computer graphics, and I see that the premise of this thread was that it's easier to see bit depth with images. That's pretty simplistic: I think that reference was to early computers that would display monochrome (1bit) vs 4bit on up to what jpeg uses (8bit per channel). Now in this age, people are starting to understand HDR (much to my enjoyment as a photographer). Consumers are now arguing about 10bit per channel HDR vs Dolby Vision's 12bit per channel color depth (where there is "tone mapping": adjusting contrast to the native range), so visuals rightly need precedence. However, with my background in photography, I do like a full contrast range, and I find that some color grading for HDR tends to be too high contrast (perhaps something analogous to compressed audio).
Bits in a picture are allocated in a different way than bits in PCM, so reducing the number of bits has a radically different impact, and "bit" just becomes a false friend to anybody who isn't already familiar with how each digital system operates.
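A rough sketch of that difference (my illustration, assuming an ideal quantizer): in PCM the bits encode each sample's amplitude, so cutting bits mainly raises the quantization-noise floor rather than removing "colours". Measuring the SNR of a quantized sine shows the familiar ~6 dB-per-bit behaviour:

```python
# Measured SNR of a full-scale sine quantized to a given bit depth
import math

def quantized_sine_snr_db(bits: int, n: int = 48000) -> float:
    """SNR in dB of an ideally quantized full-scale sine (n samples)."""
    step = 2.0 / (2 ** bits)          # quantizer step over the [-1, 1] range
    sig = noise = 0.0
    for i in range(n):
        x = math.sin(2 * math.pi * 997 * i / n)
        q = round(x / step) * step    # quantize sample to the nearest level
        sig += x * x
        noise += (x - q) ** 2
    return 10 * math.log10(sig / noise)

for b in (8, 16):
    print(b, "bits:", round(quantized_sine_snr_db(b), 1), "dB")
# Roughly 6 dB per bit: ~50 dB at 8 bits, ~98 dB at 16 bits
```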
 
Dec 11, 2019 at 9:45 AM Post #5,367 of 7,175
Tell me more... I recall he piked out on James Randi challenge on cables but haven't heard of this one.

Sorry for the late response with this one, and I hope it's not too off-topic, castleofargh.
Although I find Michael Fremer to be one of the most obnoxious loudmouths in the entire audio world, I do actually mostly side with him on those two issues:

As far as I could read, the cable challenge with James Randi ended because Pear Audio wasn't willing to provide the cables, and no one else was willing to provide the Pear Cables either, and apparently James Randi didn't want to spend the $7000 on them himself. Fremer then suggested they use his own Tara Labs cables ($25,000). Randi first considered accepting this suggestion, but then in the end declined and announced that the challenge was over.
So Fremer actually didn't weasel out of it, as far as I could understand. I think Randi had been told that the cables could change the sound, which is true - cables can change the volume level or the frequency response. Ethan Winer also mentioned this in his Null Tester video on Youtube, and someone on Hydrogen Audio also successfully ABX'ed two sets of speaker cables; his subsequent measurement showed a marked difference in frequency response. The Audio Critic also published measurements of cables that showed different frequency responses.
So, I mention this because I think that what should have happened was that James Randi gave Fremer the chance to test the Monster cables against his Tara Labs, provided they had first measured the cables and found that they measured the same within audibility for frequency response, volume level, capacitance, inductance and resistance. Remember that Fremer often brags about how he's able to hear things that can't be measured. Such a test could prove or disprove it.

As for the other story, Fremer has recounted this several times, as he was apparently quite traumatized by it. I wish he had submitted himself to a mental institution to get better and had stayed there since.
All jokes aside, apparently he and John Atkinson were blind testing amplifiers at an AES event, and Fremer got 5 amplifiers out of 5 correct. John Atkinson got 4 out of 5 correct. Stanley Lipshitz then proclaimed that they were just "lucky coins" and dismissed their results.
If you want to see it in Fremer's own words, here's one of his rundowns (I've seen him tell the story, full of self-pity, several other times):

https://www.stereophile.com/content/blind-listening-letters

Again, what I think should have happened was further testing. As far as I understand, each amplifier was only played once, in which case you could of course be a lucky coin. If they had done an ABX test with 16 trials for each amplifier, it would have been very difficult to dismiss someone as a lucky coin if they had scored very well.
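The "lucky coin" arithmetic can be sketched with a binomial tail (my illustration, assuming a fair-guessing model with p = 0.5 per trial):

```python
# Chance of scoring k or more correct out of n trials by pure guessing
from math import comb

def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """Binomial tail probability P(X >= k) for n independent trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(p_at_least(5, 5))    # one shot per amp, 5/5 by luck: 1/32, ~3.1%
print(p_at_least(15, 16))  # 15 or more of 16 ABX trials by luck: ~0.026%
```

So a 5/5 run on single presentations is plausible luck, while 15/16 in a proper ABX run would be very hard to dismiss that way.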
Personally, I believe there are audible differences between many amplifiers (but not all), and this is most likely due to an altered frequency response because of the speaker load. As far as I know, very few amplifiers will remain flat within 0.1 dB when subjected to a real-world speaker load. And of course speaker loads differ from speaker to speaker. That would explain a lot about so-called "synergy".
There might also be things other than frequency response that affect the sound of an amplifier. I can't say. But Bob Carver's amplifier challenge with Stereophile was a very interesting read, as he found other sources of audible differences, although he was trying to emulate a tube amp, so distortion was a big issue, and surprisingly phase seemed to be as well.
I can send you a link to this if you like.

What I found very, very disheartening about these two stories is that it also shows that objectivists can be as stubborn and close-minded as subjectivists :triportsad:.
 
Dec 11, 2019 at 12:01 PM Post #5,368 of 7,175
1. You're of course free to think whatever you want but the actual reality/facts prove that there IS a "final answer". I can understand how/why you could think there isn't though, which is why I'll respond to point 3 before point 2 ...

3. There are many similarities between digital photography and digital audio, as well as many similarities between how we see/perceive images and how we hear/perceive sound and therefore, we can potentially have many valid analogies between the two. However, there are also many differences between the two (some of which are quite profound) and therefore, potentially many analogies between them that are only partially valid and some that are quite profoundly invalid! I believe this is the trap you may have fallen into, which explains your conclusion of there being "no final answer". Unfortunately, I am NOT an authority with digital photography/computer graphics, so my terminology and description of digital imaging may not be entirely correct but I'm going to try to give a couple of examples:

One of the biggest differences is what we're actually converting into digital data to start with. With photography our source "format" is light, which consists of waves/packets (photons) of electromagnetic energy that we can convert into digital data with sensors. With audio, our source "format" is sound pressure waves, which are mechanical/kinetic energy travelling through a medium, BUT we can only convert electromagnetic energy into digital data with sensors, not mechanical energy, so we CANNOT convert sound into digital data directly! The solution is simple in theory and didn't need discovering because it already existed nearly 150 years ago, 50 years before digital audio was first conceived: We first convert this mechanical energy into electromagnetic energy (specifically electricity), a process called transduction, then we can convert this electrical signal to digital data, and of course do the reverse conversion and transduction to reproduce the sound waves. However, this has consequences/limitations compared to digital imaging (which doesn't involve transduction), because transduction is highly inefficient (due to the laws of motion/kinetics) and therefore requires relatively massive amounts of amplification, which in turn causes even more limitations (due to other laws of physics, such as thermal noise).

Another major difference is the response of our eyes versus our ears, for example: Our ears have a frequency response range of about 20 kHz and can resolve that range into about 10,000 different pitches. Our eyes have a frequency response range of about 320 THz and can resolve that range into about 10,000,000 different colours. So 16bit, which can represent ~65,000 different colours, is about 150 times fewer colours than the human eye can differentiate but about 6.5 times more pitches than the human ear can differentiate. So 16bit is definitely "low-res" for the human eye, while 24bit, with ~16,000,000 colours, is about 1.6 times greater than required for visual "hi-res". However, 16bit for the human ear is already 6.5 times greater than required for audio "hi-res"! This isn't an entirely fair comparison though, because we do not use bits to directly represent the frequencies (pitches/colours) in digital audio but to represent the amplitude of the transduced electrical voltage (from which frequency is derived), which is another of the differences between digital audio and digital imaging. So, as mentioned above, this would effectively represent a "partially valid" analogy and demonstrates the dangers of digital visual and audio analogies! :)

2. There is no 24bit upscaling in digital audio! Maybe in digital imaging there is, maybe you can interpolate colours between the ~65,000 values available and write those colour values to the 16,000,000 (24bit) available values, but digital audio doesn't work that way. If you "upscale" 16bit audio to 24bit, nothing changes; there is no "upscaling", you just get 16bit audio in a 24bit container, with the extra 8 bits (LSBs) simply padded with zeros.

G

I don't believe I've fallen into a trap, as you put it. I don't know why you always assume other people are ignorant of the facts, and that there's no clarity in what they're trying to write. For example, if you took my post in context, you would see that I talked about DACs which process audio in different manners (using 1bit, 16bit, or 24bit sampling). I implied they can all be valid in producing audio that's subjectively good and not limiting with respect to our hearing.

When it comes to photography, there are a few things you're overlooking. I'm not sure why you say audio needs to be converted to digital to begin with: photography has to as well, and it's more complex. The most common digital sensor uses a Bayer filter (an array of 1 red, 1 blue, and 2 green filters covering each photosite). Every color pixel has sub-pixels of photodiodes (a photosite), which, similar to a microphone, conduct electricity based on the intensity of light (instead of sound). Black and white photography would be more analogous to sound recording, as it doesn't need the use of filters (or with color film, there were three separate photosensitive layers). Behind the sensor is an analog to digital converter. Over the years, digital cameras have improved in both resolution and dynamic range (with photosites shrinking, and improvements in circuit designs of the ADC). Also, like audio, image files are now judged by dynamic range. In the early days of computer images, computer hardware was pretty limited, and there could be color palettes of 4, 16, 32, or 256 colors total. But a millennial wouldn't have any experience with the type of images that couldn't be photo-realistic. A standard jpeg has a possibility of 16 million colors....but only 8 stops of light (or 256 shades of gray). That's pretty limiting for exposing a scene that might have a higher dynamic range (for example, a sunny day in which you want to expose both the sky and subjects in the shade). The human eye is thought to be capable of seeing up to 20 stops of light. Current cameras can expose 14 stops of light (or 16,384 shades of light) in one exposure. For computer raytrace rendering, 32bit per channel files are used for more realistic simulations of light (which can get up to 4.29 billion shades of tone). There has been processing (either automatic or manual adjustments) to shrink higher-DR images to 8bpc space (which was the only standard for monitors).
Now TVs and monitors can get up to 10bpc space, and there is automatic or manual processing to shrink 16bpc, 14bpc, and 12bpc files to that as well. Just as mixing influences how the sound is heard, so too the processing (known as tone mapping) can greatly influence how an image looks.
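The doubling behind those shade counts is easy to sketch (my illustration; it treats each extra bit per channel as a straight doubling of levels and ignores gamma/transfer curves, so "stops" and "bits" only line up loosely):

```python
# Each bit per channel doubles the number of tonal levels: 2**bits
for bits in (1, 8, 10, 12, 14, 32):
    print(f"{bits:2d} bits/channel -> {2**bits:,} levels")
# e.g. 8 -> 256, 14 -> 16,384, 32 -> 4,294,967,296 (the ~4.29 billion above)
```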
 
Dec 11, 2019 at 1:45 PM Post #5,369 of 7,175
If someone can actually hear things that other humans can't, they shouldn't be doing stunts with magicians or doing it at audio sales events. They should be submitting themselves for scientific testing to determine exactly what they can hear that other people can't. Then perhaps we could figure out how to make equipment audibly transparent for the last .0001%.
 
Dec 12, 2019 at 6:10 AM Post #5,370 of 7,175
[1] I don't know why you always assume other people are ignorant of the facts, and
[1a] that there's no clarity in what they're trying to write. For example, if you took my post in context, you would see that I talked about DACs which process audio in different manners (using 1bit, 16bit, or 24bit sampling). I implied they can all be valid in producing audio that's subjectively good and not limiting with respect to our hearing.
[2] When it comes to photography, there are a few things you're overlooking.
[3] I'm not sure why you say audio needs to be converted to digital to begin with:
[3a] photography has to as well, and it's more complex.
[3b] Every color pixel has sub-pixels of photodiodes (a photosite), which, similar to a microphone, conduct electricity based on the intensity of light (instead of sound). ... Behind the sensor is an analog to digital converter.
[4] Over the years, digital cameras have improved in both resolution and dynamic range (with photosites shrinking, and improvements in circuit designs of the ADC). ... In the early days of computer images, computer hardware was pretty limited, and there could be color palettes of 4, 16, 32, or 256 colors total.
[5] A standard jpeg has a possibility of 16 million colors....but only 8 stops of light (or 256 shades of gray). That's pretty limiting for exposing a scene that might have a higher dynamic range (for example, a sunny day in which you want to expose both the sky and subjects in the shade). The human eye is thought to be capable of seeing up to 20 stops of light. Current cameras can expose 14 stops of light (or 16,384 shades of light) in one exposure.

1. I don't necessarily assume other people are ignorant of the facts! If someone makes an assertion that is incorrect, I assume EITHER that they're ignorant of the facts, OR that they're not ignorant of the facts but don't really understand them, OR that they're not ignorant of the facts and do understand them but are erroneously dismissing them for some reason. What rational alternative am I missing?
1a. I didn't state your post didn't have some clarity, and I didn't dispute this assertion, only some specific facts within your assertion, your analogy with digital imaging, and your conclusion/assertion of there being "no final answer".

2. More than a few I should imagine!

3. But I didn't say that! What I effectively (tried to) say is that to begin with we do NOT even have audio; we have sound pressure waves that need to be transduced into audio, and only then can it be converted to digital. Therefore:
3a. "No", because photography does not have to be transduced as well. Both digital audio and (as I understand it) digital imaging involve an analogue stage and then an ADC, but a major difference is that digital imaging involves converting between different types of the same form of energy (light and electricity, which are both electromagnetic energy), while digital audio recording involves converting between two different forms of energy (mechanical and electromagnetic).
3b. No, a photodiode is very significantly different to a microphone. As far as I'm aware, photodiodes are microscopic devices with no moving parts (solid state) which operate at the quantum level, converting photons into electrons. Microphones do NOT conduct electricity based on the intensity of sound, they generate electricity based on the "intensity" of movement of a diaphragm. A microphone therefore has to first convert variations in sound pressure into the mechanical motion of a diaphragm and then convert that mechanical motion into electricity. So, we're dealing with relatively huge mechanical devices, subject to all the limitations of the laws of physical motion/transduction, all of which results in relatively massive inefficiency (compared to image sensors) in the generated analogue signal, which is then of course the input for conversion to digital data. In the practical application of digital audio, it's this inefficiency which defines system limitations, not the ADC, DAC or number of bits (beyond 16). A somewhat better analogy with an image sensor would have been a tape recorder, which also converts between different types of the same form of energy (electrical and magnetic), but it's still a rather poor analogy, as tape recorder performance is still reliant on mechanical factors (physical properties of the tape itself, plus friction, tape alignment, motor/speed, etc.).

4. Another significant difference! Over the years, digital audio has changed significantly, but the resultant output resolution and dynamic range have barely changed at all. Even in the earliest days of consumer digital audio (CD), the hardware was capable of 16 bits: near-perfect resolution and a dynamic range in excess of both the limitations imposed in practice by microphones and anything that would be experienced in the real world (at a gig).
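For reference, the usual rule-of-thumb behind that dynamic range claim (an idealised figure for a full-scale sine through a perfect quantizer, not a measurement of any real converter):

```python
# Theoretical dynamic range of n-bit PCM: roughly 6.02*n + 1.76 dB
def pcm_dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

print(f"16bit: {pcm_dynamic_range_db(16):.1f} dB")  # ~98 dB
print(f"24bit: {pcm_dynamic_range_db(24):.1f} dB")  # ~146 dB
```

Real chains land well below these ceilings because of analogue/microphone noise, which is the point being made above.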

5. And another reason to be wary of comparing digital imaging with digital audio, as I've already mentioned. With digital imaging, 16 million colours (24bit) still does not cover all the capabilities of the human eye. If, as you say, it only provides 8 stops and the human eye is capable of 20 stops, then it's still a long way from the capabilities of the human eye. For the human ear though, 16bit is already beyond its capabilities, and a long way beyond "comfortable".

G
 
