Are CD's compressed?
Dec 18, 2011 at 9:22 PM Post #46 of 78
Quote:
I've got an idea that relates to the original post, and perhaps it's incorrect:
 
Every digital file is lossy from the original recording.
 
For a digital format to be lossless, it would have to have infinite bit depth and an infinite sampling rate... I'm not sure about the bitrate though, but 2 channels should be enough for our two ears if recorded binaurally.
 
Thoughts?

 
Incorrect. Read up on Nyquist. In order to be lossless up to a certain maximum volume and a certain frequency, you need a bit depth of (dynamic range in dB)/6 and a sampling rate of twice the highest frequency. We only need a dynamic range equal to the dynamic range of the song (96 dB, i.e. 16 bits, is enough for all known recordings) and a sampling rate that captures all audible frequencies (44.1 kHz does that). And if you want to talk about lossless vs. the original recording, rejoice. Microphones aren't going to capture any frequencies higher than we can reproduce with digital audio, even if you have to go up to a 96 or 192 kHz sampling rate, and they definitely won't capture a dynamic range greater than 24 bits.
 
If you want to argue semantics then yes, to have "true" lossless you need infinite bit depth and sampling rate, but on the other hand analog storage methods can't be truly lossless either.
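The two rules of thumb above (roughly 6 dB of dynamic range per bit, and a sampling rate of at least twice the highest frequency, per Nyquist) are easy to sanity-check. A minimal Python sketch, with hypothetical helper names:

```python
import math

def bits_for_dynamic_range(db: float) -> int:
    """Bits needed for a given dynamic range, at ~6 dB per bit
    (the usual approximation of 20*log10(2) ≈ 6.02 dB per bit)."""
    return math.ceil(db / 6.0)

def min_sample_rate(max_freq_hz: float) -> float:
    """Nyquist: sample at (at least) twice the highest frequency."""
    return 2 * max_freq_hz

print(bits_for_dynamic_range(96))   # 16 — 16 bits covers 96 dB
print(bits_for_dynamic_range(144))  # 24 — 24 bits covers 144 dB
print(min_sample_rate(20_000))      # 40000.0 — so 44.1 kHz leaves headroom above 20 kHz
```

The 44.1 kHz CD rate sits comfortably above the 40 kHz minimum for a 20 kHz audible limit, which leaves room for the reconstruction filter to roll off.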
 
Dec 18, 2011 at 9:23 PM Post #47 of 78


Quote:
 
Incorrect. Read up on Nyquist. In order to be lossless up to a certain maximum volume and a certain frequency, you need a bit depth of (dynamic range in dB)/6 and a sampling rate of twice the highest frequency. We only need a dynamic range equal to the dynamic range of the song (96 dB, i.e. 16 bits, is enough for all known recordings) and a sampling rate that captures all audible frequencies (44.1 kHz does that).
 
If you want to argue semantics then yes, to have "true" lossless you need infinite bit depth and sampling rate, but on the other hand analog storage methods can't be truly lossless either.

16bit/ corrections :)

 
 
 
Dec 18, 2011 at 9:26 PM Post #49 of 78


Quote:
You gotta read my sentence again. dB/6 is how you calculate the bit depth needed for a given dynamic range. 96 dB is 96/6 = 16 bits. 144 dB is 144/6 = 24 bits.

:'( i wanted to sound smart for once.
 
geez. i can't match sound knowledge. i can only do computer hardware knowledge :'(
 
 
 
Dec 19, 2011 at 1:37 AM Post #51 of 78
Quote:
 
Incorrect. Read up on Nyquist. In order to be lossless up to a certain maximum volume and a certain frequency, you need a bit depth of (dynamic range in dB)/6 and a sampling rate of twice the highest frequency. We only need a dynamic range equal to the dynamic range of the song (96 dB, i.e. 16 bits, is enough for all known recordings) and a sampling rate that captures all audible frequencies (44.1 kHz does that). And if you want to talk about lossless vs. the original recording, rejoice. Microphones aren't going to capture any frequencies higher than we can reproduce with digital audio, even if you have to go up to a 96 or 192 kHz sampling rate, and they definitely won't capture a dynamic range greater than 24 bits.
 
If you want to argue semantics then yes, to have "true" lossless you need infinite bit depth and sampling rate, but on the other hand analog storage methods can't be truly lossless either.


Unless you can reproduce sound such that you can't tell the difference between the recording and reality, true lossless has not been achieved.
 
There are still many links in the chain that need fixing; I'm skeptical 16/44.1 is good enough for reality-fi.

 
 
 
Dec 19, 2011 at 1:56 AM Post #52 of 78
In response to those who would set the bit depth at 16: it would seem that there is an infinite number of levels of volume in the real world, even if we can't hear them all. Also, microphones can capture, and computers can show, detail in high frequencies humans can't hear... Since I was talking about theoretical lossless, not being able to hear the difference isn't a counterargument. That said, I wouldn't ever dream of going to extremes of bit depth and sampling rate "just to be sure," as I've already listened to things that I would say are indistinguishable to me from real life. (Audeze LCD-2 attached to a Woo amp, I believe, listening to a single female vocalist with no accompaniment.)
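On the "infinite levels of volume" point: a fixed bit depth gives a finite number of quantization levels, and the ideal signal-to-noise ratio for an N-bit full-scale sine is the standard 6.02·N + 1.76 dB. A small sketch, with hypothetical helper names:

```python
def quant_levels(bits: int) -> int:
    """Number of discrete amplitude levels at a given bit depth."""
    return 2 ** bits

def snr_db(bits: int) -> float:
    """Ideal quantization SNR (dB) for a full-scale sine: 6.02*N + 1.76."""
    return 6.02 * bits + 1.76

print(quant_levels(16))       # 65536 discrete levels
print(round(snr_db(16), 2))   # 98.08 dB for 16-bit
print(round(snr_db(24), 2))   # 146.24 dB for 24-bit
```

So 16 bits already yields 65,536 levels and roughly 98 dB of ideal SNR, which is why the thread treats 96 dB of program dynamic range as covered.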
 
Dec 19, 2011 at 1:58 AM Post #53 of 78


Quote:
Quote:
 
Incorrect. Read up on Nyquist. In order to be lossless up to a certain maximum volume and a certain frequency, you need a bit depth of (dynamic range in dB)/6 and a sampling rate of twice the highest frequency. We only need a dynamic range equal to the dynamic range of the song (96 dB, i.e. 16 bits, is enough for all known recordings) and a sampling rate that captures all audible frequencies (44.1 kHz does that). And if you want to talk about lossless vs. the original recording, rejoice. Microphones aren't going to capture any frequencies higher than we can reproduce with digital audio, even if you have to go up to a 96 or 192 kHz sampling rate, and they definitely won't capture a dynamic range greater than 24 bits.
 
If you want to argue semantics then yes, to have "true" lossless you need infinite bit depth and sampling rate, but on the other hand analog storage methods can't be truly lossless either.


Unless you can reproduce sound such that you can't tell the difference between the recording and reality, true lossless has not been achieved.
 
There are still many links in the chain that need fixing; I'm skeptical 16/44.1 is good enough for reality-fi.


We are indeed quite far from reality-fi, but 16/44.1 is not the culprit. Take the example of a violin in a room: the radiation pattern of the violin is anything but spherical, and the sound wave reflects off multiple surfaces before arriving at the microphone(s). Once you get the recorded signal onto the CD and play it back in your own room, how can you expect your speakers to radiate the sound of the violin in the same specific pattern it had in the original room? How can you expect the reflections happening in your room and the reflections recorded on the disc to magically combine into the reflections the violin created in the room where it was recorded?
 
And what about the acoustical delays of a 50x100 m concert hall? How can you expect to get the same kind of thing in your own room, or expect the mastering engineer to adapt/tweak the recording for your specific room?
 
And that's why reality-fi is still far far away.
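The delay argument is easy to put numbers on: sound travels at roughly 343 m/s, so a reflection whose path is tens of meters longer than the direct sound arrives hundreds of milliseconds later, which no small listening room can recreate. A rough sketch (hypothetical helper name; the 100 m extra path is an illustrative figure for a large hall):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def reflection_delay(extra_path_m: float) -> float:
    """Extra arrival time (seconds) of a reflection whose path is
    extra_path_m longer than the direct sound's path."""
    return extra_path_m / SPEED_OF_SOUND

# A reflection traveling 100 m farther than the direct sound:
print(round(reflection_delay(100.0), 3))  # 0.292 s — almost a third of a second
```

A domestic room's reflections arrive within a few milliseconds instead, so the hall's temporal signature has to be baked into the recording, not reproduced by the playback room.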
 
 
 
Dec 19, 2011 at 2:03 AM Post #54 of 78
Reality-fi is lossy too.
 
Think about it. You're in the back of the balcony. Are you hearing everything the guy in the middle row is? Or the guy in the front row? Or the conductor?

 
Dec 19, 2011 at 2:11 AM Post #55 of 78
There you go, I found the guy who posted about why reality-fi is still far away. A very interesting post addressing the issues of reproducing life-like sounds, and a reminder of what the professional side of audio looks like. Head Injury, I have a feeling you'll like this.

 
Quote:
Fidelity to the original performance is totally out of reach of current technology. This is brilliantly demonstrated in Floyd Toole's book "Sound Reproduction", figure 3.3, page 36, with the directionality of a violin at different frequencies.
 
From 200 to 400 Hz, a violin is omnidirectional: you hear the direct sound plus the sounds reflected off the lateral walls, the wall behind the performer, the floor, and the ceiling. At 425 Hz, however, the violin doesn't emit in the back-down direction, so the reflection off the back wall is weaker, and the secondary reflection that bounces off the floor, back wall, then ceiling is severely attenuated. At 500 Hz, by contrast, that's the dominant direction of emission.
And the directionality changes drastically many times across the frequency range. No speaker can reproduce the same soundfield with the same direction of emission for each frequency.
And that's just the violin. Other instruments are completely different and emit different amounts of energy towards the walls, floor, and ceiling.
 
A practical consequence: violins used to be recorded with microphones situated above and a bit in front of the orchestra. In that direction, violins emit a lot of energy in the 2500-5000 Hz range that is not emitted at all in the direction of the audience. Therefore the recorded sound was very different from the sound heard by the audience. Recording engineers knew that in such recordings it was better to attenuate the treble. It might seem like a modification of the original sound, but it was not; on the contrary, it helped to artificially recreate a violin sound like the one perceived from the audience.
 
So what if we record directly from the listener's position? That way we capture exactly what the listener should hear. The problem is that the original room's acoustics add up with the acoustics of the reproduction room in a way that is completely unbearable.
 
Therefore, recording music is the art of recreating a soundstage that, given an average listening room with an average two-channel setup, is necessarily very far from the original but still enjoyable. For example, the reflections off the wall behind you can't be recorded and reproduced with a two-channel system; they are replaced by new reflections created in the listening room. Which means it's better to eliminate the original ones so that they don't add up with the ones in your own room, coming from the front.
 
Try recording your own hi-fi with a stereo microphone from your favorite listening position, then play the recording back through the hi-fi. No, the microphone is not crappy; that's your room sounding that way! Make another recording with the left and right microphones right in front of the speakers to check. This experiment was one of the biggest surprises of my audiophile life: I had the microphone in hand, closed headphones on my head, and was moving the microphone from the speaker to the listening position and back, and I didn't understand what was happening. Why did the sound change so drastically from the microphone's point of view, while it didn't when I did the same thing with my own ears?
The answer is that the brain is extremely good at removing the tonal balance of the room from the listening experience.
 
All these things make us reconsider the original question about fidelity to the original performance. Most of this fidelity is actually in the hands of the recording and mixing engineers, who have no choice but to recreate an artificial soundstage and an artificial tonal balance that simulate a good listening experience, given that the recording is going to be played on a two-channel system in an average room.
 
So we are left with fidelity to the recording instead of fidelity to the live performance. We can define fidelity for a speaker, but not for a room. At low frequencies, rooms have very strong resonances that amplify some frequencies and not others. Even anechoic rooms are not very anechoic at low frequencies. And anyway, stereo recordings as made in the studio are not suited at all to listening in anechoic rooms; they don't have enough reverberation. Making a room that is neutral at low frequencies is very difficult. Some advise using as many subwoofers as possible, scattered in strategic positions, so that they don't excite the same resonant frequencies of the room.
 
For speakers, the basics of good quality are fairly well understood: they must have a flat frequency response on axis and a smooth frequency response off axis. How much attenuation should they have off axis? I'm not sure there is any standard for this.
Also, in France, a story goes around about Cabasse loudspeakers. Some models were claimed to have an excellent frequency response but were not appreciated by audiophiles. The reason was that they were only good at realistic listening levels. Since home listening is usually done at lower levels, these speakers seemed to lack bass and treble, because the human ear does not have the same frequency response at different levels. This is easy to see on the Fletcher-Munson curves. Thus a colored speaker can give a listening experience closer to the original than a transparent speaker at domestic listening levels.
 
All these parameters make the question of fidelity a very complex one.



 
 
 
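The low-frequency room resonances mentioned in the quoted post can be estimated with the standard axial-mode formula f_n = n·c/(2L) for each room dimension. A small sketch (hypothetical helper name; the 5 m dimension is chosen purely for illustration):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def axial_modes(length_m: float, count: int = 3) -> list[float]:
    """First few axial-mode frequencies (Hz) for one room dimension:
    f_n = n * c / (2 * L)."""
    return [n * SPEED_OF_SOUND / (2 * length_m) for n in range(1, count + 1)]

# A 5 m wall-to-wall dimension resonates at roughly these frequencies:
print([round(f, 1) for f in axial_modes(5.0)])  # [34.3, 68.6, 102.9]
```

These modes boost or cancel bass notes depending on where you sit, which is why multiple scattered subwoofers (each exciting the modes differently) can average the response out.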
Dec 19, 2011 at 2:42 AM Post #57 of 78
 
Room acoustics are one of the culprits, as are the quality and technology of the microphones, microphone placement, etc. But so are the recording and playback methods; you want those to be as transparent as possible, and that's where DXD and DSD come in.
 
No one cares because: 1. the files are ridiculously huge; 2. marketing (10 years ago you needed very expensive equipment to play a DSD file, etc.); 3. people's CD and FLAC collections ("this is already the best quality possible") =P
 
 
Maybe the future of speakers will be robots playing instruments for us? Then that would be reality-fi. (I'm being serious)
 
Dec 19, 2011 at 4:55 AM Post #58 of 78
Cybernetic robots with organic vocal cords grown from human cells?
 
Dec 19, 2011 at 6:12 AM Post #60 of 78
You still need vocal cords for the robot to sing songs. Buying all the instruments would be more expensive than anything else, though, and they'd need to be high quality like the ones used for professional recordings. One robot is nowhere near enough; you'd need a few dozen for certain music.
 
