Head-Fi.org › Forums › Equipment Forums › Sound Science › How is timbre in sound represented electronically or digitally?

How is timbre in sound represented electronically or digitally? - Page 3

post #31 of 49



Agreed 110%. Today's recordings are, IMO, mostly garbage. You really have to search for something decent out there these days. IMO the studios couldn't care less about SQ; we are the minority in what we expect from music today.

Quote:
Originally Posted by 9pintube View Post

Quote:


Agree 100%, Guidostrunk! It reminds me of "Quadraphonic" sound in the '70s and later the so-called "5.1/7.1 Surround Sound" of today... FAKE, FAKE, FAKE. Also JMO. Give me a natural 2-ch. stereo mix from a tube/analog board recording any day; hell, I'll take a real mono mix/recording over the whistles and bells of recordings today. JMO, again.
 

post #32 of 49
Thread Starter 


Wow, I had never thought that this thread would take off like this! I learned lots from your discussion guys! Thanks!

 

Quote:

Originally Posted by 9pintube View Post

PS---SOUND IS SOUND; it does not matter whether it's represented electronically or digitally.
 


@9pintube: Sorry, I more precisely meant how an audio signal carrying timbre is represented electronically or digitally: electronically it is most often a voltage, and digitally (i.e. discretely) it is a binary representation produced by a modulation technique such as PCM. I think I've built a satisfying picture of that now, through this thread and some reading in textbooks! :-)

 

Thanks again!

post #33 of 49
Quote:
Originally Posted by Anaxilus View Post
Then what measure would relate to the quality of a note's body, fullness or roundness? That can't be FR, since I could have two phones with flat FR and one could sound thin or dry and the other lush and wet.


As I said very early in this thread, timbre is the content and relation of various frequencies. So it is indeed basically frequency response, though it can include resonances. There can also be time-based components, as Nick's post #28 shows. But at heart timbre is purely frequency response, so two headphones with the same response will reproduce the same timbre for a given source. Other things can still differ between headphones (and speakers), including distortion and ringing, but those are outside the definition of timbre.

 

--Ethan
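To make that concrete, here is a minimal sketch in plain Python (all harmonic weights are invented for illustration): two tones share a 220 Hz fundamental, so they have the same pitch, but their partials are balanced differently, so they have different timbres. A single-bin DFT reads the balance back out of the waveform:

```python
import math

SR = 4000          # sample rate in Hz (arbitrary for this sketch)
F0 = 220.0         # shared fundamental: both tones have the same pitch
N = 2000           # half a second of samples (an integer number of cycles)

def tone(harmonic_amps):
    """Sum sinusoidal partials at integer multiples of F0."""
    return [sum(a * math.sin(2 * math.pi * F0 * k * n / SR)
                for k, a in enumerate(harmonic_amps, start=1))
            for n in range(N)]

# Invented harmonic balances: "bright" keeps strong upper partials,
# "dull" is dominated by its fundamental.
bright = tone([1.0, 0.8, 0.7, 0.6, 0.5])
dull = tone([1.0, 0.2, 0.1, 0.05, 0.02])

def partial_level(signal, freq):
    """Single-bin DFT: amplitude of the partial at `freq`."""
    re = sum(x * math.cos(2 * math.pi * freq * n / SR) for n, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * n / SR) for n, x in enumerate(signal))
    return 2 * math.hypot(re, im) / len(signal)

# Same fundamental level, hence the same pitch:
print(round(partial_level(bright, F0), 3), round(partial_level(dull, F0), 3))    # 1.0 1.0
# Very different 3rd-harmonic levels, hence different timbre:
print(round(partial_level(bright, 3 * F0), 3), round(partial_level(dull, 3 * F0), 3))  # 0.7 0.1
```

With identical frequency response, both balances would pass through a headphone unchanged, which is the point about two headphones with the same response reproducing the same timbre.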

post #34 of 49

Quote:

Originally Posted by EthanWiner View Post


As I said very early in this thread, Timbre is the content and relation of various frequencies. So it is indeed basically frequency response, though it can include resonances.  

--Ethan

Ethan, where does it say that TIMBRE is basically a frequency response? Just wondering...

 

post #35 of 49

> Ethan, where does it say that TIMBRE is basically a frequency response? Just wondering...

 

.................... sorry, I'm not Ethan ... dot dot dot 

 

 

A tone can be described by the following parameters:

- the tone pitch or fundamental tone (frequency in Hz)

- level of sound (amplitude, SPL or dynamics like piano, forte etc.)

- timbre or Klangfarbe (acoustically approximated with partial tones or overtones that resonate with the fundamental tone)

- length (in seconds or note value)

 

(translated from the German Wikipedia article "Ton"; see also http://en.wikipedia.org/wiki/Timbre#Harmonics)

 

"When the orchestral tuning note is played, the sound is a combination of 440 Hz, 880 Hz, 1320 Hz, 1760 Hz and so on. The balance of the amplitudes of the different frequencies is a major factor in the characteristic sound of each instrument."

 

 

from cnx.org:

"Timbre is caused by the fact that each note from a musical instrument is a complex wave containing more than one frequency. For instruments that produce notes with a clear and specific pitch, the frequencies involved are part of a harmonic series. For other instruments (such as drums), the sound wave may have an even greater variety of frequencies. We hear each mixture of frequencies not as separate sounds, but as the color of the sound. Small differences in the balance of the frequencies - how many you can hear, their relationship to the fundamental pitch, and how loud they are compared to each other - create the many different musical colors.

The harmonics at the beginning of each note - the attack - are especially important for timbre, so it is actually easier to identify instruments that are playing short notes with strong articulations than it is to identify instruments playing long, smooth notes."

 

Emphasis by me.
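The highlighted point about the attack can be sketched the same way (plain Python, all numbers invented): a sine sustained at constant level has essentially no energy away from its pitch, while a sharply plucked, exponentially decaying version of the same note spreads measurable energy to an off-pitch probe frequency:

```python
import math

SR = 4000                      # sample rate, Hz
F0 = 200.0                     # the note's pitch
N = 2000                       # half a second (an integer number of cycles)

steady = [math.sin(2 * math.pi * F0 * n / SR) for n in range(N)]
# "Plucked" version: same pitch, but with a sharp 50 ms exponential decay.
pluck = [math.exp(-n / (0.05 * SR)) * x for n, x in enumerate(steady)]

def level(signal, freq):
    """Single-bin DFT magnitude at `freq`."""
    re = sum(x * math.cos(2 * math.pi * freq * n / SR) for n, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * n / SR) for n, x in enumerate(signal))
    return math.hypot(re, im) / len(signal)

# Energy at an off-pitch probe (300 Hz) relative to the pitch (200 Hz):
ratio_steady = level(steady, 300) / level(steady, 200)
ratio_pluck = level(pluck, 300) / level(pluck, 200)
print(ratio_steady < 1e-6, ratio_pluck > 0.01)  # True True
```

The sharper the attack/decay, the wider the spread of frequencies, which is consistent with the quote's claim that the attack is especially revealing of an instrument's identity.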

=> A different frequency response can affect the timbre greatly and is probably the main factor of influence.

=> So the idea of using an EQ (provided the speaker or headphone doesn't distort too much) to shape the timbre doesn't seem so bad. wink.gif

 

And some pics and sound files:

pure sine:

[image: A1Schwingung.gif — waveform of a pure sine tone]

with crazy overtones:

[image: Mitobertoenen.gif — the same fundamental with strong overtones added]

Now imagine your headphone had a dip in the frequency response at the frequency of the 5th harmonic (4th overtone)... 
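That thought experiment can be written down directly; the headphone, the dip depth, and the source amplitudes below are all invented purely for illustration:

```python
import math

F0 = 220.0
# Invented source timbre: a sawtooth-like 1/k rolloff over six partials.
source_amps = {k: 1.0 / k for k in range(1, 7)}

def headphone_gain(freq):
    """Hypothetical headphone: flat, except a -12 dB dip near the
    5th harmonic (5 * 220 = 1100 Hz)."""
    if abs(freq - 5 * F0) < 50:
        return 10 ** (-12 / 20)   # -12 dB is a gain of about 0.25
    return 1.0

# The reproduced timbre is the source balance scaled per partial:
reproduced = {k: a * headphone_gain(k * F0) for k, a in source_amps.items()}
for k in sorted(source_amps):
    print(k, round(source_amps[k], 3), round(reproduced[k], 3))
```

The 5th partial drops from 0.2 to about 0.05 while everything else passes through unchanged, i.e. the reproduced timbre no longer matches the source timbre. An EQ boost at the same frequency would (distortion aside) undo exactly this kind of change.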


Edited by xnor - 11/5/10 at 10:57am
post #36 of 49
Quote:
Originally Posted by xnor View Post


The topic gets under your skin, yet you were the one who asked?

 

@Guidostrunk: The answer is obvious. There's more to it than just frequency response.

 

^ Way to chicken out w/ that answer.

 

 

I never brought up EQ.  YOU DID.  You can stop with the straw men now.  I think my experiences and opinions on the matter are being represented well enough.  Enough not to have to engage further non-topical commentary.  
 

post #37 of 49

@ nick_charles

 

Thank You!

post #38 of 49
Quote:
Originally Posted by Anaxilus View Post

I never brought up EQ.  YOU DID.  You can stop with the straw men now.  I think my experiences and opinions on the matter are being represented well enough.  Enough not to have to engage further non-topical commentary.  


Your original question is:
> Then what measure would relate to the quality of a note's body, fullness or roundness?

> That can't be FR, since I could have two phones with flat FR and one could sound thin or dry and the other lush and wet.

 

Here you assume that FR, and therefore adjusting the FR with an EQ, doesn't relate to the quality of a note that is reproduced by a headphone.

 

I mentioned EQ in a non-serious way, and offered to look at a sound of your choosing to try to find why and how two different headphones reproduce it (regarding timbre or the qualities you described).

You then answered that the topic gets under your skin, which I don't really get.

 

The only straw-man argument I can see here is that you (and others) insist that I think EQing makes headphones and even speakers identical!?

In fact, you are evading your own questions, which I don't really get either.


Edited by xnor - 11/6/10 at 7:57am
post #39 of 49

Hi

 

Slightly off topic, but still pertinent to the discussion. I found the following in the FAQ on Beyerdynamic's website:

 

http://europe.beyerdynamic.com/service/faqs/kopfhoerer.html

 

What is diffuse-field equalisation?

Have you ever wondered why a frequency response curve is almost never included with headphones? I can let you in on the secret: they look terrible! Such an erratic frequency response graph would hardly encourage customers to make a purchase. What the customer wants in the end is something that is linear. Uncoloured. Solid.

But why do these frequency response curves look so horrible? And why do you not clearly hear these glaring leaps and drop-offs?

 

How we hear

From childhood on, humans are accustomed to perceiving acoustic events. We grow up with a variety of sound sources and get used to them. The baby rattle, the clatter of dishes from the kitchen, pedestrians on the street, music from loudspeakers, etc. – all of these sound sources have something in common: they are located relatively far from the ear.

Before the sound from these sources reaches our eardrum, it is coloured by the shape of our head and our ear. Depending on the angle, many frequencies are accentuated and others are attenuated. With time, we learn these frequency patterns and are able to do things such as recognise the direction in which the sound source is located. Therefore, we do not hear sound as it was produced at the source, but instead in coloured form.

 

Loudspeakers and headphones

When we listen to music over loudspeakers with a linear frequency response curve, we are actually hearing a spectrum that is influenced by the distinctive shape of our head. We perceive this as linear.

When listening with headphones, the headphones do not even try to generate any effects on the outer ear, since the sound source is so close to the ear. What comes out of the headphones arrives at the eardrum in relatively uncoloured form. In order for the headphones to still sound natural, the sound must be coloured so that it is as similar as possible to the colourations caused by the shape of the head and ear. In other words, the headphones must have the frequency response set so that it sounds like the sound is coming from a distant source.

 

Diffuse-field equalisation

In order to adjust headphones to our listening habits, we must first use technical means to measure the colourations caused by our head. For example, an artificial head with microphones in the ears is used. When this artificial head is exposed to sound, you can use the microphones to measure how the sound would be perceived by us instead of the artificial head.

So that the headphones do not have a sound that always seems to come from one direction, but instead can reproduce all sound directions equally, the artificial head must be exposed to sound from many directions and the result averaged. This does not reproduce any single direction perfectly, but no direction is completely suppressed.

At beyerdynamic, there is an echo chamber for this purpose. It is a small, five-sided room with acoustic sails on the ceiling that looks quite bare and empty. The fascinating thing about it is that, although it is the size of a child's room, it sounds like a cathedral! An octahedron loudspeaker that radiates sound in eight directions is in one corner. If you are far enough away from the loudspeaker, the strong echo causes you to no longer be in the direct field, but instead in the diffuse field of the loudspeaker, i.e. the area in which the sound reflected off the walls is louder than the sound coming directly from the loudspeaker.

If artificial head measurements are carried out in this chamber, many sound directions overlap due to the echo, allowing us to obtain the required averaging. This averaging (the measurement in the diffuse field) gives diffuse field equalisation its name.

In order to equalise the headphones, they are placed on the artificial head and the frequency response is adjusted so that the measured frequency behaviour corresponds to that of the diffuse field.

 

Discussion

Since the mechanical and electronic options for changing the frequency response of headphones are limited, the equalisation cannot be carried out perfectly. Different headphones are also adjusted to various tastes. It is by no means the case that all diffuse-field equalised headphones sound the same. In addition, the frequency patterns for directional hearing depend on the shape of the head and ears. For this reason, they are a little different for everyone. Hence, measuring with an artificial head is a pretty arbitrary choice.

Diffuse-field equalisation is therefore an important part of improving localisation with headphones and avoiding “in-head localisation”, but it is not guaranteed to work and is no replacement for extensive test listening.
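The averaging step described above can be sketched roughly as follows; the direction set, the dB values, and the choice of power-domain averaging are all my own assumptions for illustration, not beyerdynamic's actual procedure:

```python
import math

freqs = [250, 1000, 3000, 8000]          # Hz, a few example bands
# Invented dummy-head magnitudes (dB at the eardrum), one row per direction:
measurements = {
    "front": [0.0, 2.0, 10.0, 1.0],
    "side":  [0.0, 3.0, 12.0, -2.0],
    "rear":  [0.0, 1.0, 6.0, -4.0],
}

def power_average_db(rows):
    """Average dB magnitudes in the power domain, band by band."""
    out = []
    for col in zip(*rows):
        mean_power = sum(10 ** (db / 10) for db in col) / len(col)
        out.append(10 * math.log10(mean_power))
    return out

diffuse_target_db = power_average_db(list(measurements.values()))

# A headphone measured on the same dummy head; the needed correction is
# the per-band difference between target and measurement:
headphone_db = [1.0, 2.0, 5.0, -1.0]
eq_db = [t - h for t, h in zip(diffuse_target_db, headphone_db)]
print([round(x, 1) for x in diffuse_target_db])
print([round(x, 1) for x in eq_db])
```

The last step mirrors the FAQ's final paragraph: the headphone is adjusted until its measured response corresponds to the diffuse-field target.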

 

Regards

 

Neels

post #40 of 49
Quote:
Originally Posted by xnor View Post


Your original question is:
> Then what measure would relate to the quality of a note's body, fullness or roundness? That can't be FR, since I could have two phones with flat FR and one could sound thin or dry and the other lush and wet.

Here you assume that FR, and therefore adjusting the FR with an EQ, doesn't relate to the quality of a note that is reproduced by a headphone. [...] In fact, you are evading your own questions, which I don't really get either.


Maybe going back and rereading might clarify that not every comment revolves around you.  I'm sure somehow in your little cerebral enclosure, construing my jovial comment about PTED to be an argument on the topic at hand rather than an explanation of my reaction to your vague commentary does not constitute a straw man in your parallel universe.  How about we drop it since I'm obviously speaking Farsi to you.  

 

I also 'assume' that adjusting FR doesn't because I can use anything from my Clip+ to my 30-band AudioControl and KNOW that it doesn't from experience. As to your hypothetical test of a sound file of my choosing: what headphones do you propose to compare? Wouldn't you have to pick phones I'm familiar with? Otherwise the test would be reliant on your Fletcher-Munson curves and on trust in you, of which I have very little at this point. How about you explain the parameters of your groundbreaking test rather than ask me for a sound file, which you can't seem to provide on your own.

 


Edited by Anaxilus - 11/6/10 at 9:01am
post #41 of 49
Quote:
Originally Posted by Chiron View Post

Hi

Slightly off topic, but still pertinent to the discussion. I found the following in the FAQ on Beyerdynamic's website: [...]

Diffuse-field equalisation is therefore an important part of improving localisation with headphones and avoiding “in-head localisation”, but it is not guaranteed to work and is no replacement for extensive test listening.

Regards

Neels


Great post, Neels! I'm not a professional audio engineer, but my ears and experience have convinced me that coloration is necessary in IEMs and headphones to achieve that natural and true sound. Invariably that gets dumped on by 'experts' and other 'neutrality' zealots. I think many are just freaked out by the thought of the ground being pulled out from under their feet, upsetting their dogmatic perception of the universe. I have a few articles and posts from research that reinforce your point. I'll be adding your comment to my archive. Thanks. Ditto, Mr. Charles.

post #42 of 49
Quote:
Originally Posted by Anaxilus View Post

Maybe going back and rereading might clarify that not every comment revolves around you.  I'm sure somehow in your little cerebral enclosure, construing my jovial comment about PTED to be an argument on the topic at hand rather than an explanation of my reaction to your vague commentary does not constitute a straw man in your parallel universe.  How about we drop it since I'm obviously speaking Farsi to you.  

 

I also 'assume' that adjusting FR doesn't because I can use anything from my Clip+ to my 30-band AudioControl and KNOW that it doesn't from experience.


Wow you really are twisting things.

 

I asked you nicely to find a tone that sounds lush, wet etc. with two headphones, as you mentioned in your original question, so that we could analyze it.

 

Then you said this thread has run its course and that the topic gets under your skin (which you edited/removed today; why?), and I'm just asking myself: what is going on?

 

And the craziest part: now you say I'm falsely construing your comments as an argument on the topic at hand (that adjusting FR does not affect timbre), and in the next paragraph you argue just that?!?

In fact, you "KNOW that it doesn't from experience".

While it clearly has to be affected, based on the very definition of timbre. wtf?

 

Let's rather not make assumptions about somebody else's cerebral enclosure size... wink.gif


Edited by xnor - 11/6/10 at 11:19am
post #43 of 49

post #44 of 49
Quote:
Originally Posted by 9pintube View Post
Ethan, Where does it say that TIMBRE is basically a frequency response?


As I wrote early in this thread:

 

Quote:
Originally Posted by EthanWiner View Post
Timbre is basically frequency response or, more accurately, the balance of several frequencies to each other. In a musical instrument there are usually also resonances that define its timbre. A resonance is not just a frequency peak, but the peak is usually narrow and has a time-based property that decays over time even after the source ceases. A cello has many such resonant peaks, and the number and strength and distribution of those peaks is what makes every cello sound different from every other cello.
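The "time-based property" Ethan describes can be sketched as an exponentially damped sinusoid (the frequency and decay constant below are arbitrary example values); a narrower resonant peak corresponds to a longer decay, and vice versa:

```python
import math

F_RES = 200.0    # resonant frequency in Hz (arbitrary)
TAU = 0.1        # decay time constant in seconds (arbitrary)

def free_decay(t):
    """Ringing after the source ceases: a damped sinusoid."""
    return math.exp(-t / TAU) * math.sin(2 * math.pi * F_RES * t)

# The envelope falls to 1/e after TAU seconds and halves every TAU*ln(2):
def envelope(t):
    return math.exp(-t / TAU)

print(round(envelope(0.0), 3), round(envelope(TAU), 3), round(TAU * math.log(2), 4))
# prints: 1.0 0.368 0.0693
```

A cello body, per Ethan's description, would be a sum of many such damped modes at different frequencies, strengths, and decay times.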
post #45 of 49
Quote:
Originally Posted by xnor View Post

 

(which you edited/removed today, why?)

 

The only thing edited was a premature apology.  Just let it go.  Breathe.....


Edited by Anaxilus - 11/7/10 at 12:16pm