Laurel vs Yanny : VOTE POLL & Someone please explain!

What do you hear???


  • Total voters: 74
  • Poll closed.
May 29, 2018 at 3:25 PM Post #121 of 145
Jane Doe commuting to work on the subway isn't going to ask a record label to "please compress my favorite Taylor Swift album a little more so I can hear it over the train noise". So no, consumers did not demand squashed, loudened up dogsh&-, recording artists and producers did!

The words "record company", "focus groups" and "R&D" are missing from your list.

It started with "wall of sound" and took off from there. Yea, there was an artist demanding to be louder than the others...
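As an aside, the "squashing" being argued about is easy to show numerically. Below is a toy sketch (a bare clip-and-normalise, not any real mastering chain) of how limiting trades dynamic range for average loudness; the crude peak-minus-RMS figure is only an illustration, not how actual DR-meter readings are computed:

```python
import numpy as np

def crude_dr(x):
    """Very rough dynamic-range figure: peak level minus RMS level, in dB.
    (Real DR-meter readings are computed differently; this only shows the idea.)"""
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(peak / rms)

def squash(x, threshold=0.3):
    """Toy 'loudness war' move: clip the peaks, then normalise back to full scale."""
    y = np.clip(x, -threshold, threshold)
    return y / np.max(np.abs(y))

# A peaky test signal: quiet noise with occasional loud transients (drum-hit-ish).
rng = np.random.default_rng(0)
x = 0.05 * rng.standard_normal(48000)
x[::4800] = 1.0

print(f"DR before: {crude_dr(x):.1f} dB")
print(f"DR after : {crude_dr(squash(x)):.1f} dB")  # lower DR, louder average level
```

The squashed version has the same peak level but a much higher average level, which is exactly why it "wins" a casual loudness comparison.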
 
May 29, 2018 at 3:29 PM Post #122 of 145
It could also be you, the master engineer.

You: "Done. Listen."
Corporate: "It's really great. Beautiful. Needs to be louder."
You: "But, but..."
Co: "You can be artistic all day on your own, ya know?"
You: *makes it louder*
 
May 29, 2018 at 4:52 PM Post #125 of 145
[1] So educate me: WHY do examples of more recent pop "require" even more compression?
[2] Is it for any of the reasons I stated: What consumers play it back on? To compete with the loudness of other artists? To get the mix to gel together? I've got plenty of examples of stuff that gels quite nicely at DR11-14, let alone DR6 and less.

Your two main statements are contradictory. Your point #2 indicates that you're perfectly happy with your ignorance and don't really want to know "WHY"! In which case I'd just be wasting my time providing an explanation. So, do you want to drop the ignorance and really know "why" or not?

G
 
May 29, 2018 at 7:09 PM Post #127 of 145
Your two main statements are contradictory. Your point #2 indicates that you're perfectly happy with your ignorance and don't really want to know "WHY"! In which case I'd just be wasting my time providing an explanation. So, do you want to drop the ignorance and really know "why" or not?

G

You have the floor, go ahead.
 
May 29, 2018 at 11:03 PM Post #128 of 145
Guys, please stick to the topic of sound perception and leave the mastering talk for another thread.

I really liked where the thread was going with sound perception and people's input on how we hear. Let's stick with the Laurel/Yanny topic.
And if you guys want to keep going another round in a more relevant thread, you still need to relax and avoid attacking each other. When you say that sound engineers are doing crap work, of course some sound engineers are going to feel insulted. Not that it's an excuse to call someone a fool.


about laurel/yanny, what I like is that it's not the more typical case of suggestion like with misheard lyrics, where you can "hear" a sentence after someone tells it to you. in this case knowing that some hear yanny is not enough for me to hear it. so even if it's a pattern thing, I'm guessing it's a more fundamental one, more ingrained in our way of interpreting sounds. or maybe some more physiological stuff? I've seen a few suggestions about age, is there anything statistical somewhere to support that hypothesis?
 
May 29, 2018 at 11:29 PM Post #129 of 145
about laurel/yanny, what I like is that it's not the more typical case of suggestion like with misheard lyrics, where you can "hear" a sentence after someone tells it to you. in this case knowing that some hear yanny is not enough for me to hear it. so even if it's a pattern thing, I'm guessing it's a more fundamental one, more ingrained in our way of interpreting sounds. or maybe some more physiological stuff? I've seen a few suggestions about age, is there anything statistical somewhere to support that hypothesis?

In my case, it's impossible for me to hear it as Yanny on any system unless I alter the signal. So far, it's been equally consistently Yanny for my wife. Based on things I've seen here and there, there doesn't seem to be much correlation with age. It seems to have a lot to do with which frequencies are heard more than others.
 
May 29, 2018 at 11:32 PM Post #130 of 145
What is most interesting to me is what an outlier this test is as far as A/B testing goes, and what that really says about our perceptions and reality.

If only they could translate this to a cleaner sound bite, and perhaps to instruments? Which set of notes do you hear, music man? Well, the melody is this, but if I'm in a certain mood the melody is this.

My take on this: maybe it's people who listen for predatory animals vs. people who are more caregivers... People who hear Laurel listen for deeper sounds - big lion close by, DANGER! - while caregivers hear: is that a baby crying in the distance?
 
May 29, 2018 at 11:57 PM Post #131 of 145
What is most interesting to me is what an outlier this test is as far as A/B testing goes, and what that really says about our perceptions and reality.

If only they could translate this to a cleaner sound bite, and perhaps to instruments? Which set of notes do you hear, music man? Well, the melody is this, but if I'm in a certain mood the melody is this.

My take on this: maybe it's people who listen for predatory animals vs. people who are more caregivers... People who hear Laurel listen for deeper sounds - big lion close by - while caregivers hear: is that a baby crying in the distance?
well, there is a clear reason why we separate objective results from subjective observation of reality. we can never be totally sure that our objective result is the definitive model of the real world, but we're absolutely sure that our subjective impression is not, as it's an interpretation through specific and limited senses. it's easier for us to think of it as the real world because we know almost nothing else, but the fact remains.

reading your idea about predators, I was thinking that maybe it's a social thing, like if the person we fear the most has a low voice, then we get laurel? lol, that would be a fun correlation. something like developing a skill to better notice or maybe better ignore the other half of the couple? :deadhorse:. I have no idea why that would be so, but imagine if we start spreading that idea around on the web :imp:. how many people would get punched in the face? I'm so evil.

we can pretty much rule out culture in a wide sense, stuff like native language, and also rule out playback gears, all thanks to the differences found between members of the same family. it's tricky. maybe someone with a budget for useless stuff will look up genetic correlations someday.
 
May 30, 2018 at 12:41 AM Post #132 of 145
The test that would interest me would be to take a large corpus of recordings of people saying 'laurel', and see how many of those recordings ever elicit 'yanny', and when they do. My gut is that we got the zebra case through happenstance of the intrawebs and everyone is making too much of it.
 
May 30, 2018 at 1:09 AM Post #133 of 145
The test that would interest me would be to take a large corpus of recordings of people saying 'laurel', and see how many of those recordings ever elicit 'yanny', and when they do. My gut is that we got the zebra case through happenstance of the intrawebs and everyone is making too much of it.
I'm convinced it's not an issue of understanding human speech, but something born from a crap recording and accidental extra sounds. I don't think anybody imagined that sometimes when you say 'laurel', someone else will understand 'yanny'. that's not likely to be a thing at all.
 
May 30, 2018 at 6:55 AM Post #134 of 145
Guys, please stick to the topic of sound perception and leave the mastering talk for another thread.

Well actually the "mastering talk" and the topic of sound perception are to a large extent the same thing, even though we haven't got that far yet. But as the mod has spoken ...

You have the floor, go ahead.

OK, if you really are serious then start a new thread (or resurrect a relevant one) and I'll explain.

What is most interesting to me is the outlier this test is as far as A/B testing and what that really says about our perceptions and reality.
[1] If only they could translate this to maybe a cleaner sound bite and perhaps instruments?
[1a] Which set of notes do you hear music man?
[1b] Well the melody is this but if I'm in a certain mood the melody is this.
[2] My take on this may be people who listen for predatorial animals vs people who are more care givers... People who hear Laurel listen for deeper sounds - big lion close by, DANGER! While care givers hear; is that a baby crying in the distance??

1. They/We do translate this into instruments/music and have done for many centuries; it's a fundamental feature of what differentiates music from sound or noise in the first place! ...
1a. This is a fundamental question as far as music is concerned, but to address it requires asking a few even more fundamental questions.

Firstly, what is a "note"? This turns out to be a much more complicated question than it appears, but to keep it simple for now, a "note" can be described as a set of frequencies (a fundamental and a number of harmonics/overtones) which we recognise as a musical note. Secondly, what is a "melody"? Again, an apparently simple question which in reality is complex, but again keeping it simple, we can describe a "melody" as a set of individual notes played in sequence, where the pitch relationships between the notes enable the listener to perceive/recognise a tune. Thirdly, there is the case of a set of different notes played simultaneously; again, the pitch relationships between those notes can cause that set to be perceived/recognised as a single entity, which we call a "chord". And lastly, we can have a set of chords played in sequence/progression, which can cause the perception of what we call "harmony".

Along with rhythm, these 3 elements constitute the fundamental building blocks of what we call "music", but you'll notice that ALL of these building blocks rely entirely on "recognition", on "patterns" generated/interpreted by our perception. Music itself is therefore just a perception; it doesn't really exist! This is why there is no comprehensive definition of the term "music", even though to most of us the difference between music and noise or sound is obvious. Furthermore, it's been demonstrated that while creating patterns/perceptions is an innate ability, creating the patterns/perceptions which define "music" is not; it's a learned response.
1b. Historically and generally, this principle is employed inversely to how you describe it. Instead of the melody changing according to your mood, your mood is changed according to the melody (or rather, according to all the fundamental building blocks of music, including melody). However, all these basic building blocks are inter-related; how you perceive a melody is dependent on the other building blocks, for example the harmony. Essentially, "the melody is this" (or rather, is perceived as "this") with one harmony, but with a different harmony "the melody is this" (perceived differently). This is in fact a basic compositional tool which has been extensively employed and explored starting around 600 years ago. More directly addressing your point though: there's another common compositional tool, called a "counter-melody", which is essentially a second melody subordinate to the main melody. How and where this counter-melody is used determines whether you perceive it as the second melody, the main melody or indeed not as a melody at all, but instead as just a texture or harmony accompanying the main melody. There is an entire branch of music composition purely dedicated to this, called "Counterpoint", which started being developed around 500 years ago (during the Renaissance) but took nearly a century of development to reach its peak of sophistication (in what is called the High or Late Baroque period). The greatest master of counterpoint was JS Bach, who at times employed up to 4 different simultaneous melodies, and which one (or ones) we perceive as the actual melody at any particular point in time forms the very basis of the piece of music in the first place!
With modern achievements in science and technology and our rapidly changing world, we tend to assume that the depth and sophistication of knowledge, say, 400 years ago was very simple/primitive compared to today. In the vast majority of cases that assumption is of course entirely correct, but not as far as music composition and perception are concerned. In some/many respects the situation with music is actually the exact opposite: the vast majority of music created today is very unsophisticated, simple and primitive compared to most of the (surviving) western music created 400 years ago!

2. I'm not sure I agree with that. For example, yes, a lion can produce relatively low frequency sounds (growls for instance), but when hunting they don't growl; they stalk quietly, and the first sound of a "big lion close by" is just as likely, if not more likely, to be relatively high pitched - say a small twig snapping or leaves/grass rustling. We are all predisposed to expect the lowest (fundamental) frequency of any particular sound to be dominant; it's a "pattern" so expected that the brain/perception will simply invent that frequency when it's missing. However, under conditions where the number of harmonics is restricted, this tendency to invent the "missing fundamental" becomes variable: some people will perceive/hear the missing fundamental and some will primarily hear the overtones instead. It's entirely possible that this principle of the "missing fundamental" is playing a part in what people are perceiving, and/or some physiological difference - for example, the resonant frequency of an individual's ear canal coinciding (or not) with an important freq/harmonic in the Yanny "pattern", causing their brain to latch on to the Yanny pattern in preference to the Laurel pattern.
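For anyone who wants to poke at the "missing fundamental" themselves, here's a minimal numpy sketch that builds such a tone. The 200 Hz fundamental and the choice of harmonics 2-5 are arbitrary, for illustration only:

```python
import numpy as np

fs = 8000                # sample rate (Hz); 1 second of signal -> 1 Hz FFT bins
t = np.arange(fs) / fs
f0 = 200                 # the "missing" fundamental

# Build a tone from harmonics 2..5 of 200 Hz only (400, 600, 800, 1000 Hz).
# There is literally no energy at 200 Hz, yet listeners typically still
# report a 200 Hz pitch - the brain supplies the missing fundamental.
tone = sum(np.sin(2 * np.pi * f0 * k * t) for k in range(2, 6))

spectrum = np.abs(np.fft.rfft(tone))  # with 1 Hz bins, index == frequency in Hz
print("magnitude at 200 Hz:", spectrum[200])
print("magnitude at 400 Hz:", spectrum[400])
```

Writing `tone` out to a WAV file and listening is the interesting part, of course; the FFT printout just confirms the 200 Hz bin really is empty while the harmonics are present.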

we can pretty much rule out culture in a wide sense, stuff like native language. and also rule out playback gears. all thanks to the differences found inside members of a same family.

I don't believe we can "rule out playback gears". If a system is incapable of reproducing lower freqs then we're more likely to hear Yanny; that's why the NYT tool works and most or all of us can hear Yanny using the slider, even if we only ever heard Laurel previously (and vice versa). If a system is bass heavy (or mid/treble light) then we're more likely to hear Laurel. Only if the playback system reproduces both lower and higher freqs moderately well does it become less of a variable IMHO. I believe there are 3 basic variables at play here: the balance of freqs reaching our ears (which is playback system/environment dependent), the response of our ears to that balance of freqs (whether we still have good HF response, for example) and the pattern matching mechanism of our brain/perception - which pattern we latch on to at any particular moment in time.
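The slider effect is easy to reproduce numerically: tilting the frequency balance changes which band dominates. A minimal numpy sketch, where the 300 Hz and 3000 Hz components are arbitrary stand-ins for the two competing cues (not measurements of the actual clip):

```python
import numpy as np

fs = 16000
t = np.arange(fs) / fs
# Stand-ins for the two cues: low-band energy ~ "Laurel", high-band ~ "Yanny".
x = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)

def band_energy(sig, lo_hz, hi_hz):
    """Summed spectral magnitude between lo_hz and hi_hz (1 Hz bins here)."""
    return np.abs(np.fft.rfft(sig))[lo_hz:hi_hz].sum()

def highpass(sig, cutoff_hz):
    """Brick-wall high-pass by zeroing FFT bins below the cutoff."""
    spec = np.fft.rfft(sig)
    spec[:cutoff_hz] = 0  # with 1 Hz bins, index == frequency in Hz
    return np.fft.irfft(spec, len(sig))

y = highpass(x, 1000)
print("low/high balance before:", band_energy(x, 0, 1000) / band_energy(x, 1000, 8000))
print("low/high balance after :", band_energy(y, 0, 1000) / band_energy(y, 1000, 8000))
```

Before filtering, the low band dominates; after the high-pass it's essentially gone, which is the spectral tilt the NYT slider (or a bass-shy playback chain) applies to the real clip.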

G
 
May 30, 2018 at 7:14 AM Post #135 of 145
Playback gear does seem to matter when a given individual is near a tipping point between hearing Yanny vs Laurel. But people have different tipping points, and the tipping point can shift around over time for a given person, so we have two dimensions of variability in how brains process acoustic signals.
 
