Guys, please stick to the topic of sound perception and leave the mastering talk for another thread.
Well actually the "mastering talk" and the topic of sound perception are to a large extent the same thing, even though we haven't got that far yet. But as the mod has spoken ...
You have the floor, go ahead.
OK, if you really are serious then start a new thread (or resurrect a relevant one) and I'll explain.
What is most interesting to me is what an outlier this test is as far as A/B testing goes, and what that really says about our perceptions and reality.
[1] If only they could translate this to maybe a cleaner sound bite and perhaps instruments?
[1a] Which set of notes do you hear music man?
[1b] Well the melody is this but if I'm in a certain mood the melody is this.
[2] My take on this may be people who listen for predatory animals vs. people who are more caregivers... People who hear Laurel listen for deeper sounds: big lion close by, DANGER! While caregivers hear: is that a baby crying in the distance??
1. They/We do translate this into instruments/music and have done so for many centuries; it's a fundamental feature of what differentiates music from sound or noise in the first place! ...
1a. This is a fundamental question as far as music is concerned, but to address it requires asking a few even more fundamental questions.

Firstly, what is a "note"? This turns out to be a much more complicated question than it appears, but to keep it simple for now, a "note" can be described as a set of frequencies (a fundamental and a number of harmonics/overtones) which we recognise as a musical note. Secondly, what is a "melody"? Again, an apparently simple question which in reality is complex, but again to keep it simple, we can describe a "melody" as a set of individual notes played in sequence, where the pitch relationships between the notes enable the listener to perceive/recognise a tune or melody. Thirdly, there is the issue of a set of different notes played simultaneously; again, pitch relationships between those notes can cause that set of notes to be perceived/recognised as a single entity, which we call a "chord". And lastly, we can have a set of chords played in sequence/progression, which can cause the recognition/perception of what we call "harmony".

Along with rhythm, these 3 elements constitute the fundamental building blocks of what we call "music", but you'll notice that ALL of these building blocks rely entirely on "recognition", on "patterns" generated/interpreted by our perception. Music itself is therefore just a perception; it doesn't really exist! This is why there is no comprehensive definition of the term "music", even though to most of us the difference between music and noise or sound is obvious. Furthermore, it's been demonstrated that while creating patterns/perception is an innate ability, creating the patterns/perception which define "music" is not; it's a learned response.
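To make that "patterns, not absolute frequencies" point concrete, here's a toy sketch. The note values and the `overtones`/`interval_pattern` helpers are purely illustrative, not any standard representation: a "note" is modelled as a fundamental plus its harmonic series, and a "melody" is recognised by its successive pitch ratios, so a transposed copy (every frequency scaled by the same factor) still matches even though no absolute frequency is shared.

```python
import math

def overtones(fundamental_hz, count=5):
    """A 'note' as a set of frequencies: the fundamental plus its harmonics."""
    return [fundamental_hz * k for k in range(1, count + 1)]

def interval_pattern(melody_hz):
    """Successive pitch ratios between notes -- the relationships our
    perception latches onto, rather than the absolute frequencies."""
    return [round(b / a, 3) for a, b in zip(melody_hz, melody_hz[1:])]

tune = [440.0, 494.0, 523.0]          # roughly A4, B4, C5
transposed = [f * 1.5 for f in tune]  # same tune, shifted up a fifth

print(overtones(110, 3))                                        # [110, 220, 330]
print(interval_pattern(tune) == interval_pattern(transposed))   # True
```

The transposed melody shares no frequencies with the original, yet the interval pattern is identical, which is why we hear it as "the same tune".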
1b. Historically and generally, this principle is employed inversely to how you describe: instead of the melody changing according to your mood, your mood is changed according to the melody (or rather, according to all the fundamental building blocks of music, including melody). However, all these basic building blocks are inter-related; how you perceive a melody is dependent on the other building blocks, for example the harmony. Essentially, "the melody is this" (or rather, is perceived as "this") with one harmony, but with a different harmony "the melody is this" (perceived differently). This is in fact a basic compositional tool which has been extensively employed and explored, starting around 600 years ago. More directly addressing your point though: there's another common compositional tool, called a "counter-melody", which is essentially a second melody subordinate to the main melody. However, how and where this counter-melody is used determines whether you perceive it as the second melody, the main melody or indeed not as a melody at all, but instead as just a texture or harmony accompanying the main melody. There is an entire branch of music composition purely dedicated to this, called "Counterpoint", which started being developed around 500 years ago (during the Renaissance) but took nearly a century of development to reach its peak of sophistication (in what is called the High or Late Baroque period). The greatest master of counterpoint was JS Bach, who at times employed up to 4 different simultaneous melodies, and which one (or ones) we perceive as the actual melody at any particular point in time forms the very basis of the piece of music in the first place!
With modern achievements in science and technology and our rapidly changing world, we tend to assume that the depth and sophistication of knowledge, say, 400 years ago was very simple/primitive compared to today. In the vast majority of cases that assumption is of course entirely correct, but not as far as music composition and perception are concerned. In some/many respects the situation with music is actually the exact opposite: the vast majority of music created today is very unsophisticated, simple and primitive compared to most of the (surviving) western music created 400 years ago!
2. I'm not sure I agree with that. For example, yes, a lion can produce relatively low freq sounds (growls for instance), but when hunting they don't growl, they stalk quietly, and the first sound of a "big lion close by" is just as likely, if not more likely, to be relatively high pitched, say a small twig snapping or leaves/grass rustling, etc. We are all predisposed to expect the lowest (fundamental) frequency of any particular sound to be dominant; it's a "pattern" which is so expected that the brain/perception will simply invent that frequency when it's missing. However, under conditions where the number of harmonics is restricted, this tendency to invent the "missing fundamental" becomes variable: some people will perceive/hear the missing fundamental and some will primarily hear the overtones instead. It's entirely possible that this principle of the "missing fundamental" is playing a part in what people are perceiving, and/or some physiological difference, for example the resonant frequency of an individual's ear canal coinciding (or not) with an important freq/harmonic in the Yanni "pattern", causing their brain to latch on to the Yanni pattern in preference to the Laurel pattern.
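You can demonstrate the "missing fundamental" numerically. The sketch below (sample rate, frequencies and the simple autocorrelation pitch tracker are all illustrative choices, not any standard tool) builds a tone from only the 2nd to 5th harmonics of 200 Hz, so there is literally zero energy at 200 Hz, yet the waveform's strongest repetition period is still 1/200 s, which is the period the ear/brain tends to report as pitch:

```python
import math

SR = 8000  # sample rate in Hz (assumed for this sketch)

def missing_fundamental(f0=200.0, harmonics=(2, 3, 4, 5), dur=0.1):
    """A tone built only from overtones of f0 -- no energy at f0 itself."""
    n = int(SR * dur)
    return [sum(math.sin(2 * math.pi * f0 * h * t / SR) for h in harmonics)
            for t in range(n)]

def pitch_by_autocorrelation(signal, min_hz=100, max_hz=1000):
    """Estimate pitch as the lag at which the waveform best repeats --
    a crude stand-in for the brain's periodicity detection."""
    best_lag, best_score = None, float("-inf")
    for lag in range(SR // max_hz, SR // min_hz + 1):
        score = sum(signal[i] * signal[i - lag] for i in range(lag, len(signal)))
        if score > best_score:
            best_score, best_lag = score, lag
    return SR / best_lag

tone = missing_fundamental()
print(round(pitch_by_autocorrelation(tone)))  # -> 200, the absent fundamental
```

The 400/600/800/1000 Hz components all line up every 40 samples (1/200 s), so the detector "invents" 200 Hz exactly as described above; restrict the harmonics and the winning lag becomes much less clear-cut.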
We can pretty much rule out culture in a wide sense, stuff like native language, and also rule out playback gears, all thanks to the differences found within members of the same family.
I don't believe we can "rule out playback gears". If a system is incapable of reproducing lower freqs, then we're more likely to hear Yanni; that's why the NYT tool works and most or all of us can hear Yanni using the slider, even if we only ever heard Laurel previously (and vice versa). If a system is bass heavy (or mid/treble light), then we're more likely to hear Laurel. Only if the playback system reproduces both lower and higher freqs moderately well does it become less of a variable, IMHO. I believe there are 3 basic variables at play here: the balance of freqs reaching our ears (which is playback system/environment dependent), the response of our ears to that balance of freqs (whether we still have good HF response, for example) and the pattern-matching mechanism of our brain/perception, i.e. which pattern we latch on to at any particular moment in time.
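A minimal sketch of that first variable, the spectral balance a playback chain delivers. Everything here is an illustrative assumption (the 300 Hz / 3 kHz stand-in tones, the 500 Hz cutoff, the one-pole filter as a crude model of a bass-heavy system), but it shows the mechanism: a signal with equal low and high energy comes out of a treble-light chain with the high band strongly suppressed, tilting which "pattern" dominates at the ear.

```python
import math

SR = 8000  # sample rate in Hz (assumed for this sketch)

def two_band_tone(low_hz=300.0, high_hz=3000.0, dur=0.1):
    """Equal-amplitude low and high components, standing in for the
    lower-freq and higher-freq parts of the clip."""
    n = int(SR * dur)
    return [math.sin(2 * math.pi * low_hz * t / SR) +
            math.sin(2 * math.pi * high_hz * t / SR) for t in range(n)]

def one_pole_lowpass(signal, cutoff_hz):
    """Crude model of a bass-heavy / treble-light playback chain:
    y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    a = 1.0 - math.exp(-2 * math.pi * cutoff_hz / SR)
    out, acc = [], 0.0
    for x in signal:
        acc += a * (x - acc)
        out.append(acc)
    return out

def amplitude_at(signal, f_hz):
    """Amplitude of the component at f_hz (a single DFT bin)."""
    n = len(signal)
    w = 2 * math.pi * f_hz / SR
    re = sum(x * math.cos(w * t) for t, x in enumerate(signal))
    im = sum(x * math.sin(w * t) for t, x in enumerate(signal))
    return 2.0 * math.sqrt(re * re + im * im) / n

tone = two_band_tone()
tilted = one_pole_lowpass(tone, cutoff_hz=500.0)
print(round(amplitude_at(tone, 3000) / amplitude_at(tone, 300), 2))     # ~1.0: balanced
print(round(amplitude_at(tilted, 3000) / amplitude_at(tilted, 300), 2))  # well below 1: high band suppressed
```

The NYT slider is essentially doing this in reverse: re-tilting the balance so the other pattern gets enough energy to win.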
G