
*Official Schiit Magni/Modi 2 (Uber) Thread*

  1. stealthshadow1
    Agreed, there are times I cannot tell the difference between headphones back to back on the same song.
    A lot of the time it seems the equipment makes the difference.
    To me, a lot of the headphones I own tend to take on the sound of the equipment I use.
    My ears are not trained to a level that lets me distinguish much above mid-fi, for sure.
    I have tried super high-end models and can't tell any difference, or hardly any.
    That does not mean there is no measurable difference in quality with high-end gear.
    What it does mean is that I can't detect it.
    I too have Schiit gear and am happy with it.
    The made-in-the-USA thing means a lot to me.
    Not just the manufacturing: the parts and boards are sourced here too.
    That holds value with me.
    That dark voice has been so tempting, but I ignore it.
    There are probably plenty of better equipment options than Schiit, and I am fine with that.
    I am personally building Schiit stacks for the office and for home.
     
  2. Argo Duck
    Scientific 'facts' in isolation are meaningless. It is models and theories that give them meaning, which varies from model to model and scenario to scenario.

    I grant the 'fact' about human echoic memory, but within what model of listening does it imply we can't reliably hear differences? And what kind of model might suggest that we might reliably detect differences, assuming anecdotal 'evidence' is allowed credibility just for argument's sake?

    How does our sophisticated and highly reliable ability to comprehend human speech - delivered with mispronunciation and in many accents, differing pitch emphases and stresses etc etc - factor into this given the 'facts' of human echoic memory? Why don't we require a refresher course every day or every few seconds in order to recognize even the simplest words?

    IOW, how does the short-term memory (STM) system factor into long-term memory (LTM)?

    Or perhaps these questions are a "blind" alley :D IDK, just putting them out there.
     
    Chris J and reddog like this.
  3. DjBobby
    Hearing small audible differences is partly a matter of training. Just because some people can't taste much difference between wines doesn't mean nobody can. I've actually met wine connoisseurs who could unmistakably describe wines, sorts of grapes, and regions of origin from a single blind tasting. Their taste memory obviously reached much further back than a few seconds. For most of us it was kind of a party trick; we couldn't taste that much. There was no magic involved: in their own words, it was a matter of training the senses for many years and building up the data bank. I would assume the same goes for listening to the sounds we call music. There are people whose ears are so sensitized to the smallest audible differences that it makes them comparable to the best wine connoisseurs. That said, those of us who don't go that far can still enjoy good listening, just as we enjoy drinking good wine. Or in reverse order, you choose. In this sense, cheers everybody.
     
    landroni likes this.
  4. StanD
    Taste, smell, sound, touch, vision, etc. are all different sensory systems, so I don't think an analogy formed between one and another is a wise means of establishing how any one of them works. We are not super beings, and as difficult as it may be for us to accept, we do have finite limits. Yes, I'm sure they vary somewhat for each individual, but I wouldn't expect huge differences. If you happen to enjoy whatever it is that you have, by all means continue to do so. Claiming to be an Übermensch is a step onto a slippery slope that will be difficult to properly substantiate.
     
  5. Koolpep
     
    I think nobody claimed übermensch-level senses. Example: I have really great far sight. Nearly every time, I am the only person who can read a signboard far away. Just don't ask me to read anything close by, LOL. Not much of a superhuman, but the auditory sense is one of the least researched, so I wouldn't be surprised to read some very new and interesting findings over the next few years.
     
     
    It's also a very well established fact that some people can hear better than others, just as some people can taste differences better than others, see better than others, feel (haptics) better than others, etc. Here is a very interesting Q&A with Thorsten Loesch from AMR/iFi:
     
     
     
    There is also some debate over the benefits of higher PCM sampling rates. Some claim, Monty Montgomery being one example (http://xiph.org/~xiphmont/demo/neil-young.html), that 192kHz is actually a step down in sound quality from lower sample rates. What is your position on higher sample rates including 192kHz and DXD?
    Well, Mr. Montgomery has a certain point, insofar that the human hearing has limitations. He may not be quite so accurate as to the actual limits of what we can hear and perceive. 

    The human hearing mechanism is a marvel. It uses entirely digital “transducers” (hair cells) coupled with an incredibly non-linear acoustic system (the ear canal, diaphragm, attached bones, sinews, etc.). It even amplifies tiny sounds using positive feedback, which, if it goes off track, is one of the causes of tinnitus. Sometimes the ear oscillates at such SPLs that a person standing next to a sufferer can hear the ringing! And then the digital signal obtained is processed with what amounts to an analogue computer (the brain) with a substantial learned response to sound. 
     
    [Image: The human auditory system illustrated (http://cdn.audiostream.com/images/4314thorsten8.jpg)]
    If human hearing were an (electro)mechanical sound recording and analysis system, it would be considered broken by design and completely useless; yet at the same time it endows us humans with the facilities for some exceptional feats of acoustic analysis (we casually call it “hearing”). In fact, we have so far not produced a viable mechanical hearing prosthesis that can be “jacked” into the nervous system, so we really do not understand the human auditory system sufficiently to replicate it mechanically.
    Indeed, human (and to a degree animal) hearing may serve equally well as an argument for and against “intelligent design”. Usually nature evolves the simplest possible solution to a given problem; extreme elaboration is extremely rare. So the extreme complexity and anti-simplicity of human hearing could only have been elaborated by an intelligent designer. Yet equally, only an utter madman would design such a Rube Goldberg-ish contraption as the human ear to equip a being with an acoustical sense, so it must have been the blind force of evolution. 
    Leaving metaphysics aside, we have evidence, for example, for the perception of ultrasonic content in music in the research of Oohashi et al. Lee/Geddes, J.J. Johnston, and many others continually push the boundaries of our knowledge of what and how we hear. Much of the cutting-edge research suggests that we both over- and underestimate the human hearing's discrimination in all domains, and that the commonly accepted limits, be it in frequency or level, are not particularly accurate. So much work still needs to be done before we can have confidence in asserting what can be heard and what cannot.
    If we look strictly at the electrical signal, it is easy to see that higher sample rates and greater word length improve the resemblance of the recorded electrical signal to the acoustic original. Coupled with suitable electronics and loudspeakers or headphones, we can certainly claim that, with higher sample rates and greater word length, we can create a sound field that more closely resembles the one present at the original acoustic event. 
    Until we have a reliable working model of the human hearing system (which means that we would no longer need amplifiers, speakers, headphones, etc., but could simply “jack into the nervous system” instead), the smart money rides on maximising the resemblance to the original acoustic event, and thus the sample rate and word length, especially as that is no longer difficult to achieve.

    Read more at http://www.audiostream.com/content/qa-thorsten-loesch-amrifi#kCmVUTwFrK0K4V8B.99
     
    Edit: highlighted two sentences in bold
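     
    As a side note to the quoted Q&A (this sketch is my own addition, not Thorsten's), the two textbook approximations behind the sample-rate and word-length argument are easy to put numbers on. They assume an ideal uniform quantizer with a full-scale sine input, so real converters will land somewhat below these figures:

    ```python
    # Rough numbers behind the sample-rate / word-length debate.
    # Two standard approximations are assumed:
    #   * Nyquist: highest representable frequency = sample_rate / 2
    #   * Ideal N-bit uniform quantizer SNR ~= 6.02 * N + 1.76 dB (full-scale sine)

    def nyquist_khz(sample_rate_hz: int) -> float:
        """Highest audio frequency (in kHz) a given sample rate can represent."""
        return sample_rate_hz / 2 / 1000

    def quantization_snr_db(bits: int) -> float:
        """Theoretical SNR of an ideal N-bit uniform quantizer, in dB."""
        return 6.02 * bits + 1.76

    if __name__ == "__main__":
        for rate in (44_100, 96_000, 192_000):
            print(f"{rate:>6} Hz -> Nyquist {nyquist_khz(rate):5.1f} kHz")
        for bits in (16, 24):
            print(f"{bits}-bit -> ~{quantization_snr_db(bits):.1f} dB SNR")
    ```

    So the engineering case for 192 kHz/24-bit is entirely about bandwidth above ~22 kHz and noise floor below roughly -98 dB; whether those margins are audible is exactly the question the interview leaves open.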
     
  6. Krutsch
    A-a-a-h-h-h ... Yawn.
     
    In thread related news, my Modi 2 Uber should arrive by Thursday and I am pumped to try it out with my Mac Mini and the 2-channel system. It's replacing a Dragonfly that I want to put back into my laptop bag for work.
     
    We now return to our regularly scheduled programming: subjectivists vs. objectivists or biology for laymen.
     
    bixby, romanesq and alazhaarp like this.
  7. chipwelder
    http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124
     
  8. Chris J
    :D

    Here's where I crank up the irony:

    Not a good idea.
    If that happened I would start spending more time in Audiophile forums.......:D
     
  9. Chris J

    Edit:
    You and I must agree to disagree.
     
  10. StanD
    How much better can it really get?
    A product comes out and everyone sings praises as to how much better it is than anything else before. And so, many enthusiasts proclaim, "I gotta have one." Eventually things calm down and everyone finds something else to get excited about.
    A year later the same company comes out with a "newer better version." Once again excitement boils over and everyone's gotta get one of these. Eventually things calm down and everyone finds something else to get excited about.
    The next year, the same scenario plays out yet once again.
    Yep, you guessed it, same story. Year after year the same story plays out, again and again. Eventually we as humans will need to be upgraded to tell the difference, or convince ourselves that we are experiencing something.
    There was a time in the distant past when Class B solid-state amps inflicted crossover distortion and high levels of IMD upon us. There was a time when tube amps had crappy output transformers and very noisy designs. We had cheap turntables with ceramic cartridges. Fast forward to today: wow, things are pretty darned good. Technology has improved by leaps and bounds; however, we are still human beings. Is it possible that year after year audio products can continue to improve with such noticeable differences, with no end in sight? I don't think so. Think about it.
     
    If you have a Modi or Magni, please continue to enjoy them.
     
  11. bikerboy94

    Good one. Enjoy your new Schiit.
     
  12. Koolpep

    Yes, enough is enough. Technological progress be damned. Why improve things? The old ones worked just great. 
     
    On Schiit related stuff, I liked the Modi2Uber better than the Modi I had, so I sold the Modi. Not sure if it sounded better, but the versatility was greatly enhanced thanks to the optical/coax inputs.
     
     
    Cheers,
    K
     
  13. derbigpr
     
     
    Your questions are not a blind alley; they're simply questions that lead me to believe you need to grab a neurophysiology book to get the answers, because at this point you're walking in circles, trying to understand something extremely complex without understanding the basics. It's like trying to understand differential equations without knowing how to count to 10 and add or subtract.

    You're oversimplifying human hearing: we don't just hear "sound" as a general phenomenon. There are various parts of the auditory cortex that deal with various types and parts of sound, speech being one of them. Yes, there is literally an area which ONLY handles words that you hear, and it is only active when you hear words, because it is only activated when the earlier parts of the pathway filter the words out and send them to that area. If you damage that part, you can't understand speech anymore. You can hear the words, but you don't understand what they mean.

    Our hearing is not highly reliable; in fact it's quite the opposite when it comes to subjective impressions, especially something as subjective as "sound quality". Words are not subjective to our brains, so you can't compare recognizing a word to judging its actual sound; the meaning of a word and the sound of a word are handled by completely different areas of the brain.

    Again, this is all really difficult to explain without going into detail that requires medical knowledge. When I studied hearing in med school, there was a section about 50 pages long in our physiology book, very densely written and covering only the most important parts, sort of like a summary, and some parts were really difficult to wrap your head around, especially where we know how it works from an observational point of view but don't understand the exact mechanism. On top of that, each part had plenty of references to other books, which means there are literally thousands of pages of books and scientific papers written on this topic. All you need is the time to read it.
     
  14. derbigpr
     
     
     
    We're pretty much at a point where big improvements in DACs are no longer possible. Soon, within a few years, cheap 100-dollar DACs will offer the same quality as DACs that cost thousands of dollars today; in fact, we're close to that now, but the high-end manufacturers won't tell you that. I mean, who would pay for their ultra-expensive DACs then? The differences between the best and entry-level digital sources nowadays are not as big as people assume. We're talking about fine changes and really small improvements.
     
    DACs have pretty much hit the ceiling; the future won't bring much in terms of their evolution. The future of headphone audio is in adapting headphones to each individual, along with lots of digital signal processing to create the illusion of listening to real sounds, not something coming out of a small speaker next to your ear. Sort of "surround sound", but done properly: something that sounds absolutely 100% real. It's very similar to video technology. We've pretty much hit the ceiling when it comes to TVs and monitors from a technical point of view: pixels are small enough, colors are ultra-precise, contrasts are infinite, refresh rates are more than good enough, etc. The direction we will go now is 3D, mainly virtual reality, applying that technical ability so that we no longer watch a flat screen that sort of looks like a window into a different reality, but instead wear glasses on our heads that really make us see and be in that different reality, or at least make it feel 100% real. Sound will go in exactly the same direction. Stereo will die out, fixed sound recordings will die out, and 3D sound recordings will become possible: 3D in the sense that we will be able to move, at will, through the virtual sound stage. In 50 years we'll be able to simulate every single sensation, including the sensation of movement while sitting still. That will lead to an incredible evolution in sound and listening to music: imagine being able to move around a virtual stage where musicians are playing, and it looks and sounds exactly as if you were really there.
     
  15. Byronb
    Excellent job of finding the silver lining!! I salute your efforts...
     
