Blue Warper (100+ Head-Fier, joined Apr 28, 2015)
It was a very big Manhattan day today.
Until today, no one at Schiit had heard the Manhattan except for me, Ivana, and Dave. Since I initially considered the notion almost two years ago, there has been little external validation of the concept other than a half-assed, undisclosed proto at the first Schiit Show. We have been working on the basic engine for the Manhattan, with the dominant effort going toward making it perfectly sonically transparent, since it involves quite a bit of processing.
Today, just to be sure, I brought it into Schiit for several new pairs of ears to audition, with no one knowing exactly what they were listening to. Out of this come three significant conclusions:
1. The Manhattan is a huge upgrade for any music which ever lived in the real world or is based upon structured music theory; that would be pop, rock, jazz, classical, etc. For monotonic or arbitrary music such as many forms of techno or rap, the benefit ranges from smaller to zero.
2. Every person who blindly heard it today has indicated a strong preference for the "Manhattan processed" output, even when using very high quality source material such as the Muddy Waters Folk Singer Mobile Fidelity Gain System recording.
3. The Manhattan Project DSP engine is successful as well as complete. Well done, Ivana!!
Dr. Ivana will make her Schiit public debut at CanJam LA.
First of all, thanks, Mike, for your very welcome update. What you said about Manhattan actually working for music "based upon structured music theory", while being negligible for the rest of it, made me feel excited, because this is what I thought Manhattan would be all about.
I think no one (I mean, perhaps not even in the labs) has ever achieved what you guys have. So congratulations for what appears as a truly legendary milestone!
What I didn't expect was Manhattan's impact on modern music like pop, rock, and jazz, which was conceived with instruments tuned the 'modern' way.
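For context on what "tuned the 'modern' way" means here: modern instruments use twelve-tone equal temperament (12-TET), which detunes every interval slightly away from its pure just-intonation frequency ratio. Schiit has said nothing about how the Manhattan engine actually works, so the snippet below is purely my own illustration of the size of those tuning deviations, not anything to do with their DSP:

```python
import math

def tet_ratio(semitones: int) -> float:
    """Frequency ratio of an interval of n semitones in 12-tone equal temperament."""
    return 2.0 ** (semitones / 12.0)

# A few just-intonation intervals: name -> (numerator, denominator, 12-TET semitones)
JUST_INTERVALS = {
    "perfect fifth": (3, 2, 7),
    "major third": (5, 4, 4),
    "minor third": (6, 5, 3),
}

for name, (num, den, semis) in JUST_INTERVALS.items():
    # Deviation of the equal-tempered interval from the pure ratio, in cents
    deviation_cents = 1200.0 * math.log2(tet_ratio(semis) / (num / den))
    print(f"{name}: 12-TET deviates {deviation_cents:+.2f} cents from just intonation")
```

The equal-tempered major third comes out about +13.7 cents sharp of the pure 5:4 ratio, which is large enough to be audible; presumably discrepancies of this kind are what a retuning processor would target.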
So, please let me ask you this question: what about human voices? Do they still sound real? I have no doubts about the quality of your DSP processing: I have no fear that your implementation won't be absolutely transparent (and your reference to MFSL's Muddy Waters is testament to that). I'm just wondering how a singer's voice can be 'autotuned' to go along with the rest of the music playing while keeping its naturalness, its timbre, and its character. Eagerly waiting to hear this marvel!
Also, similar to the above question, what about (at least analog) synthesizers? I listen to a lot of prog (70's bands like Pink Floyd, King Crimson, Yes, Genesis, and the like), so I'd be curious about the results with Manhattan in the chain in this case. But I assume Manhattan's rendering can be toggled with some switch, so one can always choose how to listen to one's music.
Finally, for your public debut at CanJam LA, do you plan to have some Manhattan 'box', or will you still be doing the DSP math on a PC for that event?
Thanks for your update and your (possible) reply!