Manhattan update:
I finally have something to update. The very first proof-of-concept Manhattan prototype that we covertly brought to some shows and meets has been replaced! The problem with that proto was an unintended consequence: undesired parameter alteration, resulting in excessively expensive hardware requirements. So we hired Dr. Alphabet (I still can't pronounce her last name – Ivana is much easier), and now we have a 2nd-generation algorithm which runs properly – no unintended consequences save the occasional crash (a 2nd-derivative glitch, she reports). Because it is an algorithm in search of hardware, it runs only on a BSD OS computer; it is not yet suitable for prime time.
It is also a clunky thing to use, as I have to rip a track (PCM only), process the three-minute track, then play the processed track. Since it is still not buffered, processing takes the full three minutes of the track plus the latency of the algorithm itself (1–3 seconds – not suitable for home theater, thank God!)
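For anyone curious why buffering matters so much here, a toy back-of-the-envelope calculation: with unbuffered offline processing you must wait out the entire track before hearing anything, while a buffered streaming version would only need its first block plus the algorithm's own latency. This is not Mike's actual code – the buffer size and real-time factor below are purely illustrative assumptions.

```python
# Toy timing sketch (hypothetical numbers, not the Manhattan code).
# Compares time-to-first-sound for unbuffered offline processing
# versus a hypothetical buffered/streaming DSP implementation.

TRACK_SECONDS = 180.0     # a three-minute PCM track
ALGO_LATENCY = 2.0        # algorithm latency, 1-3 s per the post
BUFFER_SECONDS = 0.5      # assumed block size for a streaming version
REALTIME_FACTOR = 1.0     # assume processing runs at 1x real time

def time_to_first_sound_offline(track_s, latency_s, rt=REALTIME_FACTOR):
    """Unbuffered: the whole track must be processed before playback."""
    return track_s * rt + latency_s

def time_to_first_sound_buffered(buffer_s, latency_s, rt=REALTIME_FACTOR):
    """Buffered DSP: only the first block must finish before audio starts."""
    return buffer_s * rt + latency_s

if __name__ == "__main__":
    offline = time_to_first_sound_offline(TRACK_SECONDS, ALGO_LATENCY)
    buffered = time_to_first_sound_buffered(BUFFER_SECONDS, ALGO_LATENCY)
    print(f"offline wait:  {offline:.1f} s")   # 182.0 s
    print(f"buffered wait: {buffered:.1f} s")  # 2.5 s
```

Under these assumed numbers the wait drops from over three minutes to a couple of seconds, which is presumably why getting it buffered and into a DSP processor is the next step.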
So here comes the tease: OMG wait until you hear this. There is no way to describe it other than the perception shifts in a uniquely seductive way. The next step is to get this buffered and running in a DSP processor so we can make it portable and expose it to others just to make sure the four of us here aren't crazy.
Thanks, Mike, for this exciting update!
Some weeks ago, after thinking it over again and again, I believe I had a l33t epiphany, and I felt confirmed in my own speculation about what I *think* Manhattan might be.
Now, as I learn you still can't pronounce Dr. Ivana's last name and so resorted to calling her Dr. Alphabet (for now), I can't help being thrilled (and even moved) about this truly groundbreaking achievement by your team. I've been wishing for a long time to hear what music can sound like when it is played according to those parameters you and Dr. Ivana are trying to design algorithms for.
What you are achieving is a real milestone in music playback. I really wonder how you could even make it come into being... Then I think the Manhattan name is even more fitting, 'cause, if I understand correctly, you are really 'disassembling', so to say, the digital representation of a musical recording – breaking it up into its basic constituent pieces (down to the bit level?) in order to reconstruct and rebuild it in that new shape your team designed algorithms for, with the help of Dr. Ivana Alphabet's mathematical and musical skills. A bit like the original Manhattan Project dealt with obtaining a lot of energy from the basic constituents of matter.
Now, if my assumptions are correct, I don't want to say more, 'cause you know when and what to say, and I don't want to spoil anything. All I can add is that if it won't be you and your team who succeed and make this happen, then I feel it won't be anyone else (well, at least not any time soon). This is a very special combination of mathematical, engineering and musical knowledge, skills and passion that might be so unique as to arise perhaps once in a century.
Mike, you know digital audio like maybe no one else, you and Jason are first-class engineers, you've got Dave's programming talent, and now you've got Dr. 'Ivana Alphabet' to help you with the hard math. In addition, you don't only like music, but you know music theory AND you can play music too (IIRC).
You can change musical playback forever and even overturn current established musical tenets (which are maybe based, borrowing from you, on factoids rather than facts?). So please go for it!
Peace to you!
NB - Other, more technical questions will follow. I just wanted to say thank you for all your wonderful work!