What a long, strange trip it's been -- (Robert Hunter)
Jan 18, 2017 at 5:21 PM Post #1,711 of 14,564
Baldr we've always known the four of you are crazy :p

belgiangenius TBH I'm not sure Schiit have ever made an issue of "bitperfect". They have talked about the advantages of not throwing away original samples in the reconstruction process, which is a different thing. I could be remembering this wrong though!

edit: oops, looking forward to Baldr's reply
 
Jan 18, 2017 at 5:36 PM Post #1,712 of 14,564
 
The Giulini Figaro is the classic, though the Erich Kleiber (also a classic, tbh), Colin Davis, Karl Böhm, John Eliot Gardiner, René Jacobs, and Nikolaus Harnoncourt all produce excellent readings.
 
Teodor Currentzis recently recorded the opera in Russia, and it's done with such verve that it has to be my current favorite.


I have a soft spot in my heart for the Jimmy Levine/Met/te Kanawa/Upshaw version.  With ladies like that, the men almost don't matter, but they are quite serviceable. 
 
Dawn Upshaw on Gorecki's Third is so moving, it inspires me to keep building audio gear.  It is QC on everything I build.


I just listened to "Dawn Upshaw on Gorecki's Third". It's the first opera album I have ever listened to, and boy was it moving! I guess this means I have some catching up to do on the opera side of things.
Yggdrasil arrives tomorrow, so I will probably listen to it again. I've heard rumors that it makes music sound better than my Modi 2u does, so I guess I will hear for myself.
 
Jan 18, 2017 at 6:10 PM Post #1,715 of 14,564
Maybe Mike should forget advancing the state of the art in digital audio and start driving a taco truck around LA like he's always wanted.
 
Bbbbbbbbut if he'd rather make the Manhattan, I'm sure I wouldn't object to beta-testing its effect on Wagner. Are its effects on period recordings different from those on pristine contemporary recordings?

 
Jan 18, 2017 at 6:29 PM Post #1,716 of 14,564
  So here comes the tease: OMG wait until you hear this. There is no way to describe it other than the perception shifts in a uniquely seductive way. The next step is to get this buffered and running in a DSP processor so we can make it portable and expose it to others just to make sure the four of us here aren't crazy.

 
Ha!  I hate to be the one to break it to you, but you may already have a fail on your last premise. 

 
Jan 18, 2017 at 6:59 PM Post #1,718 of 14,564
 
So the final product sounds like a DSP box that you plug in between your transport and your DAC, and that applies some kind of effect to the sound.
 
...but that would be the end of bitperfect, no?

 
As someone with zero knowledge of technical things/terms, I have no idea what this means - would anyone be able to "translate" it into layman's terms? I sort of grasp the concept of a DSP, but what does it mean exactly? I've googled around, but it's still hard for me to process, as English is not my first language and I've never had any training in technical terminology.
 
Jan 18, 2017 at 7:31 PM Post #1,719 of 14,564
   
As someone with zero knowledge of technical things/terms, I have no idea what this means - would anyone be able to "translate" it into layman's terms? I sort of grasp the concept of a DSP, but what does it mean exactly? I've googled around, but it's still hard for me to process, as English is not my first language and I've never had any training in technical terminology.

 
It means that, up till now, Manhattan required pre-processing the content offline - in its entirety - before playback.
 
A DSP is an embedded, low-power, (potentially) low-cost processor that makes it possible to build a real-time digital box that performs some signal processing - audio processing, in this case.
It will allow the Manhattan algorithm to run live, in a box that the user can insert between the audio source and the DAC. Kind of like a miniDSP box, in which the user can implement all sorts of digital audio processing: room correction, crossovers, filters, etc.
Manhattan will implement brain (or mood) correction, which is much more powerful!
 
The algorithm adds a rather long delay (1-3 s) to the audio path, however, which makes it unsuitable for home-theater use (the audio will be out of sync with the video).
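Nobody in the thread shared code, but the latency point is easy to sketch: a real-time box whose algorithm needs 1-3 s of lookahead has to keep a delay line, so every output sample leaves that much later than its input arrived. A toy Python illustration (the delay-line shape and names here are my assumption, not Schiit's implementation):

```python
from collections import deque

def make_delay_line(delay_samples, process=lambda x: x):
    """Return a per-sample function with a fixed latency of
    delay_samples, mimicking a real-time box that must buffer
    audio before its algorithm can emit output."""
    buf = deque([0.0] * delay_samples, maxlen=delay_samples)  # primed with silence

    def push(sample):
        out = buf[0]                 # the oldest buffered sample leaves...
        buf.append(process(sample))  # ...as the processed new one enters
        return out

    return push

# Toy demo: a 3-sample delay. The first 3 outputs are the primed
# silence; the input then re-emerges 3 samples later.
push = make_delay_line(3)
outputs = [push(s) for s in [1.0, 2.0, 3.0, 4.0, 5.0]]
# outputs == [0.0, 0.0, 0.0, 1.0, 2.0]
```

At 44.1 kHz, a 1-3 s delay is a buffer of roughly 44,100-132,300 samples per channel, which is why the video lip-sync problem is unavoidable without delaying the video to match.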
 
Jan 18, 2017 at 7:44 PM Post #1,720 of 14,564
Example of Manhattan's audio conversion in action:

(Spouse) "No you CANNOT buy Schiit's next piece of audio gear!" --> Manhattan --> "Darling, of course you can buy as much Schiit as you like :)"
 
Jan 18, 2017 at 7:52 PM Post #1,721 of 14,564
 
It means that, up till now, Manhattan required pre-processing the content offline - in its entirety - before playback.
 
A DSP is an embedded, low-power, (potentially) low-cost processor that makes it possible to build a real-time digital box that performs some signal processing - audio processing, in this case.
It will allow the Manhattan algorithm to run live, in a box that the user can insert between the audio source and the DAC. Kind of like a miniDSP box, in which the user can implement all sorts of digital audio processing: room correction, crossovers, filters, etc.
Manhattan will implement brain (or mood) correction, which is much more powerful!
 
The algorithm adds a rather long delay (1-3 s) to the audio path, however, which makes it unsuitable for home-theater use (the audio will be out of sync with the video).

 
This is exactly what I was looking for - thanks so much for taking the time to put that together and respond to my query. I understand the basics of the device now, and I'm definitely looking forward to it becoming a reality. I can't wait to experience something like that in person; going by the mental visualization of the effect that Mike provided a few posts (or pages) back, it sounds amazing.
 
Example of Manhattan's audio conversion in action:

(Spouse) "No you CANNOT buy Schiit's next piece of audio gear!" --> Manhattan --> "Darling, of course you can buy as much Schiit as you like :)"

 
LOL, now THAT is revolutionary.
 
Jan 18, 2017 at 8:17 PM Post #1,722 of 14,564
I was curious -- are there plans to release / have you already released any whitepapers detailing any aspects of your non-Parks-McClellan filter design? Parks-McClellan is basically what most digital filter designers use these days (at least for FIR / Chebyshev), as it optimizes passband/stopband ripple to what is understood to be the "optimal" solution. Obviously, this is your secret Multibit sauce, but a whitepaper would be super interesting (from my POV). I was lucky enough to have a professor in grad school who studied directly under McClellan, so we spent a lot of time on the material, and I'd never really thought about doing filter design any other way.
 
Also curious whether the Manhattan project requires graduating to an FPGA to handle the complexity of your DSP design; I imagine at this point you have a stable BSP for the SHARC and quite a bit of experience with it, but I suppose we'll all find out soon...
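For readers who haven't met Parks-McClellan: it is the Remez-exchange algorithm for designing equiripple FIR filters, and SciPy exposes it as `signal.remez`. A minimal lowpass sketch (the sample rate, band edges, and tap count here are arbitrary choices for illustration, not anything Schiit uses):

```python
import numpy as np
from scipy.signal import remez, freqz

# Parks-McClellan (Remez exchange) lowpass: the error ripples
# with equal height across the passband and stopband, which is
# the "optimal" ripple trade-off the post refers to.
fs = 48000.0
taps = remez(numtaps=101,
             bands=[0, 8000, 10000, fs / 2],  # pass edge / stop edge, in Hz
             desired=[1, 0],                  # target gain in each band
             fs=fs)

# Linear phase falls out for free: the impulse response is symmetric.
w, h = freqz(taps, worN=4096, fs=fs)
stopband_peak = np.max(np.abs(h[w >= 10000]))  # equiripple stopband level
```

With a 2 kHz transition band and 101 taps, the stopband attenuation comes out very deep; shrinking the transition band or the tap count trades attenuation away, which is exactly the ripple-versus-length trade-off the algorithm optimizes.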
 
Jan 18, 2017 at 8:27 PM Post #1,723 of 14,564
  I was curious -- are there plans to release / have you already released any whitepapers detailing any aspects of your non-Parks-McClellan filter design? Parks-McClellan is basically what most digital filter designers use these days (at least for FIR / Chebyshev), as it optimizes passband/stopband ripple to what is understood to be the "optimal" solution. Obviously, this is your secret Multibit sauce, but a whitepaper would be super interesting (from my POV). I was lucky enough to have a professor in grad school who studied directly under McClellan, so we spent a lot of time on the material, and I'd never really thought about doing filter design any other way.
 
Also curious whether the Manhattan project requires graduating to an FPGA to handle the complexity of your DSP design; I imagine at this point you have a stable BSP for the SHARC and quite a bit of experience with it, but I suppose we'll all find out soon...

 
Ditto, studied applied physics with an emphasis on signal theory, so would also be interested to learn more about the math.
 
Jan 18, 2017 at 9:14 PM Post #1,724 of 14,564
Good question -- but it takes some writing time to address.  I will get to it.


Allow me... Single-bit conversion is an approximation of the original signal, with the implication that it may be, and most likely is, less intricate than the original sample. Manhattan, by contrast, is a deliberate distortion, an 'alternative' relationship among the interacting elements of the music as pitched, which may awaken subliminal rhythmic responses on a primal or synergistic level. Or not.

But then, what the hell do I know? :wink:
 
