I'll ramble a bit about my understanding. Feel free to correct me, etc.
In principle, a DDC just converts one digital audio signal to another. I²S is what basically any DAC uses internally, so essentially every DAC has a DDC inside already.
Most external signals consist of a single bit stream that mushes everything together, e.g. the samples for both channels arrive in sequence (not in parallel), the clock is embedded as well, etc. S/PDIF is a good example:
[image: S/PDIF subframe layout]
That's 32 bits for just one channel's sample, meaning the DAC can't even do anything until it has received at least 64 bits.
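A minimal sketch of that framing, assuming the standard AES3/S-PDIF subframe layout (4 sync slots, 4 auxiliary bits, a 20-bit sample LSB-first, then validity/user/channel-status/parity). The field names are mine, and real S/PDIF is additionally biphase-mark encoded on the wire, which this ignores:

```python
def parse_spdif_subframe(bits):
    """Split a 32-slot S/PDIF subframe (list of 0/1, slot 0 first)
    into its fields. Only illustrates the framing, not the line coding."""
    assert len(bits) == 32
    preamble = bits[0:4]    # sync pattern; also marks left/right/block start
    aux      = bits[4:8]    # auxiliary bits (or LSBs of a 24-bit sample)
    sample   = bits[8:28]   # 20-bit audio sample, LSB first
    validity, user, status, parity = bits[28:32]
    value = sum(b << i for i, b in enumerate(sample))  # LSB-first bits to int
    return preamble, aux, value, validity, user, status, parity
```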
My rough understanding is that the DAC wants separate streams for each channel that only contain the respective sample data, and another stream for the clock, and whatever else. That's why I²S uses HDMI or RJ45, because those connectors and cables have multiple conductors, needed for the multiple parallel signals. So it converts the external signal to what it needs internally. DACs with I²S allow you to bypass that internal conversion and replace it with something external.
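A toy illustration of that serial-to-parallel step a receiver or DDC performs (names are mine, nothing here is a real protocol implementation):

```python
def deinterleave(samples):
    """One interleaved stream (L, R, L, R, ...) becomes two per-channel
    streams plus a word-select line saying which channel each slot belongs
    to - roughly what I2S's LRCK line does."""
    left  = samples[0::2]
    right = samples[1::2]
    word_select = [i % 2 for i in range(len(samples))]  # 0 = left, 1 = right
    return left, right, word_select

# e.g. deinterleave([0.1, -0.1, 0.2, -0.2])
#   -> ([0.1, 0.2], [-0.1, -0.2], [0, 1, 0, 1])
```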
But there are various ways in which a dedicated DDC might help, or even cause issues, with or without I²S. One important aspect is clocking: it's not just important that the DAC sees the correct bits (samples), it also has to change its output voltage at the right time. That timing is only implied in the raw data and has to be made real somehow: the file says 44,100 Hz, so the DAC has to adjust its output voltage every 1/44,100 seconds, and the more it deviates from that ideal timing, the more likely the result is measurable and eventually audible differences.
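To put rough numbers on "measurable differences": for a sine wave, a timing error Δt causes a worst-case amplitude error of about 2π·f·Δt relative to full scale (the signal's maximum slope times the timing error). A back-of-the-envelope sketch:

```python
import math

def worst_case_error_db(freq_hz, jitter_s):
    """Worst-case error of a full-scale sine caused by a sampling-instant
    error `jitter_s`, relative to full scale, in dB. Small-angle
    approximation: error ~= 2*pi*f*dt (largest at the zero crossing)."""
    err = 2 * math.pi * freq_hz * jitter_s
    return 20 * math.log10(err)

# 1 ns of timing error on a 20 kHz tone:
print(worst_case_error_db(20_000, 1e-9))   # ~ -78 dB below full scale
```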
That alone already divides digital inputs into two categories:
- Asynchronous, meaning the DAC dictates the timing (most USB inputs, but in principle also DACs that have direct access to the files played, e.g. with an SD card, built-in streamer, etc.)
- Synchronous, i.e. the samples come in when they come in and the DAC has to keep up - some USB inputs, S/PDIF, I²S...
Clocks can be made very accurate with a lot of effort - ovenized oscillators for temperature control, atomic references such as rubidium, quality crystals, good power supplies, etc. - but the further away the clock is from the device being clocked, the more of that accuracy you may lose again to interference, cable reflections due to impedance mismatches, and so on.
Theoretically the best scenario is the best possible clock right inside the DAC and an asynchronous interface, so that the DAC can request the samples exactly when it needs them. A synchronous source can be okay as well, since short-term inaccuracies can be removed by buffering (at the cost of latency) and internal reclocking, and small long-term inaccuracies (say playback at 44,099 or 44,101 Hz) are, as far as I know, inaudible.
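A toy sketch of that buffering idea, just to make the trade-off concrete: samples arrive with jittery timing, go into a FIFO, and are read out on a steady local clock, so short-term jitter disappears at the cost of the prefill latency (while a long-term rate mismatch slowly drains or overfills the buffer):

```python
from collections import deque

class Reclocker:
    """Toy FIFO reclocker: absorb short-term arrival jitter by buffering,
    then emit samples on the local clock. If the source's long-term rate
    differs from the local clock's, the FIFO eventually under- or overruns,
    which is why asynchronous interfaces (DAC pulls the data) avoid the
    problem entirely."""
    def __init__(self, prefill):
        self.fifo = deque()
        self.prefill = prefill   # latency we accept to absorb jitter
        self.started = False

    def push(self, sample):      # called whenever a sample arrives
        self.fifo.append(sample)

    def tick(self):              # called once per local clock period
        if not self.started:
            if len(self.fifo) < self.prefill:
                return None      # still filling the buffer
            self.started = True
        return self.fifo.popleft() if self.fifo else None  # None = underrun

rc = Reclocker(prefill=4)
for s in range(8):
    rc.push(s)
print([rc.tick() for _ in range(6)])   # [0, 1, 2, 3, 4, 5]
```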
But the best clocks are very expensive, so most DACs have room for improvement. An externally clocked signal + loss of accuracy due to distance can still be better than what the DAC can do internally, though in some cases it might not matter - if the DAC buffers the incoming signal anyway, it potentially replaces the great external timing with its own, mediocre one (but that's not the whole story, see below).
Another area is noise isolation. Irrelevant for TOSLINK (optical), but anything electrical might in principle pick up noise in addition to the raw data, possibly passing that through to the output directly, or maybe degrading the DAC's performance somehow (just guessing here, I'm not an electrical engineer). A DDC might simply be better at isolating that noise and sending the same bits along, without improving the timing, and still it sounds better because of the noise reduction - but a DDC also introduces its own noise, which may be worse than the source's, so it's not guaranteed to improve things.
And then there's switching noise, supposedly, where even with internal reclocking a poorly clocked external signal may make the DAC's receiver work harder, affecting the DAC's overall performance, so giving it a better timed signal helps with that - at least that's a narrative I've read about several times.
In some cases a DDC can also help with quirks of a DAC. For instance, the Yggdrasil mutes itself every time it loses the signal, and takes some time to be sure it has reacquired a signal before unmuting again. The MUTEC MC-3+ USB will simply send null samples when it loses the signal, so when switching between sources with identical sample rates the Yggdrasil doesn't even notice. No clicking, no muting.
With the R26 the sad thing is that it has to synthesize the clock from the internal or external reference, and with 10 MHz I assume that means it essentially has its own clock that it just synchronizes to the external signal.
If you count every 227th tick of a 10 MHz signal you get a ~44,052.9 Hz signal - far from ideal. Something inside the DAC has to make up the difference to 44,100 Hz, so there should be limits to how much an external clock can help here. I suspect there's an internal clock that gets synced to the external one, for instance by adjusting its speed so that the internal clock ticks 441 times for every 100,000 ticks of the external clock, which means the clock synthesizer's own phase noise should dominate over the external clock's (hopefully I got the terminology right).
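The arithmetic, for what it's worth (the 441:100,000 ratio is my own illustration of the fractional relationship, not knowledge of the R26's actual synthesizer):

```python
from fractions import Fraction

ref = 10_000_000    # external 10 MHz reference
target = 44_100     # desired sample clock

n = round(ref / target)        # nearest integer divider
print(n, ref / n)              # 227, 44052.86... Hz: about 0.1% slow

print(Fraction(target, ref))   # 441/100000: the internal clock has to tick
                               # 441 times per 100,000 reference ticks, so a
                               # fractional synthesizer is unavoidable
```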
The Denafrips approach with 45.1584 MHz and 49.152 MHz signals seems easier to make use of (quick arithmetic check after the list):
- Count every 1024th tick of a 45.1584 MHz signal and you get 44.1 kHz
- Count every 1024th tick of a 49.152 MHz signal and you get 48 kHz
- Count every 256th tick of a 49.152 MHz signal and you get 192 kHz
- Count every 16th tick of a 45.1584 MHz signal and you get 2.8224 MHz (DSD)
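A quick sanity check of those dividers (the divider values are inferred from the rates themselves, not taken from Denafrips documentation):

```python
# Each Denafrips master-clock rate divides down to the target rate with a
# plain integer divider - no fractional synthesis needed:
for mclk, div, label in [
    (45_158_400, 1024, "44.1 kHz PCM"),
    (49_152_000, 1024, "48 kHz PCM"),
    (49_152_000,  256, "192 kHz PCM"),
    (45_158_400,   16, "2.8224 MHz DSD64"),
]:
    assert mclk % div == 0     # divides evenly
    print(f"{mclk / div:>12,.1f} Hz  <-  {mclk} / {div}  ({label})")
```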
Which one is better I don't know - maybe the clock synthesizer is more accurate than an external clock plus transmission-induced inaccuracies, maybe not, and maybe it depends on the specific devices and cables used, the RF situation, etc.
If the R26 doesn't reclock I²S signals, it's not hard to imagine that a DDC providing an I²S signal could result in more of an improvement than a 10 MHz clock, at least in NOS mode... but when the R26 upsamples the signal, it needs a faster clock than what's coming in to change its output voltage often enough, so the clock synthesizer probably comes into play again.
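Some quick numbers on that, assuming a hypothetical 8x upsampling factor (I don't know what the R26 actually uses):

```python
fs_in = 44_100            # incoming sample rate
factor = 8                # hypothetical upsampling factor
fs_out = fs_in * factor   # 352,800 Hz: the output must now change this often

# A Denafrips-style 45.1584 MHz master clock still divides cleanly:
print(45_158_400 / fs_out)   # 128.0 -> integer divider, no synthesis needed
# A 10 MHz reference still doesn't:
print(10_000_000 / fs_out)   # ~28.34 -> fractional, synthesizer required
```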
For S/PDIF maybe a faster clock could be extracted, since the protocol's bit rate is 64x the payload's sample rate for two channels (at 44.1 kHz that's a 64 × 44,100 = 2.8224 MHz bit clock).
Bottom line is that we have to try and compare. I'd sure like to know more about the exact inner workings of the R26, how the clock synthesizer works and how it's used, otherwise it's hard to reason what should and shouldn't result in an improvement.