The flaw in your logic is in assuming that the transmission & receiving methods are themselves without blemish (or immune from external influences), and that therefore, if any differences exist in the sound of different cables, the fault lies with the DAC.
Regarding DACs, I only mentioned Toslink because of its rather simple cable nature. There's no wire needing isolation, and the light itself is either strong enough to be recognized as a valid digital signal, or not. Different Toslink cables of different lengths and quality produce different light intensities (due to heavy losses at the specified fiber diameter and light frequency), but if the difference in light intensity actually makes the cables sound different, then I blame the receiver for not being able to do its job properly.
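The "strong enough or not" point can be sketched as a simple threshold receiver. This is a toy model, not how a real Toslink receiver is built; the threshold value and the attenuation figures below are made up for illustration:

```python
# Toy model of a threshold-based optical receiver: a bit is recovered
# by comparing the received light intensity against a decision level.
# As long as cable loss keeps a "1" above that level, every cable
# yields the exact same bits, even though the analog intensity differs.

THRESHOLD = 0.5  # arbitrary decision level for this sketch

def recover_bits(intensities):
    """Decode each received intensity sample into a bit."""
    return [1 if level > THRESHOLD else 0 for level in intensities]

bits = [1, 0, 1, 1, 0]

# Hypothetical cables: a short one with little attenuation,
# a long one with heavy attenuation.
good_cable = [b * 0.9 for b in bits]
long_cable = [b * 0.6 for b in bits]

assert recover_bits(good_cable) == bits
assert recover_bits(long_cable) == bits  # same data despite weaker light
```

The sketch only captures amplitude; it deliberately ignores timing (jitter), which is the usual counterargument for why "the bits arrive intact" doesn't end the discussion.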
Come to think of it, I use this logic elsewhere, so it may be just me. If your computer doesn't have a good firewall, is exposed to the Internet, and somebody gets in and removes your data, it's your fault. If somebody gets into your email account because of a weak password, it's your fault. If somebody kills you at night during a walk, it's your fault for not being able to defend yourself. If the sending side meets the voltage requirements needed to distinguish a "0" from a "1", it's your fault (as a DAC) for being unable to filter out whatever extra noise comes along with the signal.
However, I agree that doing all this in the DAC itself would just overcomplicate things. As I wrote elsewhere, a better interface between the digital source and the DAC would, in my opinion, be a much better solution. It would basically remove the need for expensive digital sources.
If the digital transmission/receiving method itself introduces (or can introduce) some variance, then is it the DAC that is at fault? Can these cited transmission methods be made flawless in the DAC?
If they could easily be made flawless, this discussion probably wouldn't exist.
My logic is that as long as the rest of the chain does what it's supposed to do and transfers the digital signal according to the relevant standards, yet the DAC still needs some extra requirements to perform optimally, it's the fault of the DAC for not being able to take care of itself.
If a TV were to display faded colors with an inexpensive digital cable and beautiful colors with an expensive one, I would still blame the TV for not being able to work properly.
If you want a better understanding of the inner workings & real-world obstacles in digital audio, you could do worse than read John Swenson's recent articles about this whole area:
Part 1:
http://www.audiostream.com/content/qa-john-swenson-part-1-what-digital
Part 2:
http://www.audiostream.com/content/qa-john-swenson-part-2-are-bits-just-bits
Part 3: To follow, I believe
Seems interesting, thanks; I'll read it thoroughly later.
The "digital is in fact analog" argument that's often brought up isn't all-explaining either. It explains why analog interference can be transmitted over a "digital link", but the data themselves still use digital logic; the information transferred is still digital.
The articles also seem to make some fair points:
Most modern DAC chips have a lot of stuff inside creating a lot of noise on the internal power and ground traces, which pretty much nullifies that ultra low jitter clock we are sending it. This is probably why a large number of people have a hard time hearing differences caused by changes to things such as jitter and noise. The effects caused by them are being swamped by the jitter generated inside the chip.
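The jitter point in that excerpt can be put into rough numbers with a textbook back-of-the-envelope formula: if a sample meant for time t is actually converted at t + dt, the amplitude error is approximately the signal's slew rate times dt, and for a sine wave the worst-case slew rate is 2*pi*f*A. The frequencies and jitter figures below are hypothetical, chosen only to show the scale of the effect:

```python
import math

def worst_case_error(freq_hz, amplitude, jitter_s):
    """Worst-case amplitude error caused by a timing error of jitter_s
    seconds when converting a sine of the given frequency and amplitude.
    Derived from error ~= slew_rate * dt, with max slew = 2*pi*f*A."""
    return 2 * math.pi * freq_hz * amplitude * jitter_s

# Hypothetical case: full-scale (A = 1 V) 20 kHz tone.
e_1ns   = worst_case_error(20_000, 1.0, 1e-9)     # 1 ns of jitter
e_100ps = worst_case_error(20_000, 1.0, 100e-12)  # 100 ps of jitter

# Roughly 126 uV vs. 13 uV of error per volt of signal amplitude --
# a tenfold reduction in jitter buys a tenfold reduction in error.
print(e_1ns, e_100ps)
```

Which is exactly why an ultra-low-jitter clock at the input buys nothing if noise inside the DAC chip re-introduces timing error downstream, as the quoted passage argues.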