Yes, exactly. His theory is OK in principle: if he could record two analogue versions of the track via different cables, slide them continuously against each other, and subtract once they aligned, there would be a fairly deep (though not, I think, zero) null.
However, once he samples and digitises the analogue DAC output, he can only align them at discrete sample instants. The sampling clock will not align with the audio waveform unless he takes specific measures, so he will inevitably see a non-null comparison purely as a result of the method.
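To illustrate the point, here is a minimal sketch (my own toy example, not his actual setup) of why integer-sample alignment can't null out a fractional-sample clock offset. Two captures of the identical waveform are simulated, the second offset by 0.37 of a sample period; the tone frequency, sample rate, and offset are all assumed values chosen for illustration:

```python
import math

FS = 48_000   # sample rate, Hz (assumed)
F = 1_000     # test tone, Hz (assumed)
N = 4_800     # 0.1 s of samples

def capture(frac_offset):
    # Sample the same 1 kHz sine; frac_offset is a fraction of one
    # sample period, standing in for an unsynchronised sampling clock.
    return [math.sin(2 * math.pi * F * (n + frac_offset) / FS)
            for n in range(N)]

a = capture(0.0)
b = capture(0.37)  # identical waveform, clock offset by 0.37 sample

# Best integer-sample alignment here is zero shift, so subtract directly.
residual = [x - y for x, y in zip(a, b)]
rms_res = math.sqrt(sum(r * r for r in residual) / N)
rms_sig = math.sqrt(sum(x * x for x in a) / N)

# Integer-sample alignment cannot cancel a fractional-sample timing
# difference, so the "null" bottoms out well above zero.
print(f"residual relative to signal: "
      f"{20 * math.log10(rms_res / rms_sig):.1f} dB")
```

With these numbers the residual sits around -26 dB relative to the signal, even though the two "recordings" are of exactly the same waveform; only fractional-delay resampling before subtraction would take the null deeper.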
When he tested the method by recording one cable twice and comparing the two captures, it showed a non-null result with the same cable, as I expected, precisely because of this problem with the test method. But he proceeded to blame the cable rather than consider that it might have been his method. He is wrong to conclude what he concluded. There is no valid conclusion here other than that his method may have a defect he hasn't accounted for well enough. I actually think it does have one, not may have, but I will listen to argument.