Dynamic range and distortion levels are not the only measures of fidelity; timing is also critical. Sampling theory says that the original waveform can be reproduced perfectly, provided the original signal is band-limited to half the sampling frequency. That's fine, most people respond, because we can't hear anything above 20 kHz anyway.
However, this misses a fundamental point: music is not a continuous sine wave at a fixed frequency, but a series of transient impulses followed by decay. When sampling at 44.1 kHz (for example), the sampling interval is roughly 22.7 µs. A transient, e.g. a drum thwack, occurring during that gap will not itself be sampled, although its decay will be. The leading edge of most notes (a plucked string, a piano hammer striking the string) is statistically likely to fall within the gap rather than at the exact sampling instant. This means that a large amount of important information, regardless of the sampling frequency, is never recorded. It also means that digital recordings, particularly early ones made with 44.1 kHz ADCs, do not sound quite natural: important musical information was lost during the recording process and can never be recovered.
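The arithmetic above can be sketched in a few lines of code. This is a minimal illustration, not a model of a real ADC: it assumes an idealised transient with an instantaneous attack and exponential decay, and simply evaluates it at the 44.1 kHz sample instants to show that the attack itself can fall between two of them.

```python
import math

FS = 44_100          # CD sampling rate in Hz
INTERVAL = 1.0 / FS  # time between samples: ~22.7 microseconds

def sample_transient(onset, decay_tau=0.005, n=5):
    """Sample an idealised transient (instant attack, exponential decay
    with time constant decay_tau seconds) that begins at time `onset`.
    Returns the first n sample values taken at instants k/FS."""
    out = []
    for k in range(n):
        t = k * INTERVAL
        out.append(math.exp(-(t - onset) / decay_tau) if t >= onset else 0.0)
    return out

print(round(INTERVAL * 1e6, 1))        # sampling interval in µs, ~22.7
# Attack begins 10 µs after sample 0: sample 0 reads zero, and the
# first nonzero sample lands roughly 12.7 µs after the true onset.
print(sample_transient(onset=10e-6))
```

Whether the samples between the instants can still reconstruct the waveform is exactly what the bandwidth-limiting argument in the first paragraph turns on; the sketch only shows where the sample instants land relative to the attack.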
Analogue recording and playback are of course far worse than digital on most parameters, but they do seem able to reproduce transient information in a more natural way. Someone once defined music as "the organisation of time", and if vinyl reproduces the timing of musical information more accurately and more completely, in my opinion that is why it sounds more "real" than digital.