I am curious to have a professional explain it to us. An FPGA is a programmable digital device. As such, it can do all the digital aspects of a DAC chip: filtering, segmentation, etc. But it can't produce an analog output; at the end of the day, there has to be an output stage that actually produces the "music." The only exception to this I know of is DSD decoding, where the nature of the digital format makes the transition to analog very simple. This is what PS Audio does.
My understanding of the Chord "magic" is the very high pole count reconstruction filter that is claimed to combine the best aspects of the "stock" filters that are commonly available. As the processing power of FPGAs increased, Chord made their filters more complex and ostensibly better.
FPGA processing of digital input data is actually quite common in the DAC world. My understanding is that Chord takes it to a much more complex level.
Modern monolithic DACs are actually VERY complex devices that use sophisticated techniques to achieve their very high S/N and bit depth. It is VERY difficult to make a discrete DAC with similar performance.
First up, I'm a professional DSP engineer, not a chip designer or FPGA programmer, so I can cover some of this, but not all of it.
The FPGA can be used as a DAC simply because it can produce a bitstream: a fast-switching digital output that represents voltages by bit density, i.e. by varying the proportion of high and low bits (PDM, see https://en.wikipedia.org/wiki/Pulse-density_modulation).
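To make the bit-density idea concrete, here's a minimal sketch (plain Python, names my own) that encodes a value between 0 and 1 as a PDM bitstream using a simple error accumulator; the fraction of 1s in the stream approximates the value:

```python
def pdm_encode(value, n_bits):
    """Encode a value in [0, 1] as a PDM bitstream of n_bits bits.

    An accumulator adds the target value each tick; whenever it
    crosses 1 we emit a 1 and subtract 1, otherwise we emit a 0.
    The density of 1s in the stream approximates the input value.
    """
    bits = []
    acc = 0.0
    for _ in range(n_bits):
        acc += value
        if acc >= 1.0:
            bits.append(1)
            acc -= 1.0
        else:
            bits.append(0)
    return bits

stream = pdm_encode(0.7, 1000)
density = sum(stream) / len(stream)  # ~0.7: the "average voltage" of the stream
```

Lowpass-filter that stream (an analog RC filter will do) and you recover the voltage, which is why the digital-to-analog step can be so simple.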
For 16-bit audio at 44.1 kHz, without clever noise shaping, you basically double the sample rate for each bit you drop, so 15-bit @ 88.2 kHz can represent the same voltages as 16-bit @ 44.1 kHz. At a basic level, then, you can get 16-bit/44.1 kHz quality from a 1-bit signal at roughly 1.4 GHz (44.1 kHz * 2^15), i.e. 32768x oversampling. This will have a flat noise floor across the audio spectrum, below the level of the 16-bit data. Now we can improve on this by shaping the noise, which is the delta-sigma approach (feeding back the error to change the output bit): this shifts the noise up above the original 22.05 kHz Nyquist frequency, achieving the same result without needing such a massive oversampling factor. There are lots of strategies, with different computational costs and other tradeoffs, but the Philips Bitstream setup achieved the same quality with 64x oversampling, which is a much more manageable 1-bit/2.8 MHz.
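A first-order delta-sigma modulator really is just a few lines. This sketch (plain Python, my own naming, not any particular product's implementation) feeds the error between input and output back through an integrator, which is the mechanism that pushes the quantization noise up out of the audio band:

```python
def delta_sigma_1bit(samples):
    """First-order delta-sigma modulator: input in [-1, 1], output +/-1 bits.

    The integrator accumulates the error between the input sample and
    the previous output bit; the sign of the integrator picks the next
    bit. This feedback shapes the quantization noise toward high
    frequencies, where a lowpass filter can remove it.
    """
    out = []
    integrator = 0.0
    y = 0.0  # previous output bit as +/-1 (0.0 before the first sample)
    for s in samples:
        integrator += s - y
        y = 1.0 if integrator >= 0.0 else -1.0
        out.append(y)
    return out

# A constant input of 0.3 yields a +/-1 bitstream whose average is ~0.3:
bits = delta_sigma_1bit([0.3] * 10000)
```

Higher-order modulators add more integrator stages, buying a steeper noise-shaping curve at the cost of stability headaches.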
Now for higher bit depths and higher sample rates you need to increase the frequency further, but you get the idea.
The FPGAs operate in the 200 MHz or so range (the larger ones are slower, the smaller ones faster). So, flipping an output bit at 200 MHz gives us plenty of wiggle room to generate high-sample-rate, high-bit-depth audio using the same Philips delta-sigma approach that has been well explored. I imagine the Chord box is performing high-order noise shaping, and obviously its long FIR filtering too, but this is all easy stuff for an FPGA; actually, it's easy stuff for any computer.
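For a sense of what that FIR filtering looks like (this is a generic windowed-sinc lowpass in plain Python, not Chord's actual filter), here is the kind of operation an FPGA runs thousands of times per second: a long dot product that keeps the audio band and rejects everything above the cutoff, such as the shaped noise:

```python
import math

def sinc_lowpass(num_taps, cutoff):
    """Windowed-sinc lowpass FIR design. cutoff is normalized (0..0.5, cycles/sample)."""
    mid = (num_taps - 1) / 2
    taps = []
    for n in range(num_taps):
        x = n - mid
        # Ideal lowpass impulse response, sin(2*pi*fc*x)/(pi*x), with the x=0 limit 2*fc
        h = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / (num_taps - 1))  # Hamming window
        taps.append(h * w)
    total = sum(taps)
    return [t / total for t in taps]  # normalize for unity gain at DC

def fir_filter(taps, signal):
    """Direct-form FIR: each output sample is a dot product of taps with recent input."""
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, t in enumerate(taps):
            if i - k >= 0:
                acc += t * signal[i - k]
        out.append(acc)
    return out

taps = sinc_lowpass(101, 0.05)
# A tone well above the cutoff (0.25 cycles/sample) comes out strongly attenuated:
tone = [math.sin(2 * math.pi * 0.25 * n) for n in range(500)]
filtered = fir_filter(taps, tone)
```

An FPGA does the same arithmetic with hardware multiply-accumulate blocks running in parallel, which is why very long filters (many thousands of taps) are practical there.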
The overall output quality will then be determined by the clock source for the FPGA, so a high-quality, stable clock will be needed to avoid modulation-type errors (jitter) creeping into the output.