I've just read a load of twaddle about the Squeezebox. The article was full of irrelevant speculation and bad spelling.
"If you like detailed sound that is devoid of colouration and interpretation, then it's easy to be satisfied with a computer system because accuracy is pretty easy for a computer. Delivering bit-perfect data is its main task after all."
So, if I'm using Spotify on an XP PC, via a USB DAC, what do I need to do to 'maximise' sound quality?
You know, it occurs to me that bit-perfect is actually a bad thing. As we know, a lot of music is recorded with peaks reaching 0dB FS, and this causes significant distortion in over-sampling DACs. So any recording that hits 0dB FS on a regular basis will sound significantly better if the output level is reduced by about 3dB to 6dB in the digital domain before it reaches the DAC. No longer bit-perfect, but much less distortion!
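A rough sketch of that pre-DAC attenuation, using NumPy on some hypothetical 16-bit PCM samples (the values and the helper name are made up for illustration):

```python
import numpy as np

# Hypothetical 16-bit samples that repeatedly hit digital full scale.
samples = np.array([32767, 32767, -32768, -32768], dtype=np.int16)

def attenuate_db(x, db):
    """Scale 16-bit PCM down by `db` decibels before it reaches the DAC."""
    gain = 10.0 ** (-db / 20.0)
    # Work in float to avoid integer overflow, then round back to 16-bit.
    return np.round(x.astype(np.float64) * gain).astype(np.int16)

quieter = attenuate_db(samples, 3.0)
print(quieter)  # peaks now sit roughly 3 dB below full scale
```

The result is no longer bit-perfect, but the reconstruction filter in the DAC now has roughly 3 dB of headroom to play with.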
Most digital signal processing requires maths far beyond my ability to fully understand, but my basic understanding is that oversampling needs some headroom to work: just as in sample-rate conversion, the waveform changes shape and can produce even higher peaks.
I'd advise at least -3dB of normalization, because the analogue waveform can peak quite a bit higher than the highest digital sample.
The maths does require the internal calculations to use more than 16 or 24 bits, which is why many DSPs operate at 32 or 48 bits, or even in floating point, internally.
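A toy illustration (NumPy, with made-up values) of why the intermediate maths needs a wider word than the audio data itself:

```python
import numpy as np

x = np.array([30000, 30000], dtype=np.int16)

# Doing the maths at the signal's own 16-bit width silently wraps around:
print(x + x)                    # overflows: 60000 wraps to -5536

# Widening to 32 bits for the intermediate result keeps it intact:
print(x.astype(np.int32) + x)   # 60000, 60000
```

The same logic motivates floating point internally: the intermediate values can exceed "full scale" without wrapping or clipping, and only the final output stage has to fit back into 16 or 24 bits.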
S.
But doesn't any 16-bit signal still get converted to a higher bit depth with 0dB FS remaining at 0dB FS? There is thus no extra headroom in the digital domain, only higher resolution.
The problem, I think, stems from the fact that the highest-level digital sample does not always represent the highest point of the analogue waveform. When the signal is then processed, for example in up-sampling, new digital samples may need to be created that are higher; but since the digital data was already normalized to 0dB FS, this can't happen.
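That effect can be demonstrated numerically. Below is a sketch (NumPy, FFT-based upsampling) of the classic worst case: a sine at one quarter of the sample rate, phased so that every sample lands exactly at digital full scale, even though the true peak of the waveform between the samples is √2, about +3dB over:

```python
import numpy as np

# 16 periods of the pattern +1, +1, -1, -1: these are the samples of a
# sine at fs/4 with 45-degree phase; every sample sits at full scale.
x = np.tile([1.0, 1.0, -1.0, -1.0], 16)
N = len(x)
factor = 8

# FFT-based upsampling: zero-pad the spectrum, inverse transform,
# and rescale for the longer inverse FFT length.
X = np.fft.rfft(x)
Xp = np.zeros(N * factor // 2 + 1, dtype=complex)
Xp[:len(X)] = X
y = np.fft.irfft(Xp, n=N * factor) * factor

print(np.max(np.abs(x)))  # 1.0 -- the digital peak, "0 dBFS"
print(np.max(np.abs(y)))  # ~1.414 -- the reconstructed waveform peak
```

Every original sample is at 0dBFS, yet the upsampler has to produce values around 1.414 to trace the actual waveform. A fixed-point stage with no headroom would clip them.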
I can't see that working in a real-time application like a DAC, as it would cause a level shift at the output after conversion, unless the analog circuit compensated for the digital scaling.
This video explains the problem better than I can - http://www.youtube.com/watch?v=BhA7Vy3OPbc
It doesn't need to because the dithering back to 16 or 24 bit before the analogue conversion restores levels.
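For what it's worth, that final requantization back to 16 bit is commonly done with TPDF dither. A minimal sketch (NumPy; the signal and function name are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def to_16bit_tpdf(x):
    """Requantize a float signal in -1..1 to int16 with TPDF dither."""
    scaled = x * 32767.0
    # TPDF dither: sum of two uniform variables, +/- 1 LSB peak-to-peak.
    dither = (rng.uniform(-0.5, 0.5, np.shape(x))
              + rng.uniform(-0.5, 0.5, np.shape(x)))
    return np.clip(np.round(scaled + dither), -32768, 32767).astype(np.int16)

# A -6 dBFS 1 kHz tone at 44.1 kHz, reduced back to 16-bit PCM.
tone = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(4410) / 44100)
pcm = to_16bit_tpdf(tone)
```

Note the `np.clip`: any internal value still above full scale at this point has to be clipped, which is exactly the distortion being argued about above.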
S.
The distortion from inter-sample overs is generally a non-issue.
If the recording was heavily limited or even shredded to begin with, then the additional distortion from the inter-sample over will not be noticed in the forest of that limiting.
If the recording only occasionally reaches 0dBFS in an illegal way, then the amount of inter-sample distortion will be tiny (the delta amplitude), instantaneous (a few samples at most), occasional (only at the recording's very highest peaks), and masked (by those same peaks).