The truth about bit depth in digital

This is just a theoretical mathematical situation. Not a reflection of reality. The reality is we do know exactly what the time is, it's determined by the sampling rate, so barring the potential error in the sampling rate clocking, quantisation error is purely an amplitude error.
I've no idea how it is perceived by the brain though, I suspect that is largely unknown ;)
 
This is just a theoretical mathematical situation. Not a reflection of reality. The reality is we do know exactly what the time is, it's determined by the sampling rate, so barring the potential error in the sampling rate clocking, quantisation error is purely an amplitude error.

A typical DAC master clock is good for better than 100ps jitter (note this is DAC sampling clock jitter, which is impossible for any reviewer to measure directly, as opposed to the jitter of the DAC at its output terminal, which is not the same thing), which even at 192kHz sampling is only a 0.00192% error in the time domain. How much could a 20kHz sine wave vary in amplitude with such a time variation? (That's a question for the hall, I don't know.) Or more importantly, is that variation enough to alter the quantisation value (i.e. more than 1/2 a bit at 16-bit resolution - or any resolution even)?

Not done it, but should be possible to work it out by regarding the 'jitter' as FM modulation of the 20kHz 'carrier'.
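Short of the full FM treatment, a back-of-envelope version is possible by treating the worst case as the sine's maximum slew rate (a rough sketch; the 100ps and 20kHz figures are taken from the post above, and the full-scale amplitude is an assumption):

```python
import math

f = 20_000        # 20 kHz sine, Hz
jitter = 100e-12  # 100 ps timing error (figure from the post)

# Max slew of a sine of amplitude A is 2*pi*f*A, so a timing error dt
# produces at most 2*pi*f*dt * A of amplitude error.
amp_error = 2 * math.pi * f * jitter   # as a fraction of peak amplitude A

# Half an LSB at 16 bits, for a signal spanning the full +/-A range
# (2A peak-to-peak quantised into 2**16 steps):
half_lsb = (2 / 2**16) / 2             # = 1/65536, in units of A

print(f"amplitude error: {amp_error:.3e} of peak")   # ~1.257e-05
print(f"half LSB (16 bit): {half_lsb:.3e} of peak")  # ~1.526e-05
print("exceeds half a bit:", amp_error > half_lsb)   # False - just under
```

So on this worst-case estimate, 100ps of jitter stays just below half an LSB at 16 bits for a full-scale 20kHz tone.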
 
A few observations from the start of the video.

Digital noise is often referred to as a result of error. The inverted summation is just that - a representation of the error between the two different recordings. The fact that that noise is audible shows a significant level of error in the lower quantised recording.

The distribution of error may well be flat, and hence sounds like noise rather than music. The reason it is likely to be flat is because some samples of the lower quantised recordings will be closer to the 24 bit recording, and some samples will be further away from the 24 bit recording - likely to be distributed as a randomised spread.

The same goes for recordings with differences in sample frequency. In that case the error is frequency modulated, whereas with quantisation the error is amplitude modulated.
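The inverted-summation idea is easy to sketch numerically. A rough illustration (8-bit versus 24-bit is chosen only to exaggerate the residual, and the 997 Hz test tone is arbitrary): the difference signal comes out roughly zero-mean, with an RMS near the classic step/sqrt(12) prediction for uniform quantisation error, i.e. it behaves like flat noise rather than music.

```python
import math

def quantise(x, bits):
    """Round a sample in [-1, 1] to a uniform grid with 2**bits levels."""
    step = 2.0 / 2**bits
    return round(x / step) * step

# One second of a 997 Hz sine at 44.1 kHz; the non-integer cycle ratio
# spreads samples across many quantisation steps.
signal = [math.sin(2 * math.pi * 997 * n / 44100) for n in range(44100)]

# 'Inverted summation': subtract the low-bit version from the high-bit one.
residual = [quantise(s, 24) - quantise(s, 8) for s in signal]

# The residual is roughly zero-mean broadband noise, with RMS close to the
# step/sqrt(12) prediction for uniformly distributed quantisation error.
rms = math.sqrt(sum(e * e for e in residual) / len(residual))
step = 2.0 / 2**8
print(f"residual RMS: {rms:.6f}  (step/sqrt(12) = {step / math.sqrt(12):.6f})")
```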

Andrew
 
As with high sample rates, I think high bit depths are mainly of interest at the recording side of the music reproduction process. The lower noise floor of 24-bit versus 16-bit provides greater headroom, so that achieving a recording that is well above the noise floor of all the devices in the signal chain (one or more of: mic, preamp, EQ, compressor, patchbay, mixer, interface, etc.) without clipping the digital signal is almost fool-proof. Recently, more 32-bit recording devices are being released, not because they sound better but because for all intents and purposes you don't have to worry about recording level since you can safely apply whatever gain you need in software.
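To put rough numbers on the headroom argument (using the standard SNR of roughly 6.02·N + 1.76 dB for an undithered full-scale sine - a theoretical ceiling, not what real converters achieve):

```python
# Theoretical quantisation SNR for a full-scale sine, no dither or shaping:
# each extra bit buys about 6 dB of dynamic range.
def snr_db(bits):
    return 6.02 * bits + 1.76

for bits in (16, 24, 32):
    print(f"{bits}-bit: ~{snr_db(bits):.0f} dB")
# 16-bit ~98 dB, 24-bit ~146 dB: roughly 48 dB of extra recording margin.
```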
 
As with high sample rates, I think high bit depths are mainly of interest at the recording side of the music reproduction process. The lower noise floor of 24-bit versus 16-bit provides greater headroom, so that achieving a recording that is well above the noise floor of all the devices in the signal chain (one or more of: mic, preamp, EQ, compressor, patchbay, mixer, interface, etc.) without clipping the digital signal is almost fool-proof. Recently, more 32-bit recording devices are being released, not because they sound better but because for all intents and purposes you don't have to worry about recording level since you can safely apply whatever gain you need in software.

Yet most DACs nowadays perform internal DSP, and 44.1kHz puts the brickwall frequency too close to the audio band (which is why differences between filters are audible).

Many in the industry have been shouting for years that 18-bit and a sample rate closer to 88.2/96kHz would be audibly transparent.
But it's too late now when Redbook is the standard.
Most of my music comes from ripped CDs, I just need really good Redbook D/A conversion.
 
I’ll take sampling rate over bit-depth every time
I think I prefer a little of both.

The way I see it is that early digital audio was probably sampled at 44.1 kHz, converted to 16 bits, and then mixed and mastered in the very same PCM format. Any better would have been somewhere between difficult and impossible at the time. This, IMHO, has two problems:

1. A good analogue anti-alias filter, to pass up to 20 kHz undamaged and reject 22.05 kHz (or even a more relaxed 24.1 kHz) is no simple device. I have no evidence but it is possible that simple filters could do some audible harm.

2. Doing the mixing and mastering with just 16 bits needs understanding of digital audio and care to avoid audible low-level artefacts or noise.
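Point 1 can be made concrete with a rough Butterworth order estimate (the 0.1 dB passband ripple and 96 dB stopband targets below are illustrative assumptions, picked to match 16-bit resolution):

```python
import math

# How steep must a purely analogue anti-alias filter be for plain 44.1 kHz
# sampling?  Standard Butterworth order formula: pass 20 kHz with <=0.1 dB
# droop, reject 22.05 kHz (Nyquist) by ~96 dB.
fp, fs = 20_000, 22_050   # passband edge, stopband edge (Hz)
Ap, As = 0.1, 96.0        # passband ripple / stopband attenuation (dB, assumed)

n = math.log10((10**(As / 10) - 1) / (10**(Ap / 10) - 1)) / (2 * math.log10(fs / fp))
print(f"required Butterworth order: {math.ceil(n)}")  # ~133 - clearly impractical
```

An order in the hundreds is absurd for an analogue filter, which is why early players compromised (and why oversampling with digital decimation filters took over).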

AIUI (from reading a recent studio good practice standard) modern studio sample rate will usually be at least 96 kHz and samples will be taken at 24 bits even if the final delivery to the customer is down-converted to CD format.

Now, the analogue anti-alias filter is much simpler, and there is much more margin for error in mixing and mastering. Furthermore, digital audio workstations probably now enshrine a lot of esoteric digital audio good practice and probably make it more difficult for a less-than-expert-in-digital-technology operator to make mistakes, leaving him/her free to focus on the music and sound.

IME modern classical digital audio releases, even down-converted to CD standard, are uniformly excellent and seem to have much more detail than many (but not all) early CDs. I think this is at least consistent with the above view even though not proof. Maybe someone with audio production experience might agree or disagree.
 
1. A good analogue anti-alias filter, to pass up to 20 kHz undamaged and reject 22.05 kHz (or even a more relaxed 24.1 kHz) is no simple device.

We've learned that sometimes what looks good in theory (Nyquist–Shannon) cannot be put to practice.
 
IME modern classical digital audio releases even down-converted to CD standard are uniformly excellent and seem to have much more detail than many (but not all) early CDs. I think this is at least consistent with the above view even though not proof.

This may also be partly due to the use of multi- and closer-miking techniques.
 
I think I prefer a little of both.



AIUI (from reading a recent studio good practice standard) modern studio sample rate will usually be at least 96 kHz and samples will be taken at 24 bits even if the final delivery to the customer is down-converted to CD format.

If the end-aim is 44.1k/16 then I'd tend to prefer 88.2k/24 when recording. Easier downsample process ratio than the relatively 'clumsy' one from 96 to 44.1.
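The ratio difference is easy to see with exact fractions - 88.2 kHz decimates to 44.1 kHz by a simple factor of 2, while 96 kHz needs a rational L/M resample (a quick sketch using Python's stdlib):

```python
from fractions import Fraction

# Rational resampling ratio L/M (upsample by L, downsample by M) needed to
# reach 44.1 kHz from common studio rates:
for src in (88_200, 96_000, 192_000):
    r = Fraction(44_100, src)
    print(f"{src} -> 44100: L/M = {r.numerator}/{r.denominator}")
# 88200 gives 1/2 (trivial decimation); 96000 gives the 'clumsy' 147/320.
```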

That said, I suspect the reality is that - whatever it says on the front panel - the actual ADC hardware will run at low bit depth and a very high rate (i.e. delta-sigma conversion) to synthesise the conversions with better linearity, etc, than a dedicated high-depth setup at the target rates.
 
As with high sample rates, I think high bit depths are mainly of interest at the recording side of the music reproduction process. The lower noise floor of 24-bit versus 16-bit provides greater headroom, so that achieving a recording that is well above the noise floor of all the devices in the signal chain (one or more of: mic, preamp, EQ, compressor, patchbay, mixer, interface, etc.) without clipping the digital signal is almost fool-proof. Recently, more 32-bit recording devices are being released, not because they sound better but because for all intents and purposes you don't have to worry about recording level since you can safely apply whatever gain you need in software.
The real bit depth is defined by the signal to noise ratio of course.
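Inverting the usual 6.02·N + 1.76 dB relation gives an 'effective number of bits' for a measured SNR (the 120 dB figure below is just an example, not a measurement of any particular device):

```python
# Effective number of bits (ENOB) implied by a measured SNR, inverting the
# full-scale-sine relation SNR = 6.02*N + 1.76 dB:
def enob(snr_db):
    return (snr_db - 1.76) / 6.02

# A converter sold as 24-bit but measuring, say, 120 dB SNR is really
# delivering about 19.6 bits:
print(f"{enob(120):.1f} effective bits")  # 19.6
```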
 
We've learned that sometimes what looks good in theory (Nyquist–Shannon) cannot be put to practice.

Depends on what you mean by 'cannot' and the S-N maths. *Any* 'measurement' has its limits. In this case set by the sample rate, bit depth, and recording duration. That also applies to the transmission of sound in air as well. However the reality is that a good digital ADC - DAC arrangement can have far smaller effects than the ones caused by, say, studio mics, or where the performer stands wrt the mic, etc, etc.


Nor can we make amps that have absolutely zero distortion and infinite bandwidth. etc. Nor can our ears detect some things, etc. The reality is that we can make systems that are good enough to enjoy the results.
 
This may also be partly due to the use of multi- and closer-miking techniques.

As shown by analysis, early CD recordings (and some newer ones) all too often suffered from:

1) Early digital recording kit was flawed

2) Processing in digital form was flawed when 'adjustments' were made before getting to the final release version.

3) Some early recordings were made with less than 16 bits per sample, and at rates that were 'odd' in modern terms, i.e. NOT 44.1k or 48k. And early sample rate conversions were crudely done. e.g. I think one system (Decca?) used 50k with less than 16 bits/sample.

Oh, and some CD recordings were made with pre-emphasis, which not all players could detect. Also some made without it, but flagged as if it was. (BBC Music Mag cover discs for a while were like that!)

The problem often isn't the kit, though, it is sometimes the idiots who use it to make the recording and then the CDs. Who were (and sometimes still are) utterly clueless what the das knobs and blinken-lighten do.
 
As shown by analysis, early CD recordings (and some newer ones) all too often suffered from:

1) Early digital recording kit was flawed

2) Processing in digital form was flawed when 'adjustments' were made before getting to the final release version.

3) Some early recordings were made with less than 16 bits per sample, and at rates that were 'odd' in modern terms, i.e. NOT 44.1k or 48k. And early sample rate conversions were crudely done. e.g. I think one system (Decca?) used 50k with less than 16 bits/sample.

Oh, and some CD recordings were made with pre-emphasis, which not all players could detect. Also some made without it, but flagged as if it was. (BBC Music Mag cover discs for a while were like that!)

The problem often isn't the kit, though, it is sometimes the idiots who use it to make the recording and then the CDs. Who were (and sometimes still are) utterly clueless what the das knobs and blinken-lighten do.

Many of Denon's early digital recordings sound fabulous, for example Starker's recital from 1979.
 
Depends on what you mean by 'cannot' and the S-N maths. *Any* 'measurement' has its limits. In this case set by the sample rate, bit depth, and recording duration. That also applies to the transmission of sound in air as well. However the reality is that a good digital ADC - DAC arrangement can have far smaller effects than the ones caused by, say, studio mics, or where the performer stands wrt the mic, etc, etc.


Nor can we make amps that have absolutely zero distortion and infinite bandwidth. etc. Nor can our ears detect some things, etc. The reality is that we can make systems that are good enough to enjoy the results.

I mean that 22.05kHz is too close to the audio band, which makes it impossible to have an effective Redbook filter.
You either get roll-off below 20kHz or aliasing.
 
I mean that 22.05kHz is too close to the audio band, which makes it impossible to have an effective Redbook filter.
You either get roll-off below 20kHz or aliasing.
Modern digital filters do it (in modern ADCs and DACs). My beef is that such steep filters ring more, and in this case the ringing sits near the audible band too - even supposing a perfect implementation.
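One rough way to quantify the ringing difference is filter length: by the common fred harris tap-count approximation, the narrow Redbook transition band needs a much longer (and hence longer-ringing) FIR than the same passband at 96 kHz. The attenuation and band-edge targets below are illustrative assumptions:

```python
# Rule-of-thumb FIR length (fred harris approximation):
# N ~= attenuation_dB / (22 * transition_width / sample_rate).
# A longer filter has a longer impulse response, i.e. it rings for longer.
def fir_taps(atten_db, f_pass, f_stop, sample_rate):
    return round(atten_db / (22 * (f_stop - f_pass) / sample_rate))

print(fir_taps(96, 20_000, 22_050, 44_100))  # ~94 taps: long and ringy
print(fir_taps(96, 20_000, 48_000, 96_000))  # ~15 taps: short and gentle
```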
 