Does analogue ultimately beat digital?

@ABD, the point is both saw and square wave are pathological cases that cannot be sampled accurately at 2x the fundamental frequency.
 
What digitisation is supposed to do and what it actually delivers are unfortunately very different indeed.

I'd alter "actually" to more like "often". In early digital days the kit was problematic at times. Later on the problems have stemmed from idiotic/ignorant/wilful misuses by people who had a hand in what goes between the mics and the recording you get to play back. Technologies improve. But idiots don't.
 
Something to keep in mind that's inherent to digital sampling is that the higher the frequency, the less information is retained about the shape of the wave. Yes, you can sample a 22.05 kHz wave with a sample rate of 44.1 kHz, but the only information you're sampling is the amplitude of the peak and trough of the wave.

Wrong. Can I suggest you go to this webpage
https://jcgl.orpheusweb.co.uk/history/ups_and_downs.html
scroll down to the bottom of the page and download a PDF copy of the free textbook. Then read the sections relevant to the Sampling Theorem, which give a formal proof that correctly sampled LPCM fully records the original waveform.
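For anyone who'd rather see the theorem in action than read the proof, here's a toy numeric sketch (Python, with illustrative numbers of my own choosing, not from the textbook): sample a band-limited signal, then use Whittaker-Shannon sinc interpolation to recover the waveform *between* the sample instants. The finite sum makes the match only approximate near the ends of the record.

```python
import math

fs = 8.0    # toy sample rate
f = 3.0     # signal frequency, below Nyquist (fs/2 = 4)
N = 64      # number of stored samples

# "Record" a band-limited signal (here a plain sine) by sampling at fs
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(N)]

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(t):
    """Whittaker-Shannon interpolation: a sinc centred on every sample."""
    return sum(s * sinc(t * fs - n) for n, s in enumerate(samples))

# Evaluate BETWEEN two stored samples, mid-record: the original comes back
t = 30.5 / fs
error = abs(reconstruct(t) - math.sin(2 * math.pi * f * t))
```

With an infinite sum the reconstruction is exact for any band-limited input; the truncation here still matches to within a few percent mid-record.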

Any failures of appropriate kit will be due to misuse, flaws in the equipment, or failings on the part of the knob-twiddlers using it.
 
... In early digital days the kit was problematic at times. Later on the problems have stemmed from idiotic/ignorant/wilful misuses by people who had a hand in what goes between the mics and the recording you get to play back. Technologies improve. But idiots don't.
True, but as an early adopter of CD I have some seriously good DDD examples from the early 1980s bought in the mid-1980s, so it could be done well. I was listening to a 1982 Decca DDD disc last night and at 40 years of age it is worthy of being amongst today's best.

There was an AES paper I saw complaining that good practice mastering techniques like dither were not always applied in the early days. It's plausible that good practice took time to be widely established. But then maybe it was later on that mastering for loudness and other commercially driven poor practices became a thing.
 
@ABD, the point is both saw and square wave are pathological cases that cannot be sampled accurately at 2x the fundamental frequency.

It's the (analogue) anti-aliasing filter that removes those harmonics - the sampling then captures, with complete accuracy*, what emerges from this filter.
* Usual assumptions on steep-enough filter & exceeding the Nyquist sampling rate, and subject to quantisation noise, which is normally minuscule these days.
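To illustrate why those harmonics must be removed first: any component above Nyquist produces exactly the same samples as a mirror-image ("folded") frequency below Nyquist, so once sampled the two are indistinguishable. A small Python sketch (frequencies chosen by me purely for illustration):

```python
import math

fs = 44100.0
f_in = 30000.0         # a component ABOVE Nyquist (22050 Hz)
f_alias = fs - f_in    # 14100 Hz: where it folds down to

# With no anti-aliasing filter, the sampler cannot tell these two apart:
# sin(2*pi*f_in*n/fs) == -sin(2*pi*(fs - f_in)*n/fs) for every integer n
max_diff = max(
    abs(math.sin(2 * math.pi * f_in * n / fs)
        - (-math.sin(2 * math.pi * f_alias * n / fs)))
    for n in range(1000)
)
```

The anti-aliasing filter's job is to remove the 30 kHz component before this ambiguity can ever arise.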
 
…conversion from analogue into 1’s and 0’s has to take place and then from 1’s and 0’s back to analogue. Loses all that subtlety and naturalness in the process… what a waste of good music.

I had to read that twice to be sure. Irony or belief? I'm still confused tbh. :confused:
 
True, but as an early adopter of CD I have some seriously good DDD examples from the early 1980s bought in the mid-1980s, so it could be done well. I was listening to a 1982 Decca DDD disc last night and at 40 years of age it is worthy of being amongst today's best.

There was an AES paper I saw complaining that good practice mastering techniques like dither were not always applied in the early days. It's plausible that good practice took time to be widely established. But then maybe it was later on that mastering for loudness and other commercially driven poor practices became a thing.

Yes. In early days it was a matter both of the kit used and the people knob-twiddling. Some got it right, others didn't. Some early digital recordings that were for transfer initially to LP before CD was defined were also resampled very poorly. Later transfers did a better job in some cases. So all this is case-specific.

Dither OTOH was less of an issue when the input was analogue tape because the noise level on tape was often above -90dBFS, so supplied its own dither noise. :)
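A quick numerical sketch of the point about dither (pure Python, toy values of my own): without dither, a signal below one quantiser step simply vanishes; with TPDF dither the error becomes benign noise and the *average* output tracks the input, which is exactly the job tape hiss was doing for free.

```python
import random

def quantise(x, step=1.0):
    """An idealised quantiser: round to the nearest step."""
    return step * round(x / step)

def tpdf_dither(step=1.0):
    """Triangular-PDF dither: the sum of two uniform +/- step/2 sources."""
    return random.uniform(-step / 2, step / 2) + random.uniform(-step / 2, step / 2)

random.seed(0)
x = 0.3  # a low-level signal, well under one quantiser step

# Undithered: the quantiser outputs 0 every time -- the signal is lost
undithered = quantise(x)

# Dithered: each output is noisy, but the AVERAGE recovers the input
trials = 100_000
dithered_mean = sum(quantise(x + tpdf_dither()) for _ in range(trials)) / trials
```

Analogue tape noise above -90dBFS sits well over one 16-bit step, so it self-dithers just as described.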

These days even a cheap Scarlett 2i2 3rd gen USB ADC/DAC has a performance that the pre-CD digital people could only have dreamed of! By the time you get to Benchmark and above, studios can have superb kit... and then may mess up anyway, alas! Human errors masquerading as guru-dom.
 
Digital sampling REQUIRES band-limiting to work properly! You can't put the blame onto something and disassociate it, when that thing is absolutely intrinsic and essential to correct digital sampling!

Band-limiting used to be done with an analogue filter in front of the ADC.

These days typically the ADC samples at a very high rate and digitally downsamples to 44.1kHz, or hi-res digital audio is downsampled to 44.1kHz.

But it doesn't matter how or exactly when the band-limiting occurs, in the analogue or the digital domain: it's absolutely part and parcel of digital sampling, including at 44.1kHz.

True, a DAC does not itself cause or add ringing, because band-limiting is what can cause ringing and when it does, that's already to the left of the DAC.

The problem with 44.1kHz is the band-limiting filter's transition band (the slopey down bit in the response approaching 22kHz) is (a) necessarily very sharp and (b) at frequencies where energy is often found in original waveforms (88/96kHz is much better on BOTH points which is why it's measurably much better in some situations).

Having written all that, I must point out, no-one has provided hard evidence 44.1kHz ringing is audible.
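For concreteness, here is a minimal linear-phase lowpass built the textbook way (a windowed sinc) in Python; the numbers are mine for illustration, not taken from any particular ADC or DAC. Because the impulse response is symmetric, the "ringing" appears before the main tap as well as after it, and it sits at roughly the cutoff frequency:

```python
import math

fs = 44100
fc = 21000    # cutoff just below Nyquist (22050 Hz) -- illustrative
M = 101       # filter length in taps; a sharper transition needs more

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

mid = (M - 1) // 2
# Ideal-lowpass sinc shaped by a Hann window, centred for linear phase
taps = [(2 * fc / fs) * sinc(2 * fc / fs * (n - mid))
        * (0.5 - 0.5 * math.cos(2 * math.pi * n / (M - 1)))
        for n in range(M)]

# Linear phase => perfectly symmetric impulse response: the 'ringing'
# appears BEFORE the main peak (pre-ringing) as well as after it
symmetric = all(abs(taps[n] - taps[M - 1 - n]) < 1e-12 for n in range(M))
```

The 44.1kHz problem in a nutshell: pushing fc close to 22.05kHz while keeping the stopband deep forces many taps, hence a long sinc tail, whereas at 88/96kHz the transition band can be far gentler.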
 
Having written all that, I must point out, no-one has provided hard evidence 44.1kHz ringing is audible.

Indeed - one might expect hi-res audio (96kHz, 192kHz or higher) to be more obviously better if this were a problem. I must admit, I can't reliably tell the difference between red-book CD & hi-res. Maybe the effect of a brick-wall filter at 22kHz is not a serious problem because in practice there is so little energy above 10kHz or so. Maybe a bat (listening to recordings of other bats) would find it excruciating.
 
Having written all that, I must point out, no-one has provided hard evidence 44.1kHz ringing is audible.

FWIW I wrote this as part of the focus on MQA, but it is generally relevant I think

https://www.audiomisc.co.uk/MQA/OnImpulse/RingingInArrears.html

The point being that most of the mics used over the decades for recording music have a 'ring-and-die' LP resonance at a frequency *below* 22kHz. This will cause dispersive 'ringing' that 'smears' the waveform anyway. Chances are this swamps what a decent 44k ADC/DAC/conversion might do.

And the (audio) point of having a time-symmetric (sinc-like) process is to recover what went into the ADC. It's up to those making recordings to decide what they feed in, and theirs is the responsibility as a result.
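That 'ring-and-die' behaviour is just a damped resonance, which is easy to picture numerically. Here's a sketch of a second-order resonator's impulse response (f0, Q and the rates are illustrative values I've picked, not measurements of any real mic):

```python
import math

fs = 96000   # analysis rate, high enough to resolve an 18 kHz resonance
f0 = 18000   # resonance frequency BELOW 22 kHz, as with many mic capsules
q = 10       # quality factor: higher Q rings for longer before dying

# Impulse response of a simple 2nd-order resonator: a damped sinusoid
decay = math.pi * f0 / (q * fs)   # per-sample exponential decay rate
h = [math.exp(-decay * n) * math.sin(2 * math.pi * f0 * n / fs)
     for n in range(400)]

# It rings at ~f0 and has essentially died within a few hundred samples,
# smearing every transient that passes through the capsule
peak = max(abs(v) for v in h)
tail = max(abs(v) for v in h[300:])
```

Next to a ring like this baked into the recording, the symmetric ring of a decent 44.1k conversion chain is plausibly lost in the noise, which is the linked article's point.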
 
Just by way of a small, philosophical diversion: if crass mastering ruins recordings, why would we obsess over transparency and fidelity to the recording? If a euphonic, but coloured (ie, not entirely transparent) system renders badly-mastered discs as tolerable, isn't this preferable to not listening to them at all?
 
FWIW I wrote this as part of the focus on MQA, but it is generally relevant I think

https://www.audiomisc.co.uk/MQA/OnImpulse/RingingInArrears.html

The point being that most of the mics used over the decades for recording music have a 'ring-and-die' LP resonance at a frequency *below* 22kHz. This will cause dispersive 'ringing' that 'smears' the waveform anyway. Chances are this swamps what a decent 44k ADC/DAC/conversion might do.

And the (audio) point of having a time-symmetric (sinc-like) process is to recover what went into the ADC. It's up to those making recordings to decide what they feed in, and theirs is the responsibility as a result.
I get what you're saying about mics and most extant recordings Jim. I'm not refuting that.

Also true to say some modern mics are flat to 20kHz and probably beyond, and many sounds are produced digitally which never see a mic in the modern setting.

Band-limiting to 44.1kHz isn't an artistic choice, it's a necessity if you want to produce a 44.1kHz file. You could choose a linear or minimum phase filter, but band-limiting is not a choice.
 
Indeed - one might expect hi-res audio (96kHz, 192kHz or higher) to be more obviously better if this were a problem. I must admit, I can't reliably tell the difference between red-book CD & hi-res. Maybe the effect of a brick-wall filter at 22kHz is not a serious problem because in practice there is so little energy above 10kHz or so. Maybe a bat (listening to recordings of other bats) would find it excruciating.
Any ringing itself would be >20kHz so this seems a reasonable line of argument.

Counter-argument: the reason we use a reconstruction filter in the DAC is to filter out ultrasonic artifacts, which should also be inaudible anyway ... why? Because they can cause distortion products in amps and especially tweeters, some of which could leak down into the audible band. So it isn't a logical slam-dunk that ringing is inaudible, as long as we have amps and tweeters (or ears?) that distort. That said, it could still be inaudible in practice.

As for not much energy above 10kHz, I thought similar when looking at the spectrum of a track, but when I isolate single perceivable events e.g. cymbal crashes the spectrum looks quite different. We don't hear tracks all at once, but as a set of events.

Personally I perceive things differently to you, but I know of no hard evidence.
 
I’ve studied the fundamental theory of digital reproduction as taught, and when you look for the why of its inadequacy you look to sampling rates or filter problems; the initial assumption is that any problem would show up most obviously at high frequencies.

After much experience I’d suggest that the most obvious issues with digital are in the bass. Even at 44.1kHz there is no shortage of samples to cleanly represent a 40 Hz signal, so why then does it end up being such a mush?
 
I’ve studied the fundamental theory of digital reproduction as taught, and when you look for the why of its inadequacy you look to sampling rates or filter problems; the initial assumption is that any problem would show up most obviously at high frequencies.

After much experience I’d suggest that the most obvious issues with digital are in the bass. Even at 44.1kHz there is no shortage of samples to cleanly represent a 40 Hz signal, so why then does it end up being such a mush?
Power supplies and / or output stages.
 
Just by way of a small, philosophical diversion: if crass mastering ruins recordings, why would we obsess over transparency and fidelity to the recording? If a euphonic, but coloured (ie, not entirely transparent) system renders badly-mastered discs as tolerable, isn't this preferable to not listening to them at all?

I would say that you should pursue whatever sound or presentation you prefer and that makes your collection sound best to you.

If most of your record collection is poorly mastered then it might make sense to put together a system which masks some of the issues.
On the other hand different masterings have different problems, do you target all problems, only some? How do you go about it in an effective way?

And in my experience an accurate (to the signal) system does not make poorly mastered recordings intolerable, perhaps less exciting but not unlistenable. For some reason people associate accurate equipment or systems with "harsh" and "clinical" sound, but if that is the case then those systems have unpleasing or bad colourations – they are not accurate.
 
After much experience I’d suggest that the most obvious issues with digital are in the bass. Even at 44.1kHz there is no shortage of samples to cleanly represent a 40 Hz signal, so why then does it end up being such a mush?

Tell us, what phase-coherent, point-source, full-range speakers do you use to make such a judgement on Shannon/Nyquist, David?
 

