

DACs -- Bit perfect + filters

Ok. Others who understand this better than I do have already answered better than I can, but I will have a go myself, because I think I have an idea where this is confusing, and also an idea of how to loop this back to your original question. Anyway, those who are better qualified than I am will no doubt cringe at the imprecision and inaccuracy of what follows, but it is my best shot.


Ok, to answer your question: not exactly. The left hand side of the equation simply means "any continuous-time function", e.g. here an "analogue" continuous voltage changing over time (like on an oscilloscope). The right hand side means "... will be equal to the sum (that's the big E, actually a capital sigma) of each of the time-spaced values of that function (i.e. the sample values, or the voltages measured at each sampling instant), each multiplied by its own sinc function."

So the answer to your question is that each sample is inserted into the sinc function on the right hand side one at a time in order to recreate the left hand side (i.e. x(t)) from a discrete set of sample values of x(t) (those samples are numbered 0 to K in the equation). And don't forget that having fed each of the K samples into the function one at a time, the results are then added up (that's what the sigma means) to get back to x(t).
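For reference, the equation being discussed is presumably the standard Whittaker-Shannon interpolation formula (my transcription, not copied from the book), which with samples numbered 0 to K and sample period T looks something like:

```latex
x(t) \;\approx\; \sum_{k=0}^{K} x(kT)\,\operatorname{sinc}\!\left(\frac{t-kT}{T}\right),
\qquad \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}
```

Each term of the sum is one sample value x(kT) scaling a sinc centred on its own sample instant kT; with an infinite number of samples of a properly band-limited signal the approximation becomes an exact equality.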

So, back to the equation: each of those samples (i.e. values of x(t) at the sampling instant) will now generate a scaled sinc function, and each of those sinc functions is a continuous function in time, each one scaled and time shifted (the central lobe is at the time of the sample in question). As @John Phillips explained, the beauty is that each such sinc function has its maximum at its own sample time but is zero at all the other sample instants. So when you add them up they don't "interfere" at the sampling instants. But of course what we are really interested in is the way it enables us to calculate the values in between the sample times (i.e. between the dots), where each of the sinc functions from each of the sample values will contribute.

However this is a little bit confusing, because whilst it is dealing with sampling (like digital music, which is just a set of sample values representing a voltage/time relationship), it assumes that the sinc function is continuous, i.e. "analogue", and, as @Jim Audiomisc points out, that both the sinc function and the set of samples start at the beginning of the universe and end at the end of the universe.

In practice we can only take into account samples from a limited span of time (like the mere 65 samples shown in the picture), and we are not going to calculate the whole sinc function for each sample either.

Last thing: the sinc function is in fact just another way of describing a perfect "brick wall filter" (it is the time-domain impulse response of such a filter), one which lets through all the frequencies up to one point and completely cuts out all the frequencies above that point.*

No such analogue filter exists, so instead we do the filtering mathematically, i.e. by approximately calculating the sinc function. And we don't do it by calculating the whole sinc function for each sample: we use a digital filter which calculates the values at other sampling instants (i.e. we fill in more dots in figure 7.3). When we talk about filter taps we are actually referring to a digital filter, one which does not calculate the whole sinc function but only its value at further sample times (i.e. more dots) in between the times of the original samples we had. And it does so by calculating those values not from all the (infinite number of) sample values but from a number of values equal to the number of filter taps.

So, to recap: at that stage we are not drawing the line in figure 7.3b, we are just filling in more dots. Equally, we don't have to calculate the whole sinc function for each sample value, because we don't need its influence on every point in time, only at the new sample instants (dots) which we are calculating.

Let's assume for now that the filter is a sinc function but with 64 taps (i.e. a time-windowed sinc function). Then what the filter does is calculate the sample values at points (in time) between each existing sample: to do so for each new sample, you add up the value at that time of the sinc function of each of the 32 samples before that time and of each of the 32 samples after it.
So each new sample value generated by the filter (a new sample value between the existing samples) is the sum of 64 numbers (some positive, some negative) generated from the preceding and succeeding 32 samples (there's a rough sketch of this after the two notes below).
2 things to note - 1) we now have more sample values than before (like having 129 dots or so in fig 7.3b) - we have to, otherwise the digital interpolation filter can’t work. We have upsampled/oversampled.
2) we are still in the sampled time domain, not the continuous time domain, and at some point we need to apply an analogue filter to turn this into continuous time, i.e. interpolate all the continuous values between the samples. The more we upsample/oversample first, the easier this is, because the dots get closer together.
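Here is a rough sketch, in Python, of the sort of calculation being described: a 64-tap, time-windowed sinc used to fill in one new dot halfway between each pair of existing dots. The tap count, window choice and function names here are mine, purely for illustration, not any particular DAC's filter:

```python
import numpy as np

def midpoint_values(x, taps=64):
    """Fill in one new 'dot' halfway between each pair of existing samples by
    summing 64 windowed-sinc-weighted sample values: the 32 samples before the
    new point and the 32 after it. Illustrative sketch only."""
    half = taps // 2                                  # 32 samples each side
    # Distances (in sample periods) from the new midpoint to the 64 surrounding samples
    offsets = np.arange(-half, half) + 0.5            # -31.5 ... +31.5
    coeffs = np.sinc(offsets) * np.hamming(taps)      # time-windowed sinc taps
    coeffs /= coeffs.sum()                            # keep unity gain at DC
    mids = []
    for n in range(half, len(x) - half + 1):
        surrounding = x[n - half:n + half]            # 64 samples around the midpoint
        mids.append(np.dot(surrounding, coeffs))      # sum of 64 weighted values
    return np.array(mids)
```

Interleaving these midpoints with the original samples gives the "129 dots or so" picture: 2x oversampled data, still discrete, with the final smoothing left to a (now much easier) analogue filter.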

*TBH this explains it all much better than I have: https://lavryengineering.com/pdfs/lavry-sampling-theory.pdf
OMG adamdea, I was thinking about this last night in bed (as you do when you can't sleep) and your explanation just elaborates a bit on the conclusion I came to, and explains the summing aspect of the formula. Thank you very much. I can now visualise how it works (not very good at abstract concepts, I'm more a visualising guy).
Will have a read of the lavryengineering paper later.
There are quite a few obviously very able contributors to this thread so it's a bit daunting for an acolyte to expound a view which could likely be quite wrong.
 
I cannot see anything wrong with trying to understand things. And by that I mean really trying to understand things, which is a very different thing from scouring the world for bullshit to confirm one's prejudices, with a view to regurgitating the same. I wish there were more of it.

What completely baffles me is how few people seem to care. It is frequently painfully apparent that most people aren't really interested at all. That is of course fine, no reason why they should be. Where things get confusing for me at least is that people often appear to be interested in "scientific" or technical explanations, but seem not to care to consider whether they are true.

If you really do care, then internet forums can be marvellous. You can meet people who really know what they are talking about and can help (I don't count myself as one of those), or at least people who share your interest in learning (I do count myself as one of those). I consider it a complete privilege to have had the opportunity to learn stuff from some people on here (who I hope know who they are). Unfortunately these people will likely have been wearied by the almost constant interactions they have with people who have no idea what they are talking about but who want to argue. It therefore sometimes takes a while to persuade them that it is worth taking the trouble to give considered replies. You seem to have done an excellent job, though, and had a constructive exchange. I am not being sarcastic; that is quite an achievement.
 
"Now the right hand side means...... will be equal to the sum (that's the big E -actually a capital sigma) of each of the time-spaced values of that function ...."

The calculus is all coming back to me now, or as Viv Stanshall said in Rawlinson End: "Those terrible memories came flooding back"!

I've managed to navigate my way through life for 70 odd years without needing to use it apart from a bit of integration involving my love life. o_O

This is a great thread, from butt jokes and The Bonzo Dog Band to some really elegant but unintuitive maths.
 
Will have a read of the lavryengineering paper later.
I think that what you are looking for is there (i.e. the visual representation is on pages 4-7). Understanding the maths is not trivial if you haven't done it since A level (like me). I find it really helps to let go of the idea that this is intuitive; it isn't. It's maths, and the only way you would instantly grasp it would be if you were really good at maths. And under 18.
There is so much stuff in that paper which is useful: the time domain/frequency domain interrelationship (p.11), the relationship between a sinc function (time domain) and a brickwall filter (frequency domain) (p.13). However it does depend IMHO on understanding frequency, and the concept that any function against time can be expressed in terms of a series or continuous spectrum of frequencies (essentially Fourier analysis). His way of trying to explain this from p.13 on is quite visual though.

btw- one very telling point is that this wonderful paper was written specifically to debunk then-prevalent audiophile nonsense.
 
My understanding is that bit perfect can only be achieved by running the signal through a NOS non-SDM DAC.

In this universe 'perfect' isn't on offer. 8-]

What the Nyquist summation using a sinc gives you is the method to get as close as you wish to 'perfect' by whatever means you fancy. All real world processes will have their limits.

You can use a DAC *that does NO filtering at all* *provided* you then run the result through an analogue arrangement whose time-response to an impulse (e.g. only one non-zero sample in a sequence of zero samples) *is* a sinc shape.

Alas, analogue filters like that are a tad short in supply in reality.

However *digital* upsampling filters can give you *very* close approximations to a sinc filter response. With modern tech that's now fairly easy. How 'long' the filter must be when it includes things like feedback is moot. Robin Watts might want to go for "every sample in the Mahler symphony". 8-] But I suspect modern, well-made DACs do at least a good enough job to beat some of the dodgy things some recording engineers/gurus create. ;-)

Short version: if you want accuracy worry more about the people making the recordings, their mics, and your loudspeakers and room. 8-]
 
Any digital filtering introduces mathematical errors. Computer-based upsamplers like HQ Player and SoX VHQ sometimes do a better job than a DAC's onboard filter. The debatable part is whether it's audible.

NOS DACs can't have errors from numeric processing but it's a different kind of error.
 
Audibility will always be a thorny issue, as there is no standard human being with standard hearing, other than perhaps one derived from the interquartile range of people tested - and that assumes anyone is prepared to stump up the costs of testing a wide range of ages and comparing the results with individuals' audiograms. All one can do is be cynical about some claims, but that can only be based on our own experience as a standard.

How can someone with “standard” hearing assert what someone with, say, reverse-slope hearing loss can detect?
 
NOS DACs can't have errors from numeric processing but it's a different kind of error.
This is true, in a sense, but the chances of a reader who does not already know the answer identifying the extent to which it is true, and not inferring something incorrect, are pretty slim IMHO.
(Incidentally, and not wishing to be pedantic, you may recall that the MDAC has a number of digital filters which aim to emulate a non-filtered sample-and-hold dac. Try doing that the other way round.)
 
Yes the NOS errors are gross and I wouldn't use a NOS DAC for that reason.

OTOH I'm one of "those people" who use computer upsampling, and hear differences in DACs ...
 
My understanding is that bit perfect can only be achieved by running the signal through a NOS non-SDM DAC.
You can't get to perfect reconstruction. Only good enough and that's dependent on your ears and your objectives. That can be achieved in more ways than just one.

The first reason is that perfect reconstruction generally cannot be achieved: the mathematics involved requires adding up a sinc function per sample, and the sinc function has values from time = -infinity to time = +infinity. Perfection is not achievable, only good enough.

Then if you restrict yourself to the audio file you are reconstructing, and ignore imperfections before and after it, that's better. It may be possible that something like PGGB can do the necessary numbers of taps, but since the sinc function produces irrational numbers with an infinite number of digits you still have to settle for good enough.

If searching for the closest to perfect is your hobby then fair enough but maybe others won't have the same standards as you.

Years ago when I learned the basics of digital audio for a new job I did some simulations of reconstruction.

In the frequency domain you can define perfect. Taking a finite number of filter taps and a finite degree of arithmetic precision (bits calculated) you can subtract the high-precision Fourier Transform of the finite filter from the perfect filter and see exactly how imperfect the real one is. I found that you really didn't need anywhere near Chord-level numbers of taps to get a frequency-domain difference that I think many people would describe as inaudible or too small to matter. YMMV of course.
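A rough sketch of that kind of check in Python, assuming (my choices, not the poster's original simulation) a windowed-sinc half-band filter as the "real" filter and an ideal brick wall as the reference:

```python
import numpy as np

def filter_vs_brickwall(taps=255, nfft=1 << 16):
    """Design a finite windowed-sinc half-band low-pass (the kind used for 2x
    oversampling), take a high-resolution FFT of it, and compare it with the
    ideal brick-wall response to see how imperfect the finite filter is.
    Illustrative sketch; the tap count and window are arbitrary choices."""
    n = np.arange(taps) - (taps - 1) / 2
    h = 0.5 * np.sinc(0.5 * n) * np.blackman(taps)   # truncated, windowed sinc
    h /= h.sum()                                     # unity gain at DC
    H = np.abs(np.fft.rfft(h, nfft))
    f = np.fft.rfftfreq(nfft)                        # 0 .. 0.5 of the (oversampled) rate
    passband = H[f <= 0.22]                          # leave a transition band around 0.25
    stopband = H[f >= 0.28]
    ripple_db = np.ptp(20 * np.log10(passband))      # peak-to-peak passband error
    leak_db = 20 * np.log10(stopband.max())          # worst-case stopband leakage
    return ripple_db, leak_db

print(filter_vs_brickwall())   # more taps -> smaller numbers, but never exactly zero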

And the interesting thing is that in the amplitude domain the precision of the arithmetic adds small amounts of non-linear distortion for every finite calculation. They add up. More calculations for more taps is not always a good thing in real-life implementations unless you also increase the arithmetic precision.

Actually the technology is so much better today than what I had to play with, so having more taps and greater bit depth is not the problem it was then. So it's not as big a deal to go large if you want the best possible. But it's still not possible to get to perfect. Only good enough for your particular standards.
 
Yes the NOS errors are gross and I wouldn't use a NOS DAC for that reason.

OTOH I'm one of "those people" who use computer upsampling, and hear differences in DACs ...
Not meaning to have a pop. I have used computer upsampling at times and I may do again in the future. It is fair to say that it offers the opportunity to produce filters with a much higher degree of precision than most hardware (i.e. internal DAC solutions). [Also the opportunity to bodge it up.] Different issues are involved in:
a) perfectionism through an attempt to achieve a technically justifiable aim to a degree of precision which is extreme
b) doing something in a way which does not make any technical sense

It is, I think, very difficult to make any technical case for NOS, or to make any accurate, non-misleading statement beyond

"Some people think NOS dacs sound better."

It is also difficult to discuss NOS DACs without being very careful, because the term may mean different things to different people. I mean a DAC which does not oversample/have a digital filter and which does not apply any significant filtering beyond the effect of the sample-and-hold reconstruction (and which also, therefore, does not use delta-sigma modulators etc.).
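For context, a bare sample-and-hold (zero-order-hold) output has a sinc-shaped frequency response of its own, plus images of the spectrum around multiples of the sample rate, and nothing removes either if no other filtering is applied. A quick back-of-envelope sketch of the in-band droop that implies (my numbers, purely illustrative):

```python
import numpy as np

# Magnitude response of a zero-order hold (sample-and-hold) at sample rate fs:
# |H(f)| = |sinc(f / fs)|, using the normalised sinc.
fs = 44_100.0                     # Red Book sample rate
f = 20_000.0                      # top of the audio band
droop_db = 20 * np.log10(abs(np.sinc(f / fs)))
print(f"{droop_db:.2f} dB")       # roughly -3.2 dB droop at 20 kHz, before the images
```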

There are also, I seem to remember, some "high end" multi-bit DACs, built to exquisite precision so that they can approach the accuracy of a decent $200 DAC at only $20k, which are a different kettle of fish.
 
You can't get to perfect reconstruction. Only good enough and that's dependent on your ears and your objectives. That can be achieved in more ways than just one.

The first reason is that perfect reconstruction generally cannot be achieved: the mathematics involved requires adding up a sinc function per sample, and the sinc function has values from time = -infinity to time = +infinity. Perfection is not achievable, only good enough.

Perhaps worth pointing out that items like LPs and tapes are also 'quantised', because that's the way reality works. Atoms, molecules, crystals, etc. This has even led to things like ye olde 'fuzzy distortion' argument about analogue kit. Generally from engineers who don't really get QM. :-) But, yes, *electrons* are 'quantised'. As, indeed, are phonons in air. That's the Universe for you... 8-] Hint: don't worry, just enjoy the music.
 
Personally I stopped worrying about DACs and ADCs some time ago. Admittedly I now use Benchmark examples for 'serious' work. But even old CA DACs or the Scarlett 2i2 3rd Gen seem OK to me for general uses. In general, when I hear a problem it's in what is being played or recorded.
 
Hope you don't mind a small deviation from the main topic, but can I ask what the typical problems associated with modern recordings/mastering are, beyond the usual loudness wars stuff? I've been playing around with HQPlayer, which has an apodizing counter, the idea being that if the counter starts to increase a fair bit then you should use an apodizing filter for playback. The manual states that the counter detects possible errors in the ADC/mastering, but nothing more than that.
 
The problems at the recording or remastering stage tend to be due to slider-wagglers who hae nae clue about what all those sliders and effects do. Plus they may well level-compress, clip, etc., because they think it will 'sell'. Well-made LPs, CDs, etc. can sound superb, e.g. the 'Chasing The Dragon' direct cut LPs I have are excellent. It's the nut behind the wheel...

You'd have to explain the "apodising counter" comments. It's not clear to me what in practice is happening in HQPlayer, as I've never used it. I can only *guess* it may refer to intersample overs. But dunno.
 
Thanks, yeah, I guess Jussi (the HQPlayer author) would be the best person for info on the counter. I have some tracks by the Cocteau Twins: the original CD release is loud but does not trip the counter, while on the remastered tracks (not as loud) the counter goes crazy. I'll do some offline upsampling to see what SoX reports.
 
Hope you don't mind a small deviation from the main topic, but can I ask what the typical problems associated with modern recordings/mastering are, beyond the usual loudness wars stuff? I've been playing around with HQPlayer, which has an apodizing counter, the idea being that if the counter starts to increase a fair bit then you should use an apodizing filter for playback. The manual states that the counter detects possible errors in the ADC/mastering, but nothing more than that.
Here's a simple one: microphones.

You can have extended bandwidth, or low noise; pick one.
To get the noise down to about 18 bits of resolution at the former 'Red Book' standard, the microphone will have a bandwidth of about 15 kHz max. And these limits are pretty much physical; nothing to do with the analogue electronics side!

Using extended bit depth in the mastering chain makes sense for obvious reasons - so much more room to play in, noise-free - but there is a hard limit on what we can record in the first place. Bear that in mind next time you get told you 'need' the forthcoming 32-bit, >1-million-tap reviewer's delight.
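As a rough back-of-envelope on those numbers, using the standard 6.02N + 1.76 dB figure for an ideal N-bit quantiser (the microphone figures themselves are from the post above, not mine):

```python
def ideal_snr_db(bits: int) -> float:
    """Theoretical SNR of an ideal N-bit quantiser over a full-scale sine."""
    return 6.02 * bits + 1.76

print(ideal_snr_db(16))   # ~98 dB: Red Book
print(ideal_snr_db(18))   # ~110 dB: roughly where quiet-mic self-noise tops out
print(ideal_snr_db(24))   # ~146 dB: far beyond anything a real microphone delivers
```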
 
I have some tracks by the Cocteau Twins: the original CD release is loud but does not trip the counter, while on the remastered tracks (not as loud) the counter goes crazy. I'll do some offline upsampling to see what SoX reports.

The Cocteau Twins remasters are notoriously bad. So bad they sound like different music to me! The ‘80s CDs are fine, the original vinyl a very good step up from there.
 



