Chord 1,000,000 taps

PCM digital audio is governed by the sampling theorem (https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem), which formally proves that reconstruction of the sampled signal is 'perfect' in the band below Fs/2 when the Sinc(x) function (or sin(x)/x) is used as the reconstruction (low-pass, anti-imaging) filter: https://en.wikipedia.org/wiki/Sinc_function

The sinc function is a single central peak with wiggles to both sides of it in time, damping out the farther removed one is from the center peak. In principle Sinc stretches back from the Big Bang to Armageddon: it is infinitely long in time.

In digital (low-pass) filtering, using oversampling, Sinc is approximated by storing a number of the function's values along the time line. Each such value can be called a 'tap'. (Not exactly so, but you get the idea.)
Thanks for taking the time to explain it, Werner. It's all Greek to me, though o_O:eek::confused:, so I think I'll stick to enjoying the music. :D
 
Is there some technical confirmation you can point to with regard to diminishing returns and the point at which they start to occur?

Wholly agree it’s a marketing gimmick at this point but equally I don’t see the evidence for the above assertion.
There are three examples of DAC reconstruction filters, designed in a Matlab filter design application using 148 taps, 395 taps and 1001 taps, in this Audio Science Review post.

For those who know how to read the filter response graphs, the aim is a flat enough pass-band, enough attenuation in the stop band and a small enough transition band in between.

The first (148 taps) seems to me to be just about OK. The second (395 taps) seems to be very good and the third (1001 taps) seems to be well into over-design unless someone enjoys having a DAC that is state-of-the-art (which is absolutely fair enough IMHO).
 
Mans' post in the above link also nicely shows the functional structure of the filter, and the origin of the term 'tap'.

[Attached image: img593.png, a block diagram of the FIR filter structure described below]


Each Z^-1 stage is a memory element, delaying the sample one clock cycle. The b0, b1, ... are the filter coefficients (ideally forming the Sinc function when laid out along the time axis), to be multiplied with the delayed sample values. The (+) stages then sum all of these values in order to generate one output sample. And so it goes on.
 
If you fancy playing with a variety of filters, including million-tap ones, as well as different dithering algorithms, then you can download HQPlayer and mess around to your heart's content. The trial is free, but you are limited to 30 minutes of playback at a time. The chap who wrote it, Jussi Laako, can be found on various forums discussing the merits of Chord’s approach.

https://www.signalyst.com/consumer.html
 
I’ve no idea of the validity of the tests but I think all comparisons of HQPlayer and M-Scaler I’ve seen have fallen on the side of the Chord. Wonder what Andy or others think of this.
 
Back in 2008, on sabbatical, I spent some time playing with oversampling and downsampling filter designs, thinking there might be a market for ultra-high-accuracy convertors. I went into the low millions of taps (IIRC), but in the end it was not worth it. At any rate, products like iZotope saw the light of day and that was it. I went back to my normal business, which tends to have a bit more (positive) impact on mankind than trying to fit millions of angels on the top of a tap.


Despite your attempt at explaining taps, it still remains beyond my brain to figure out what they actually do. More to the point, I've always wondered how you get a million taps set up within the hardware? It sounds like a very long, drawn-out process to achieve, but like most things I'm sure there must be a (very) quick way to do it?
 
There are three examples of DAC reconstruction filters, designed in a Matlab filter design application using 148 taps, 395 taps and 1001 taps, in this Audio Science Review post.

For those who know how to read the filter response graphs, the aim is a flat enough pass-band, enough attenuation in the stop band and a small enough transition band in between.

The first (148 taps) seems to me to be just about OK. The second (395 taps) seems to be very good and the third (1001 taps) seems to be well into over-design unless someone enjoys having a DAC that is state-of-the-art (which is absolutely fair enough IMHO).

Afraid you lost me by taking ASR seriously.
 
What’s unserious regarding the science of measurement?
Nothing at all, as long as the tests are done properly. The ASR review of the M Scaler (Chord’s 1,000,000-tap upscaler) wasn’t carried out with an optimal DAC, and the reviewer, IIRC, stated that he didn’t believe that upscaling as done in the M Scaler would make any difference to sound quality. It’s fair enough to hold an opinion, but when carrying out a review, surely it must be approached with an open mind to be taken seriously or considered objective?

There is a lot of good stuff on ASR but it does tend to be drowned out by the premeditated, virulent opinions. With all audio the thorny questions of what is audible come into play, and given how we, the end users, differ, it is difficult to be too dogmatic and still be taken seriously. FWIW Rob Watts, Chord’s DAC designer, admits that he was surprised that increasing tap length improved audible sound quality for him. It is not at all surprising that many people can’t hear any difference with the M Scaler in play, but that doesn’t mean that others don’t get any benefit, and not just due to cognitive bias or the placebo effect.

What I have found is that for years I had niggly irritations with digital playback, which led me to trying out various DACs. Since going down the Chord route with the M Scaler into a TT2 and then into a Chord Ultima amp I haven’t felt any need to change anything, and simply settle back to enjoy the jolly old music without distraction from what the hifi isn’t doing quite right. It works for me but just as likely won’t for someone else. Until we really get into serious research on how our minds interpret sound stimuli, and the variation across a wide spectrum of listeners, these debates on audibility will forever rumble on.
 
Consider what 1 million taps mean on a CD file at 44100 samples per second.
Around 20 seconds at one end or the other is going to give invalid results, as there is no data in the pipeline.
I am too old at 64 to hear the subtle differences between various competent DACs, and Rob Watts must be a similar age.
 
Consider what 1 million taps mean on a CD file at 44100 samples per second.
Around 20 seconds at one end or the other is going to give invalid results, as there is no data in the pipeline.
I am too old at 64 to hear the subtle differences between various competent DACs, and Rob Watts must be a similar age.
I’m not sure that I follow your logic there. Are you saying that because you can’t perceive a difference at 64 then it follows that Rob Watts also can’t?
 
I'm saying that in the early 80s, when digital audio was crude and my ears were young, I could very easily hear digital artifacts from the boards we developed.
These days, when the technical performance is FAR better and my ears are older, I find what I still notice are subtle level differences.

The first point was that long tap lengths cause errors towards the ends of a file. These could be heard as differences.
 
For most music, 44.1kHz files already have some inaccuracies baked in, due to the steep filter needed for down-sampling. Maximum accuracy at the playback end is accuracy to a 44.1kHz signal - arguably not the same thing as accuracy to the original analogue.

IMO in the absence of the live analogue feed, hi-res is technically the most accurate thing available. So it's more valid to compare hi-res to 44.1kHz, and do whatever it takes to make 44.1kHz sound more hi-res-y, whether that action is accurate or not.

So far with 44.1kHz, I prefer to use an intermediate phase filter, along with a gradual roll-off from 19kHz or a smidge under. This makes it the most hi-res-y for me, OVERALL.

I've perceived the M Scaler to improve the sound stage, which I found annoying as a possible vindication of Chord's approach of a linear phase, brick-wall filter, which is different from mine in the last paragraph. But this is covered by my "overall" qualification - I just offer my impressions, however incompatible they seem.
 
Afraid you lost me by taking ASR seriously.

Quite. I know what you mean.

But these demonstrations were made by Mans Rullgard. Mans is a digital signal processing expert, with a lot of hands-on experience in quality audio. That this was done on ASR is almost a coincidence. It could have been any forum that he frequents.

You'll probably find other interesting things on his site https://troll-audio.com/author/mans/.


Not everyone on ASR has the same agenda, or ironic (lack of) knowledge.
 
What’s unserious regarding the science of measurement?

In this specific instance it is the suggestion that said measurements tell you something about a product that your ears cannot. That is aided and abetted by the intractable manner of the owner of the site, who has managed to take a theoretically decent idea and turn it into an ideology/comedy.

Hope that clarifies it.
 
This wasn't a debate on sound quality but a question on the technicality of Chord's approach to filtering. From what I can gather from the answers, they are trying to reconstruct a brick wall filter. Whether or not a better filter means a better (or more accurate) sound is another question altogether.

Yes. A brick wall filter is theoretically optimal. It is also not technically difficult to implement, and early CD players did employ such filters to comply with what we understand mathematically about sampling theory.

However, as the measurement of what was being done within the pass-band became more sensitive, manufacturers and testers began to realise that the pass-band effects of genuine brick wall filters were harmful. The 'sharper' the filter, the more of a 'ripple' effect it can cause in the pass band.

Many manufacturers deploy a wide range of techniques to try and emulate the theoretical benefits of a brick wall without causing undue harm in the pass band. Chord's technique is one of many. Other manufacturers, who produce equipment that is both technically and subjectively very accomplished, apply very different techniques and often use much gentler filters.

I own both a Chord Mojo and a Chord Hugo and I find both to be very impressive pieces of kit. I have also heard the Hugo 2 and the Mojo 2 which are excellent.

The criticism that emerges of them, from those who do not enjoy them, is that they are highly analytical sounding. I think that is a very fair criticism but, as it happens, I like to analyse the music I am listening to. I also find the Hugo in particular to be very viscerally involving.

Chord produce some outstanding kit and are a good British success story.
 
As the function values of Sinc quickly approach zero when moving away from the peak, original samples far from the peak contribute ever less to the reconstructed output signal. The original Philips 4x oversampling filter in the early 80s had a few tens of taps, perhaps slightly over 100 (I don't remember). Later over/upsampling filters went to a few hundred taps, at which point the summed far-out samples' contributions would fall below the quantisation noise inherent to the DAC. As this noise is physically limited to 20-22 bits equivalent at best, you have a clear limit here.

Just to be pedantic, the sinc function doesn't actually approach zero quickly. Since it is defined as sin(x)/x, its envelope only falls off in proportion to 1/x as you move away from the peak. However, what is usually used is a windowed sinc function rather than the straight sinc, and yes, that does quickly approach zero by design. The commonly used window function is the raised cosine function. Wikipedia will explain it more accurately than I can, as I'm not an EE; I'm a DSP engineer, so it all happens after I do my stuff - well, except for sample rate conversion, but that's rare in my line of work.

https://en.wikipedia.org/wiki/Raised-cosine_filter
 
Well, no. The million taps live in the post-upsampling domain, 705.6kHz for a CD input signal, if Chord are to be believed.
That means the entire filter is 'only' 1.4 seconds long.

There is no reason not to believe them. This is trivial stuff to implement. If they wanted to use 10 million taps they could; it would just take 10x the CPU power.

To be honest, I'd be more interested to know what resolution they did these calculations at. Assuming 24-bit data and 24-bit filter coefficients, they'd need something like a 64-bit accumulator for their million taps (assuming it's actually 2^20). This might be why they stopped there.

Does it matter? Is this too many taps for the reconstruction filter? I'll leave that to other people with better ears to argue about, but the engineering is sound, which makes a change from much of the other weird foo we generally have to deal with.
 

