Upsampling

Alex S

Interest in yet another dac, the Ferrum Wandla, got me thinking - my dac acquisition has to stop! So, given that a major feature of the Wandla is its HQ filters, I thought I'd try a bit of upsampling with what I have: Audirvana. I did this to see a) whether I can hear any difference, and b) whether I should trial HQPlayer with a view to purchasing it.

So, after a bit of faffing around I started upsampling to the max with Audirvana. Generally, that's PCM 32/768kHz but less for dacs that can't take it. Dacs are Audial S4 (TDA 1541), MHDT Pagoda (PCM 1704 x2), Eversolo A6 (ES 9038 x2) and Topping D70 Pro Sabre (ES 9039). I don't want to nerd around too much, and I know it's not just the chip choice and DSD is the thing, but I'm impressed. I like what I hear with all the dacs, contrary to expectations, and the Topping is great with its Pre90, an amazing combination for the money.
 
Ideally you should be upsampling into NOS DACs.
Different DACs have different optimal settings (32/768 may not be it).
Sabre DACs will oversample further.

You could try HQPlayer; it's free to use in 30-minute periods (perpetually), you just need to restart it after each one. I am happy with the default filters and modulators but there's plenty to play with if you feel like it. I find solo acoustic guitar and solo harpsichord useful for testing filters.
You've just missed the Black Friday / Cyber Monday deals...
 
Interest in yet another dac, the Ferrum Wandla,

It does look interesting, doesn’t it - and with the ability for users to vote for their favourite (and least favourite) filters.

I tried the NOS DAC (Holo Spring) with HQP vs the Hugo TT a few years back. It was a leisurely one-and-a-half-year foray where I’d put one in for a while and then use the other. In the end I left the Hugo in for longer than the Holo and found the countless choices in HQP daunting, as every time something did not sound exactly like I wanted it to I’d consider changing filters.

However, things were far from night and day, and this DAC appeals with its limited set of HQP filters that are optimised for the particular DAC.

.sjb
 
I just "don't get" up sampling DACs from a fundamental viewpoint.

Presumably (and in accordance with Nyquist theory) you can't actually recover information that wasn't captured by the original sampling.

You presumably also must be applying an algorithm which could perfectly well be applied by any computing device (though not necessarily in real time), and the output could be fed to any DAC capable of operating at the upsampled rate.
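As a rough illustration of that point, here is the sort of offline resampling any general-purpose computer can do. This is a minimal Python sketch using scipy's polyphase resampler; the file names and the fixed 4x ratio are illustrative assumptions, not anyone's actual playback chain.

```python
# Minimal sketch: offline upsampling on an ordinary computer (hypothetical file names).
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

data, fs = sf.read("input_44k1.flac")          # assumed 44.1 kHz source
ratio = 4                                       # 44.1 kHz -> 176.4 kHz

# Polyphase resampling: insert zeros between samples, then low-pass below the
# original Nyquist frequency. No information is added, only the rate changes.
upsampled = resample_poly(data, up=ratio, down=1, axis=0)

# Written as 32-bit float WAV so any later peaks above 0 dBFS are preserved.
sf.write("output_176k4.wav", upsampled.astype(np.float32), fs * ratio, subtype="FLOAT")
```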

I just can't see why anybody would think it worthwhile to pay a lot of extra money for a box that could do it in real time if in fact a decent PC couldn't (I bet one with a suitably programmed modern GPU could do way better than Chord's best efforts).

But why bother when a suitably filtered DAC at the original sample rate should reproduce the signal as perfectly as anybody could require?

It makes about as much sense to me as cable lifters, earthing boxes, cotton reels hanging from strings to alter room diffraction or any of the Peter Belt crap.

I'll keep the cash in my pocket and stick with the Topping until they make a sufficiently better Topping to make an upgrade worthwhile!

It might sound different but bear in mind that it isn't necessarily "better" or "closer to the truth".
 
This week I’m going more for the high than the fidelity.

I have Audirvana and a MacBook so it was free for me to try. The last thing I want to be doing though is messing about with filters and sample rates before playing some music. As John says, a few filters that work would be attractive (I can hear no real difference in any ES filter).

I can see the logic in cotton reels hanging from string so I’ll try that next.
 
I just "don't get" up sampling DACs from a fundamental viewpoint.

Presumably (and in accordance with Nyquest theory) you can't actually recover information that wasn't captured by the original sample.

You presumably also must be applying an algorithm which could perfectly well be applied by any computing device (though not necessarily in real time) and the output could be fed to any DAC capable of operating at the up sampled bitrate.

I just can't see why anybody would think it worthwhile to pay a lot of extra money for a box that could do it in real time if in fact a decent PC couldn't (I bet one with a suitably programmed modern GPU could do way better than Chord's best efforts).

But why bother when a suitably filtered DAC at the original sample rate should reproduce the signal as perfectly as anybody could require?

It makes about as much sense to me as cable lifters, earthing boxes, cotton reels hanging from strings to alter room diffraction or any of the Peter Belt crap.

I'll keep the cash in my pocket and stick with the Topping until they make a sufficiently better Topping to make an upgrade worthwhile!

It might sound different but bear in mind that isn't necessarily "better" or "closer to the truth".

DAC performance is measurably better when using external upsampling (at least with HQPlayer).

There's no such thing as "closer to the truth" in audio, only a more accurate reproduction of the recorded signal (which is all you have to play with).
 
The last thing I want to be doing though is messing about with filters and sample rates before playing some music. As John says, a few filters that work would be attractive (I can hear no real difference in any ES filter).

Many people complain about finding "the countless choices in HQP daunting", but the default settings sound perfectly fine and are what most users, as well as its developer, choose.
Also, the manual has a brief explanation of each filter and its use.

Outboard upsampling has the advantages of far greater processing power (leading to better-performing DSP), removing a noise source from inside the DAC, and optimising the bit depth and sample rate for a particular D/A chip or discrete stage.
 
I just "don't get" up sampling DACs from a fundamental viewpoint.

Presumably (and in accordance with Nyquest theory) you can't actually recover information that wasn't captured by the original sample.

You presumably also must be applying an algorithm which could perfectly well be applied by any computing device (though not necessarily in real time) and the output could be fed to any DAC capable of operating at the up sampled bitrate.

I just can't see why anybody would think it worthwhile to pay a lot of extra money for a box that could do it in real time if in fact a decent PC couldn't (I bet one with a suitably programmed modern GPU could do way better than Chord's best efforts).

But why bother when a suitably filtered DAC at the original sample rate should reproduce the signal as perfectly as anybody could require?

It makes about as much sense to me as cable lifters, earthing boxes, cotton reels hanging from strings to alter room diffraction or any of the Peter Belt crap.

I'll keep the cash in my pocket and stick with the Topping until they make a sufficiently better Topping to make an upgrade worthwhile!

It might sound different but bear in mind that isn't necessarily "better" or "closer to the truth".
The only way it makes sense is if the computation needed for the proper filters were expensive enough to implement that manufacturers shied away from doing 'the right thing' to keep costs down.

For example, the first bitstream converter in 1988 had a 4x FIR filter, a 32x linear interpolator and then a 2x S&H filter, and operated on 17-bit values. This was expensive and difficult to implement back then. Today, implementing the same algorithm would be nonsense, as you could do all of it with 32-bit precision and roll it all into the FIR stage.
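For concreteness, here is a rough numerical sketch of that staged chain (4x FIR, then 32x linear interpolation, then a 2x hold) in Python. The filter length and test signal are my own illustrative assumptions; it is the structure, not the original silicon, that is being shown.

```python
import numpy as np
from scipy.signal import firwin, upfirdn

fs = 44_100
t = np.arange(4096) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)       # illustrative 1 kHz test tone

# Stage 1: 4x FIR interpolation (low-pass at the original Nyquist band edge).
h = firwin(255, cutoff=20_000, fs=4 * fs)
y1 = upfirdn(4 * h, x, up=4)                 # the factor of 4 restores the gain lost to zero-stuffing

# Stage 2: 32x linear interpolation between successive samples.
idx = np.arange(len(y1) * 32) / 32
y2 = np.interp(idx, np.arange(len(y1), dtype=float), y1)

# Stage 3: 2x sample-and-hold (each value simply repeated).
y3 = np.repeat(y2, 2)                        # net rate: 256x the input, as in the 1988 part
```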

That said, the 1988 bitstream converter exceeded the resolution of CD source material, so it wasn't a compromise but carefully engineered to work with the available tech. I'd say this was the point when DACs were solved for home music reproduction, and everything that has come since has basically fiddled with a winning formula to little realistic benefit other than bragging rights.

If we want to talk about real benefits that happened after this period, then jitter reduction is a real thing, and that *has* made a significant difference. Notice, though, that you could async-reclock a feed and stick it through that early DAC and get all of the benefits; it's not the filtering that has improved, but other aspects of the chain. This was solved in the mid 90s.
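Since jitter comes up here, a small sketch of why clock timing errors matter: sample an ideal sine at jittered instants and measure the resulting error. The 1 ns RMS figure and the 10 kHz tone are arbitrary assumptions, chosen only to show the scale of the effect.

```python
import numpy as np

fs = 44_100
f0 = 10_000
rng = np.random.default_rng(0)

n = np.arange(1 << 16)
t_ideal = n / fs
t_jittered = t_ideal + rng.normal(0.0, 1e-9, size=n.size)   # assumed 1 ns RMS clock jitter

clean = np.sin(2 * np.pi * f0 * t_ideal)
jittered = np.sin(2 * np.pi * f0 * t_jittered)

err = jittered - clean
# Error power is roughly (2*pi*f0*sigma_t)^2 / 2, about -87 dB for these numbers.
print(10 * np.log10(np.mean(err ** 2)))
```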
 
There's no such thing as "closer to the truth" in audio, only a more accurate reproduction of the recorded signal (which is all you have to play with).

However when examining a 'DAC process' which gets fed with a series of samples and outputs analogue waveforms there is a "closer to the truth" defined by Nyquist. This is that the output analog waveform is the one uniquely defined by the series of samples you started from.

Resampling should just 're-package' the same info from one series of sampling to another. That may - or may not - allow a following DAC process to output a closer representation of the source sample sequence for its analogue output.
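To make the "uniquely defined by the samples" point concrete, here is a minimal Whittaker-Shannon (sinc) reconstruction sketch in Python: it evaluates the band-limited waveform that a sample series defines on a finer time grid. The function name and test data are my own assumptions.

```python
import numpy as np

def sinc_reconstruct(samples, fs, t_fine):
    """Evaluate sum_n x[n] * sinc(fs*t - n) at the times in t_fine."""
    n = np.arange(len(samples))
    # Outer subtraction builds a (len(t_fine), len(samples)) matrix of sinc terms.
    return np.sinc(fs * t_fine[:, None] - n[None, :]) @ samples

fs = 48_000
x = np.random.default_rng(1).standard_normal(64)    # an arbitrary sample series
t = np.arange(0, len(x) / fs, 1 / (8 * fs))          # an 8x finer time grid
y = sinc_reconstruct(x, fs, t)                       # the same waveform, viewed at 8x the rate
```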
 
Something worth mentioning is the possibility of getting intersample overs with upconversion, which requires that the output be attenuated by at least 3dB.

To be entirely 'safe' you'd need more like 6dB. :)

cf https://www.audiomisc.co.uk/HFN/OverTheTop/OTT.html
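A simple way to see the headroom issue is the classic quarter-rate test tone: with a 45-degree phase offset, every sample sits at 0.707 of the true peak, so normalising the samples to 0 dBFS hides a roughly +3 dB overshoot that reconstruction then reveals. A minimal Python sketch (the signal and the 8x upsampling ratio are illustrative):

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44_100
n = np.arange(4096)

# Quarter-rate sine with 45 degree phase: all samples at 0.707 of the real peak.
x = np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)
x /= np.max(np.abs(x))                       # sample peak normalised to 0 dBFS

# Upsampling approximates the reconstructed waveform and exposes the true peak.
y = resample_poly(x, up=8, down=1)
print(20 * np.log10(np.max(np.abs(y))))      # roughly +3 dB: an intersample over
```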

I now use the 'Waveform From Hell' as a test waveform for devices. It does sometimes show up differences which reviewers seem to be unaware exist. The puzzle for me is why so few reviews, etc, check this when it is easy enough to do.
'Not Invented Here' may be a reason for it being AWOL. Dunno. But then I don't have a very high regard for many reviewers...
 
However when examining a 'DAC process' which gets fed with a series of samples and outputs analogue waveforms there is a "closer to the truth" defined by Nyquist. This is that the output analog waveform is the one uniquely defined by the series of samples you started from.

Resampling should just 're-package' the same info from one series of sampling to another. That may - or may not - allow a following DAC process to output a closer representation of the source sample sequence for its analogue output.

There's no perfect(ly accurate) filter in practice, only in theory.
 
To be entirely 'safe' you'd need more like 6dB. :)

Yes, I use 6dB.
HQPlayer will warn you if the track needs more attenuation than what's been selected.

On a side note, HQPlayer will also let you know if the track will benefit from the use of an apodizing filter.
 
While we have the experts here.

What is the difference between ‘upsampling’ and ‘oversampling’?

Why is it that more than 30 years ago we had 16/32/64 times oversampling, but ‘upsampling’, as it’s termed, is only fairly recent?

What possible disadvantages could there be to ‘oversampling’?

Why does ‘oversampling’ not simply solve the 20kHz filter problem?

Thanks
 
There's no perfect(ly accurate) filter in practice, only in theory.

In theory you're wrong, but in practice they do seem to be black swans. However, given that no ADC or other component is 'perfect' either, I don't personally lose too much sleep over that. Particularly given the imperfections of microphones in the first place, and speakers+rooms. Those, and how something gets 'mixed' by studio blokes, seem to matter vastly more in most cases.
 
I have a NOS DAC, and I prefer to upsample everything, although occasionally I (randomly) don't. HQPlayer, Audirvana, macOS native upsampling via Audio MIDI Setup... it all sounds good to me. But as "the theory" also influences what you hear, I try to ignore it these days (not as if I ever did properly understand Nyquist and all that).
 
What is the difference between ‘upsampling’ and ‘oversampling’?

What possible disadvantages could there be to ‘oversampling’?

An oversampling process may simply add its own imperfections. Thus it isn't always any kind of cure-all. It may simply be a waste of time and money, and you'd be better off spending more on, say, your speakers.

Correctly sampled, say, 48k material means there is NO INFORMATION left in the series of samples which tells you about any original components above 24kHz. Ideal reprocessing of the 48k series up to a higher rate can't recover what isn't in the 48k series. Nor can it 'know' what filtering was employed during the process of creating that 48k series. You can decide to 'guess' and tell it to fiddle about on that basis. But you'd then need to 'know' something that you have to guess/deduce/etc. (Ahem!) This is what 'tone controls' are for. 8->
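A quick numerical check of that statement, sketched in Python: upsample a stand-in for 48k material to 192k and look at how much energy sits above 24kHz afterwards. The noise test signal and 4x ratio are my own assumptions.

```python
import numpy as np
from scipy.signal import resample_poly

rng = np.random.default_rng(2)
x48 = rng.standard_normal(1 << 16)              # stand-in for correctly sampled 48k material
x192 = resample_poly(x48, up=4, down=1)         # 'upsample' to 192k

spec = np.abs(np.fft.rfft(x192))
freqs = np.fft.rfftfreq(len(x192), d=1 / 192_000)

above = spec[freqs > 24_000].mean()
below = spec[freqs < 24_000].mean()
# A large negative number: nothing real above 24 kHz, only the resampler's filter leakage.
print(20 * np.log10(above / below))
```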
 
Jim, if you oversample to, say, 2x, does that mean your filter can now be up at 44kHz rather than 22kHz?
 
Jim, if you oversample to, say, 2x, does that mean your filter can now be up at 44kHz rather than 22kHz?

Yes. But that may not alter the actual result because you have not changed the waveform defined by the series of samples. What you MAY have done is make it slightly easier to get the 'correct' overall reconstruction filter shape which was needed for getting what the samples define. Devil in the details.
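One way to see the "slightly easier filter" point is to compare how long a filter has to be when the transition band is narrow versus wide. This sketch uses scipy's Kaiser-window estimate for a digital FIR as a stand-in; the same trade-off applies to the analogue reconstruction filter after the DAC. The 100 dB ripple spec is an assumption.

```python
from scipy.signal import kaiserord

ripple_db = 100                                   # assumed stopband attenuation target

# No oversampling: pass 20 kHz, stop by 22.05 kHz, at fs = 44.1 kHz.
taps_1x, _ = kaiserord(ripple_db, (22_050 - 20_000) / (44_100 / 2))

# 2x oversampling: pass 20 kHz, stop by 44.1 kHz, at fs = 88.2 kHz.
taps_2x, _ = kaiserord(ripple_db, (44_100 - 20_000) / (88_200 / 2))

print(taps_1x, taps_2x)                           # roughly 140 vs 25 taps for these numbers
```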

One advantage of high upsampling ratios is that the output can then have fewer bits per sample, which may let the process deliver less added distortion. cf Rob Watts, etc.
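As a toy illustration of trading sample rate for bits per sample, here is a first-order error-feedback quantiser in Python: it requantises an oversampled signal to a few bits while pushing the added quantisation error towards high frequencies. Real DAC modulators are far higher order; the rates and bit depth here are assumptions.

```python
import numpy as np

def noise_shape(x, bits):
    """First-order error-feedback quantiser: y[n] = Q(x[n] + e[n-1])."""
    step = 2.0 / (2 ** bits)                # quantiser step for a +/-1 full-scale signal
    y = np.empty_like(x)
    err = 0.0
    for i, v in enumerate(x):
        target = v + err                    # feed back the previous quantisation error
        y[i] = np.clip(np.round(target / step) * step, -1.0, 1.0)
        err = target - y[i]
    return y

fs = 44_100 * 16                            # an assumed 16x oversampled rate
t = np.arange(1 << 15) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)
y = noise_shape(x, bits=6)                  # 6-bit output; the in-band error stays small
```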

BTW if anyone wants to play with "The Waveform From Hell" there is a file of a short burst here
http://jcgl.orpheusweb.co.uk/temp/WFH-01dB44k.flac
NOTE: use with care. It should give 'overs' to about +5dBFS if correctly rendered. That's a lot.

e.g. One possible problem with studio-generated rock/pop is intersample overs. Generating an oversampled/upsampled version can show these up. In some cases it can then reconstruct them cleanly if you downshift (attenuate) the input into a series of samples with more bits per sample. Otherwise you rely on the DAC's reconstruction filter.
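Continuing the earlier quarter-rate sketch, this shows the "downshift" idea: attenuate the input by 6dB (keeping it in a higher-resolution representation, here floating point) before upsampling, and the roughly +3dB intersample peak is reconstructed without running out of number range. Values are illustrative.

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44_100
n = np.arange(4096)
x = np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)
x /= np.max(np.abs(x))                        # sample peak at 0 dBFS; true peak ~ +3 dB

x_down = x * 10 ** (-6 / 20)                  # 6 dB of headroom, kept in (32-bit) float samples
y = resample_poly(x_down, up=8, down=1)
print(20 * np.log10(np.max(np.abs(y))))       # roughly -3 dBFS: the over now fits within range
```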

Beats me why reviewers tend to ignore this, but there you go.

BTW the humble Scarlett 2i2 3rd Gen copes quite well with overs. So I'd expect any decent audio DAC to also cope. But, alas, what we expect isn't always what we get sold...
 

