What is ASIO & Kernel streaming on XP?

Linds

pfm Member
Please advise what needs to be clicked, banged, punched and tapped upon to implement ASIO and/or Kernel streaming on an XP PC.

Thanks.
 
Cool, thanks Michael. So, if I'm using Spotify on an XP PC, via a USB DAC, what do I need to do to 'maximise' sound quality? I think I'm already bypassing KMixer, but as with all these things (since the demise of the ZX Spectrum) you need a degree in Computer Science, even if you use the damn things every day.
 
What is this site?

I've just read a load of twaddle about the Squeezebox. The article was full of irrelevant speculation and bad spulling.

I had a really good laugh at this bit from the computer audio vs. CD audio page:

"If you like detailed sound that is devoid of colouration and interpretation, then it's easy to be satisfied with a computer system because accuracy is pretty easy for a computer. Delivering bit-perfect data is its main task after all."

What absolute drivel!

There are postings all over the Spotify forum asking for the ability to output to Kernel Streaming or ASIO. As it stands, it is locked to DirectSound output (so even if you're using other software that bypasses KMixer, Spotify won't be), meaning you'd need some hijacking program like the one mentioned in the link above to get the Spotify stream unadulterated into your soundcard/DAC. ASIO/KS capability has been promised in some future version, but that was well over a year ago now, so I wouldn't hold my breath.
 
I am using the Fidelify beta add-on for Spotify with an EMU 1820M and Reaper. It sounds great and works flawlessly at 5.7 ms latency, but it will freeze PatchMix if you fiddle with it too much. I am using VST plug-ins in Reaper for KRK monitor output, and high- and low-pass filters within PatchMix for subwoofer output to a KRK sub.

They are working on a rewrite in C++ to solve issues with .NET. I never could get the add-on to work until I switched over to my EMU 1820M.

Watch this programmer closely. I think he has something big here.



http://www.fidelify.net/
 
How about with Win7, is it worthwhile to use any special add-ons?

I must ask what the point of bit-perfect playback is when the source is a lossy codec?

I already make sure the device is set to 44.1kHz in Windows Sound settings... although I wonder if the streamed content is 48kHz, even though my local files aren't?
 
What the Win 7 mixer does is still somewhat shrouded in mystery, but as far as I can gather, it is bit-perfect unless it needs to perform a sample rate conversion in order to - well - mix audio from more than one source. (And you get a choice of sample rates and bit depths via control panel for when this is necessary.) I set it to 96kHz, 24 bit for my Audioengine D1.

Now, w.r.t. the dreaded XP KMixer, I still have a machine upstairs running this. I can bypass the KMixer using ASIO4ALL, but Foobar and the D1 don't seem to get on, so I tend to just use Winamp via the default DS output. And it sounds absolutely fine. Keep all the software volume controls at 100% and be happy. Most of these bit-perfect plugins cause more trouble than they're worth: many have stability issues, and they don't really make any difference.
 
You know it occurs to me that bit-perfect is actually a bad thing. As we know, a lot of music is recorded with peaks reaching 0dB FS and this causes great distortion in over-sampled DACs. So, any recordings that hit 0dB on a regular basis will sound significantly better if the output level is reduced by about 3dB - 6dB in the digital domain before it reaches the DAC. No longer bit-perfect but much less distortion!
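As a rough illustration of that kind of digital attenuation (a Python/numpy sketch, not any particular player's implementation), a fixed gain applied before the DAC simply scales and re-quantises the samples, at the cost of bit-perfection:

```python
# Illustrative only: a fixed digital attenuation applied before samples reach the DAC.
import numpy as np

def attenuate_pcm16(samples: np.ndarray, db: float = -3.0) -> np.ndarray:
    """Scale 16-bit PCM by `db` decibels and re-quantise (no longer bit-perfect)."""
    gain = 10.0 ** (db / 20.0)                       # -3 dB is roughly x0.71
    scaled = samples.astype(np.float64) * gain
    return np.clip(np.round(scaled), -32768, 32767).astype(np.int16)

# A full-scale peak comes out at about 0.71 of full scale, leaving headroom for
# any overshoot in the DAC's reconstruction filter.
print(attenuate_pcm16(np.array([32767, -32768], dtype=np.int16)))
```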
 
You know it occurs to me that bit-perfect is actually a bad thing. As we know, a lot of music is recorded with peaks reaching 0dB FS and this causes great distortion in over-sampled DACs. So, any recordings that hit 0dB on a regular basis will sound significantly better if the output level is reduced by about 3dB - 6dB in the digital domain before it reaches the DAC. No longer bit-perfect but much less distortion!

I would be interested to know why OS DACs would have this particular problem. I do know that some CD players clip at 0dBFS, and I've rejected a number of ADC/DACs for the same thing, but don't know why it should affect OS DACs more than NOS ones.

I normalise all my own recordings at -1dBFS to avoid this problem anyway.

S.
 
Most digital signal processing requires maths far beyond my ability to fully understand, but my basic understanding is that the process of oversampling requires some headroom to work: just like sample rate conversion, it changes the shape of the waveform and can create even higher peaks.

I'd advise at least -3dB normalization because the analog waveform can peak quite a bit higher than the maximum digital sample.
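For what it's worth, the effect is easy to show numerically. Here is a rough sketch (Python, with numpy and scipy assumed; the resampler is only a crude stand-in for a DAC's oversampling filter): a sine at fs/4, phased so every stored sample sits exactly at full scale, reconstructs to peaks around 3dB over.

```python
# Sketch: samples at 0 dBFS can reconstruct to ~+3 dBFS (the classic fs/4 sine case).
import numpy as np
from scipy.signal import resample_poly

fs = 44100
n = np.arange(4096)
x = np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)   # every sample is +/-0.707
x /= np.max(np.abs(x))                                   # "normalise" the samples to 0 dBFS

y = resample_poly(x, 4, 1)                               # crude 4x oversampling/reconstruction
true_peak_db = 20 * np.log10(np.max(np.abs(y[200:-200])))  # ignore filter edge effects
print(f"sample peak: 0.00 dBFS, reconstructed peak: {true_peak_db:+.2f} dBFS")  # about +3 dB
```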
 
Most digital signal processing requires maths far beyond my ability to fully understand, but my basic understanding is that the process of oversampling requires some headroom to work: just like sample rate conversion, it changes the shape of the waveform and can create even higher peaks.

I'd advise at least -3dB normalization because the analog waveform can peak quite a bit higher than the maximum digital sample.

The maths does require the internal calculations to be more than 16 or 24 bit, which is why many DSPs operate at 32 or 48 bit or even floating-point internally, but then dither down to 16 or 24 bit. It's this dithered signal that then gets converted to analogue, so it shouldn't matter at that point what's happened to it before the conversion. I accept that the DSP may be inadequate for the task, which may well have been the case 15 years ago, but since the availability of the Sharc series of processors, these should be fully capable.
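A rough sketch of that "work in high precision, then dither back down" step (Python/numpy, TPDF dither; purely illustrative, not any specific DSP's implementation):

```python
# Illustrative requantisation: process in floating point, add ~1 LSB of TPDF dither,
# then round back to 16 bit for the converter.
import numpy as np

def requantise_16bit(x: np.ndarray, rng=None) -> np.ndarray:
    """x is float audio scaled to +/-1.0, e.g. the output of 32/48-bit processing."""
    if rng is None:
        rng = np.random.default_rng(0)
    lsb = 1.0 / 32768.0
    tpdf = (rng.random(x.shape) - rng.random(x.shape)) * lsb   # triangular-PDF dither
    return np.clip(np.round((x + tpdf) * 32767), -32768, 32767).astype(np.int16)
```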

The issue of the analogue voltage level exceeding the 0dBFS standard level is well known, and is due to the signal having a finite bandwidth. Any competent DAC should allow some 3-6dB of extra analogue headroom, but few do. It's possibly because so much is done with 3V logic, with converters run off USB power and so on, that we get this problem. It's a real problem, but one that really shouldn't exist.

S.
 
The maths does require the internal calculations to be more than 16 or 24 bit, which is why many DSPs operate at 32 or 48 bit or even floating-point internally
S.

But doesn't any 16-bit signal still get dithered to a higher bit-depth with 0dB FS remaining at 0dB FS? There is thus no extra headroom in the digital domain, only higher resolution.

The problem, I think, stems from the fact that the highest level digital sample does not always represent the highest point of the analog waveform. When the signal is then processed, for example in up-sampling, new digital samples may need to be created that are higher, but since the digital data was already normalized to 0dB FS this can't happen.
 
But doesn't any 16-bit signal still get dithered to a higher bit-depth with 0dB FS remaining at 0dB FS? There is thus no extra headroom in the digital domain, only higher resolution.

The problem, I think, stems from the fact that the highest level digital sample does not always represent the highest point of the analog waveform. When the signal is then processed, for example in up-sampling, new digital samples may need to be created that are higher, but since the digital data was already normalized to 0dB FS this can't happen.

Yes, but the DSP can be ranged so that 0dBFS doesn't stay at 0dBFS as the FS part can change. That's what floating point maths does, if I've understood it correctly. Similarly, with fixed point maths, if the maths is done at 32 or 48 bit, then the whole 16 bit original can be ranged.

At least, that's my understanding of the way DSP works, but I too struggle with the maths, especially as different DSPs work in different ways.

S.
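As a toy illustration of that ranging idea (Python/numpy; the intermediate value here is made up purely for the example): intermediate results are free to exceed the original full scale because the working format has headroom, and a decision only has to be made when converting back down.

```python
# Toy example: the working format (float here) has headroom above the 16-bit full scale.
import numpy as np

x16 = np.array([32767, 32767], dtype=np.int16)        # two adjacent full-scale samples
work = x16.astype(np.float64) / 32768.0               # ~ +1.0 in the working format

intersample_peak = work.mean() * 1.414                # a made-up intermediate above full scale
print(intersample_peak)                               # ~1.41: fine in float, unstorable as int16

# Only at the final conversion back to 16 bit must the DSP either scale the whole
# signal down (losing bit-perfection) or clip the over:
print(np.clip(np.round(intersample_peak * 32767), -32768, 32767))   # clips at 32767
```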
 
I can't see that working in a real-time application like a DAC, as it would cause a level shift at the output after conversion, unless the analog circuit compensated for the digital scaling.

This video explains the problem better than I can - http://www.youtube.com/watch?v=BhA7Vy3OPbc

It doesn't need to because the dithering back to 16 or 24 bit before the analogue conversion restores levels.

As to it working in real-time applications, it does in all professional digital mixers. It even works with pretty low latency so as not to upset AV lip-sync.

S.
 
The distortion from inter-sample overs is generally a non-issue.

If the recording was heavily limited or even shredded to begin with, then the additional distortion from the inter-sample over will not be noticed in the forest of that limiting.

If the recording reaches only occasionally 0dBFS in an illegal way then the amount of inter-sample distortion will be tiny (the delta amplitude), instantaneous (a few samples at most), occasional (only the recording's very highest peaks), and masked (by said peaks).


The root cause of inter-sample overs, and indeed of many digital signal processing errors and mistakes, is:

the samples as stored/conveyed are NOT the actual signal, but a particular representation of the information required to get back to the signal.

As such the raw stream of samples should never be regarded or processed as if it were the signal.

The signal emerges only after reconstruction. Reconstruction is the application of one (particular, well-defined and well-known) interpolation through the samples. As this interpolation snakes through the samples (connecting the dots), and as the sampling theorem's anti-aliasing requirement ensures that flat lines cannot exist, it follows that between any two samples at exactly the same level the interpolated line will have to exceed this level, ergo inter-sample 'overs'.

Normalisation should be done using a reconstructed proxy of the signal to infer the required gain.
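A sketch of what normalising against a reconstructed proxy might look like (Python, with numpy/scipy assumed; the 4x oversampler is an arbitrary stand-in for proper reconstruction, and the function name is just for illustration):

```python
# Sketch: infer the normalisation gain from an oversampled proxy of the signal,
# so the *reconstructed* peak, not the raw sample peak, lands at the target level.
import numpy as np
from scipy.signal import resample_poly

def true_peak_normalise(x: np.ndarray, target_dbfs: float = -1.0) -> np.ndarray:
    """x: float audio in +/-1.0."""
    proxy = resample_poly(x, 4, 1)                 # crude reconstruction proxy
    true_peak = np.max(np.abs(proxy))
    gain = 10.0 ** (target_dbfs / 20.0) / true_peak
    return x * gain
```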


--

The magnitude of the overs can be as much as a few dB, but generally this only occurs in a few pathological cases that have not much to do with real music.

One could guard against this by scaling back the DAC's oversampling / reconstruction filter. That this is not often done is because it really is not much of an issue, for the above reasons.
 
It doesn't need to because the dithering back to 16 or 24 bit before the analogue conversion restores levels.


S.

I suppose you are right about that with standard dithering and perhaps a fixed amount of scaling, but when the process incorporates a delta-sigma modulation element I'm not sure you can rely on being able to represent the higher waveform peaks in the 32-bit data without also scaling the 16-bit data down. Does that make any sense!?

Bear in mind we are talking about the inner workings of a DAC chip, not a mastering processor or something with significant processing power or allowance for latency. My understanding of how exactly that works is not good enough to really continue this line of conversation. You talk about processing at 32-bit and then dithering back to 16-bit, but given that the actual DAC inside the chip tends to be no more than 6 bits, run with oversampling and modulation, is this actually what happens? Confuses me! :confused:
 
The distortion from inter-sample overs is generally a non-issue.

If the recording was heavily limited or even shredded to begin with, then the additional distortion from the inter-sample over will not be noticed in the forest of that limiting.

You might be right, but the level of distortion with some signals is high, as shown in that video: about 3%-10%.

If the recording reaches only occasionally 0dBFS in an illegal way then the amount of inter-sample distortion will be tiny (the delta amplitude), instantaneous (a few samples at most), occasional (only the recording's very highest peaks), and masked (by said peaks).

Indeed, the video seems to use a sine wave specifically created to cause the problem, so it doesn't say much about how often this will occur in real music, even if it is compressed and normalized a lot.

The video later shows how the same problem causes grief in lossy codecs, which I think is much more of a current issue.
 

