MQA pt II

A well-written report, Jim. Also very damning. MQA does not deblur... it actually adds blur!!

Quote below is from Jim's article linked above.
  • The MQA system is meant to avoid 'blurring' (their term) the time-domain details of the waveforms. Yet the above looks like going through Tidal's processing has led to both of the resulting output versions being 'blurred' with an asymmetric time window of up to 1 or 2 milliseconds.
  • The MQA and non-MQA versions look to be quite close to identical.

One point to clarify: Note that at the above point I was talking about the obvious extent of the impulse 'tail'. The actual dispersion of the overall frequency response is more like 0.3 ms.
 
I need to play catch-up with various other things, e.g. a bucketload of files to back up, and various real-world 'housework' to do, etc. So I'll spend more time on that for a few days. But while doing so I'll keep playing with the Meridian MQA DAC, etc., and, erm, explore my way towards being able to do some calibrated measurements and comparisons using that.

I'll probably do the precise tests using a decent fancy laptop I got a while ago, as that will help me check how the DAC's noise floor varies with its power source, etc. I can use that on battery power as well as mains. Past experience shows this can matter with some other DACs. Indeed, I now have a favourite model of externally powered USB hub as it is useful at times to employ one.
 
No problem, we just need to wait for the main (best) proponent of MQA to come back. He has so much to answer since he last posted.
 
Not even in jest. But I'm sure that someone who had sensible points would make them sensibly, rather than loving the sound of their fingers tapping on a keyboard....
 
Will we get to an "MQA III" thread when this one tops a thousand? 8-]

The quality of a vacuum is indicated by the amount of matter remaining in the system, so that a high quality vacuum is one with very little matter left in it.

Yes you will.
 
Can I take it the folded (originally) hi-rez file is low-pass filtered to give a similar effect to that of the all-pass filtering the original 44.1 files are given?

Near-impossible to tell with this material: the band of high-frequency noise in the hi-res MQA file, presumably a consequence of GS overloading the encoder, totally washes out both the encoded and the decoded/unfolded impulse response.
 
Actually, as I think I pointed out, I suspect that the asymmetry in the final cross-correlation *is* a sign that the same sort of dispersion *is* being applied.

All I did was to use sox to apply its filtering to get versions without the HF. The source and MQA-decoded versions were then cross-correlated. And the results show an asymmetry that looks like what you'd expect if the audio has been dispersed by its pass through MQA encoding and decoding, i.e. consistent with the 22k-content version results. sox will have applied the same filter to both files, so it is nominally common-mode.

This result used the wideband noise in the material, not the impulse. Wideband noise is the ideal source for DFTS. That's why I chose the section that included it. And it gave the central maximum a nice 'spike', which is asymmetric. Losing the HF widens the spike in time. The result isn't absolute proof, but it is a smoking gun. It is what you'd expect dispersion to generate in the cross-correlation.
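For anyone who wants to repeat the exercise, here is a minimal Python sketch of the cross-correlation step. This is not Jim's actual script: the filenames are hypothetical, both files are assumed to be mono WAVs at the same sample rate, and the soundfile/scipy packages are assumed installed.

```python
# Sketch: cross-correlate the sox-filtered source with the sox-filtered
# MQA-encoded/decoded version and inspect the region around the peak.
# Filenames here are hypothetical placeholders.
import numpy as np
import soundfile as sf
from scipy.signal import correlate

src, fs = sf.read("source_lowpassed.wav")   # e.g. after: sox in.wav out.wav sinc -20k
out, _ = sf.read("decoded_lowpassed.wav")
if src.ndim > 1: src = src[:, 0]            # take one channel if stereo
if out.ndim > 1: out = out[:, 0]

n = min(len(src), len(out))                 # e.g. a ~22 s section
src, out = src[:n] - src[:n].mean(), out[:n] - out[:n].mean()

xc = correlate(out, src, mode="full", method="fft")
xc /= np.sqrt(np.dot(src, src) * np.dot(out, out))   # normalise so peak <= 1

c = np.argmax(np.abs(xc))
print(f"peak {xc[c]:.4f} at lag {c - (n - 1)} samples")
print("left of peak :", xc[c-5:c])          # dispersion shows up as a
print("right of peak:", xc[c+1:c+6])        # left/right asymmetry here
```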
 
FWIW a lot of my work on instruments and signal processing when an 'academic' (sic) used interferometry and coherence. It occurs to me to point out the following:

Wideband noise is a good source when you want to observe the effect of a 'filter' or similar process. In the simplest case, if you have full-band 'white' noise and then cross-correlate two copies of that specific noise pattern, you will get a sinc function as the resulting interferogram. For sampled data this actually comes out looking like an impulse, because the other offsets are an integer number of samples away, so they hit the zeros of the sinc function.
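For the curious, a toy numpy illustration of that point (the seed and length are arbitrary):

```python
# The autocorrelation of full-band sampled white noise looks like a
# single-sample spike: the underlying sinc has its zeros exactly on
# the other integer sample offsets.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.standard_normal(5000)

xc = np.correlate(noise, noise, mode="full")
xc /= xc.max()

c = len(noise) - 1                       # zero-lag position
print("peak at zero lag:", xc[c])        # 1.0 by construction
print("neighbouring lags:", xc[c-3:c])   # only a small residual 'noise level'
```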

So if the (sox-filtered) versions of the input full-range file and the output MQA-encoded-decoded full-range file were the same, the interferogram (cross-correlation) would have been essentially a single spike with no systematic side patterns. Seeing a pattern that is asymmetric indicates one 'version' differs from the other in a dispersive manner.
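To see that asymmetry signature in isolation, here is a sketch that passes the same noise through a simple dispersive (all-pass) filter and cross-correlates against the original. The filter and its coefficient are arbitrary stand-ins for demonstration, not MQA's actual processing:

```python
# A first-order digital all-pass leaves the magnitude spectrum alone but
# delays different frequencies by different amounts (dispersion). Its
# cross-correlation against the undispersed original is visibly one-sided.
import numpy as np
from scipy.signal import lfilter, correlate

rng = np.random.default_rng(1)
x = rng.standard_normal(20000)

a = 0.5                                  # arbitrary coefficient, |a| < 1
y = lfilter([a, 1.0], [1.0, a], x)       # H(z) = (a + z^-1) / (1 + a z^-1)

xc = correlate(y, x, mode="full", method="fft")
xc /= np.abs(xc).max()

c = np.argmax(np.abs(xc))
print("left of peak :", xc[c-4:c])       # the pattern either side of the
print("right of peak:", xc[c+1:c+5])     # peak is clearly not mirror-symmetric
```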

In practice, with a finite series of samples you get a 'noise level' which falls as the extent of the noise 'burst' increases. But given thousands of samples this is much lower, and looks like noise throughout the interferogram.

(In the specific case being discussed, the 'triangle hill' which the spike sits on stems from a squarewave that is in the span of the cross-correlated samples.)

Hope that helps. Maybe I forget that I've had decades of devising and using such approaches.
 
Near-impossible to tell with this material: the band of high-frequency noise in the hi-res MQA file, presumably a consequence of GS overloading the encoder, totally washes out both the encoded and the decoded/unfolded impulse response.

FWIW a lot of my work on instruments and signal processing when an 'academic' (sic) used interferometry and coherence. It occurs to me to point out the following:

So if the (sox-filtered) versions of the input full-range file and the output MQA-encoded-decoded full-range file were the same, the interferogram (cross-correlation) would have been essentially a single spike with no systematic side patterns. Seeing a pattern that is asymmetric indicates one 'version' differs from the other in a dispersive manner.
Looking at both these answers, I was wondering whether Jim's finding is consistent with what one would expect if an MQA file had had a minimum-phase filter applied around 44/48 kHz, either while decimating a 176/192 kHz file, or in order to "deblur" a native 88/96 kHz file using a similar process to that mentioned by Werner in the case of a 44.1/48 kHz native file?
I ask that question because I had assumed from what is known about the MQA process (including Werner's description of the process above in this thread) that such a filter was bound to be used at least in the decimation example, and that it would inevitably introduce some group delay. But I'm not sure how much effect it would have below 22/24 kHz.
 
Also, Jim, are you confident that the white noise would not have overloaded the MQA encoder, i.e. that it won't be susceptible to the criticism levelled at the other test components of the GO file?
 
... if an MQA file had had a minimum-phase filter applied around 44/48 kHz ... in order to "deblur" a native 88/96 kHz file using a similar process to that mentioned by Werner in the case of a 44.1/48 kHz native file?

Picking nits, but if there is something going on like this it would be an all-pass, and not a minimum-phase filter.
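The distinction is easy to check numerically. A quick sketch (the coefficient is arbitrary):

```python
# An all-pass has |H| = 1 at every frequency and only shifts phase;
# a minimum-phase filter necessarily alters the magnitude response too.
import numpy as np
from scipy.signal import freqz

a = 0.6                                 # arbitrary coefficient
w, h_ap = freqz([a, 1.0], [1.0, a])     # first-order all-pass
w, h_mp = freqz([1.0, a])               # minimum-phase FIR (zero inside unit circle)

print("all-pass  |H| spread:", np.ptp(np.abs(h_ap)))  # ~0, i.e. flat magnitude
print("min-phase |H| range :", np.abs(h_mp).min(), "to", np.abs(h_mp).max())
```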

that such a filter was bound to be used at least in the decimation example, and that it would inevitably introduce some group delay.

I can't see what your origin is for this 'bound'.

MQA decimation (192k to 96k) is bound to be minimum phase and leaky because that is what they believe in, but this does not pertain to an 88.2k or 96k original.

I believe I've also established a long time ago that the split/join filters operating at 22.05k (or 24k) to enable the folding very likely would extend both sides of their nominal cut-off point ('leaky'). I do not remember concluding that these filters would have to be non-linear-phase, though.

Also, Jim, are you confident that the white noise would not have overloaded the MQA encoder, i.e. that it won't be susceptible to the criticism levelled at the other test components of the GO file?

Actually I don't think the encoder was overloaded to the point of breaking: it was driven hard, much harder than music would do, and thus its response is untypical in that there is now simply too much of the shaped noise MQA employ to hide the jewels in.
 
Also, Jim, are you confident that the white noise would not have overloaded the MQA encoder, i.e. that it won't be susceptible to the criticism levelled at the other test components of the GO file?

What I can say is that the peak of the xc goes *very* close to 1. This implies that the input and output are very similar. The cavil is that this only applies to the sox-filtered versions which remove most of the HF cruft. So any 'overloading' hasn't affected this much I suspect. Any peak clipping, for example, would have dropped the peak.
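As a rough sanity check of that last point, here is a toy sketch showing how hard clipping pulls the normalised zero-lag correlation down from 1 (synthetic noise, arbitrary clip level, not the actual test files):

```python
# A normalised correlation peak near 1.0 means two aligned signals are
# very similar; clipping the peaks of one copy visibly drops the value.
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(50000)

def norm_corr(a, b):
    a, b = a - a.mean(), b - b.mean()
    return np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b))

print("identical copy:", norm_corr(x, x))                      # exactly 1.0
print("clipped copy  :", norm_corr(x, np.clip(x, -1.0, 1.0)))  # noticeably below 1
```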
 
RoA.
I am still curious as to your position on MQA as a music codec.

Apologies for the late reply, I wasn't trying to ignore you :).

The answer is I like some of the results of it. Unfortunately my new Actives don't fully unfold/decode/de-blur ... (did I forget anything?) so listening is restricted to my smaller room system.

It doesn't worry me much as a weapon of world domination though so I can still sleep at night.
 
Thinking about this, I suspect we can actually now xc the full-fat input and output. That will lower the peak because the garbage at HF isn't in the source. But that'll probably end up as an interferogram noise floor, and the peak may still poke up clearly. So maybe I or someone else should try this. FWIW my xcorrs were about 22 sec long IIRC. Longer would reduce this 'noise floor' but would also reduce the contribution of the noise burst in the source. Probably worth a try. But I won't be doing it soon as I'm still catching up with other things.

People may find similar xc's on the 2L files of interest. :)
 

I believe I've also established a long time ago that the split/join filters operating at 22.05k (or 24k) to enable the folding very likely would extend both sides of their nominal cut-off point ('leaky'). I do not remember concluding that these filters would have to be non-linear-phase, though.

I don't think they have to be. I suspect it's just something they want done.
 
In a hurry, so need to check when I can, but...
http://jcgl.orpheusweb.co.uk/temp/EvenMoreInteresting.png
Just got this xc between GO's original and unfolded files, i.e. as provided, not processed down by sox.
Looks familiar?...

As this is 88k the time span is half that of the 44k sox-downsampled files. So it omits some of the file that the earlier xc included, i.e. no squarewave. However, if I've not made an error, we should be grateful that MQA does a decent job of making its process noise statistically like noise, so an xc can see through it. 8-]

Good idea if others can try something similar to check as well. I'll give more details (or correct the above if I messed up!) later on, but I have to do some other things first. Just thought the result was 'interesting' so gave in to the urge to make it available.
 
Brilliant!

So it appears safe to assume now that MQA applies EP3029674A1 to both 1x and 2x rate source material, or at least in the cases where MQA assume that the recording anti-alias filter was linear phase (which is 99.9% of cases today anyway).
 

