

MQA pt II

OK, here is 'page 2'.

http://www.audiomisc.co.uk/MQA/GoldenOne/ChallengeAndResponse.html

This concentrates on the GoldenOne files. They do show some useful points, but are difficult to use beyond a certain level.

I now have a wider range of 2L examples, and an MQA DAC. Have some other things to catch up with first. But will start using these and see how I get on. Have various possible experiments/tests in mind, but initially I need to 'calibrate' what the DAC does in more normal terms so I can allow for that when it comes to MQA vs plain vanilla.
 
Thanks.

But the filter produces a dispersion of around 0·2 msec across the range from near-dc to 22kHz. What isn’t clear is: Was this dispersion added by Tidal for some reason? If so, why? If not, was it added by the MQA encoding, and if so, why?

I posted the answer to that already here https://pinkfishmedia.net/forum/posts/4378936/. It is an all-pass filter adding delay to the high treble, so as to move ADC-originated pre-ringing to behind the main impulse.
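For anyone who wants to see the mechanism, here is a minimal first-order all-pass sketch in Python/SciPy. It only illustrates the principle (flat magnitude, frequency-dependent delay, strictly causal smearing); the coefficient is made up and this is not MQA's actual filter:

```python
import numpy as np
from scipy import signal

# First-order all-pass: H(z) = (a + z^-1) / (1 + a z^-1), pole at z = -a.
# With a > 0 the pole sits near Nyquist, so the high treble is delayed
# more than the bass -- dispersion, with a perfectly flat magnitude.
a = 0.5
b_coef, a_coef = [a, 1.0], [1.0, a]

# Magnitude response is unity everywhere (that is what "all-pass" means)
w, h = signal.freqz(b_coef, a_coef, worN=512)
flat = np.allclose(np.abs(h), 1.0)

# Group delay rises towards Nyquist: treble arrives later than bass
w, gd = signal.group_delay((b_coef, a_coef), w=512)
print(f"delay near DC: {gd[1]:.2f} samples, near Nyquist: {gd[-1]:.2f} samples")

# Being causal, it can only smear an impulse *after* the main tap --
# which is how pre-ringing gets shovelled behind the impulse.
x = np.zeros(64); x[0] = 1.0
y = signal.lfilter(b_coef, a_coef, x)
```

A real implementation would use a higher-order all-pass to shape the delay-vs-frequency curve, but the trade is the same: no pre-ringing is added, at the cost of treble dispersion.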

In essence, output to the right of this line was added by the MQA encoding-decoding process.

MQA simply specifies that all CD-rate material must be played with a leaky, minimum-phase, non-apodising filter. The same filter is by default ON in the Explorer2 DAC:

616Meex2fig2.jpg


616Meex2fig4.jpg


See https://www.stereophile.com/content/meridian-explorer2-da-headphone-amplifier-measurements.
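As a side note, the linear-phase vs minimum-phase distinction is easy to demonstrate with SciPy. This is a generic sketch with made-up parameters, not Meridian's filter; the point is only that the minimum-phase version has no pre-ringing by construction:

```python
import numpy as np
from scipy import signal

# A linear-phase lowpass rings symmetrically about its centre tap,
# so half of the ringing arrives *before* the peak (pre-ringing).
h_lin = signal.firwin(127, 0.9)        # generic lowpass, cutoff 0.9 x Nyquist
peak_lin = np.argmax(np.abs(h_lin))    # centre tap: index 63

# The minimum-phase counterpart has (approximately) the same magnitude
# response, but all of its ringing falls *after* the peak.
h_min = signal.minimum_phase(h_lin)
peak_min = np.argmax(np.abs(h_min))    # at or very near the start

print(f"linear-phase peak at tap {peak_lin}, minimum-phase peak at tap {peak_min}")
```

The "leaky" part is a separate choice: a slow roll-off that starts before Nyquist and never reaches full stop-band attenuation, which shortens the ringing further at the cost of imaging above the audio band.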
 
Thanks.



I posted the answer to that already here https://pinkfishmedia.net/forum/posts/4378936/. It is an all-pass filter adding delay to the high treble, so as to move ADC-originated pre-ringing to behind the main impulse.



MQA simply specifies that all CD-rate material must be played with a leaky, minimum-phase, non-apodising filter. The same filter is by default ON in the Explorer2 DAC:

616Meex2fig2.jpg


616Meex2fig4.jpg


See https://www.stereophile.com/content/meridian-explorer2-da-headphone-amplifier-measurements.

By 'CD rate' do they mean when playing *non* MQA material on a conventional CD, or when decoding MQA, or?...

Either way, why *apply* that before it gets onto the CD? It then just convolves with the player/DAC? Or am I not understanding this? Is it too weird for me? 8=>

Given that they insist the input must be 'music' not test waveforms, how can they know with certainty what ADC filter was used in every case? So far as I know, no-one asked GO that. So do they take it for granted, regardless?

And surely (pace the 'Airplane' films) this conflicts with the anxieties about 'deblurring'?

Is it that they want to 'deblur to their preferred blur behaviour' regardless of whatever the original providers of the input *wanted*? Or?...

My basic question is: I can see what they are doing, but am baffled by them being determined to do it. So what *are* they thinking?

P.S. I'm now even more pleased that I have an MQA-capable Meridian DAC, as it will be interesting to see what it makes of a 'Wave From Hell', cf.
http://www.audiomisc.co.uk/HFN/OverTheTop/OTT.html
 
DON'T CALL ME SHIRLEY!

By 'CD rate' do they mean when playing *non* MQA material on a conventional CD, or when decoding MQA, or?...

They don't make it easy, but with "CD-rate MQA" I mean MQA-encoded music that did originate at 44.1k or 48k, as opposed to MQA-encoded music that originated at a higher sampling rate (2x, 4x, ... and ignoring fake upsamples).

this conflicts with the anxieties about 'de blurring'?

MQA's aim is to get rid of any impulse response pre-ringing caused by the ADCs, DACs, and sample rate convertors in the music creation and replay chain. Their deblurring has nothing to do with dispersion or non-linear phase shift.

For CD-rate material MQA assumes that during recording half-band filters were used at 22.05kHz, and they attack the corresponding ringing pattern with that ring-shoveling all-pass filter that is nicely exposed in GoldenSound's 44.1k file.
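The 'half-band at 22.05kHz' assumption is what makes a single fixed counter-filter even conceivable, because a half-band filter has a fixed, recognisable ringing pattern. A small SciPy sketch of the standard half-band properties (a generic textbook filter, not any vendor's):

```python
import numpy as np
from scipy import signal

# A windowed-sinc lowpass with its cutoff at exactly half of Nyquist is a
# "half-band" filter: every second tap away from the centre is zero, and
# the response passes through 0.5 (-6 dB) exactly at the cutoff.
numtaps = 103
h = signal.firwin(numtaps, 0.5)           # cutoff = 0.5 x Nyquist
centre = numtaps // 2                     # tap 51

# Alternate taps (even offsets from the centre) are (numerically) zero
even_off = h[centre + 2::2]
print(np.max(np.abs(even_off)))

# Response is 0.5 at the cutoff frequency
w, resp = signal.freqz(h, worN=[np.pi / 2])
print(abs(resp[0]))

# Linear phase: the ringing before the centre tap mirrors the ringing
# after it -- that symmetric pattern is the "pre-ringing" being targeted.
symmetric = np.allclose(h, h[::-1])
```

The fragility of the scheme is visible right here: the counter-filter only matches if the recording chain really did use this one ringing pattern, which is exactly the unverifiable assumption being questioned above.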

If you think this is daft then you are right.
 
If you think this is daft then you are right.

Well, it had to happen eventually. :)

Problem is that the 'reasoning' makes so little sense that I find it hard to accept, even if they do! If I put that on a webpage people will assume I'm not understanding them... which I guess I don't!

Nasrudin again.
 
I'm only just playing with the Explorer 2 as yet, but did get this just now out of curiosity

http://jcgl.orpheusweb.co.uk/temp/wfh-exploder.png

Wave from Hell. DAC fed at a 96k rate. The waveform should be time-symmetric, but isn't. It should also cut off at half the rate (the ADC is running at 192k) but doesn't.

OK, it's not 'music' so far as most people would say, though. 8-]

That's at half the size of the WFH that generates spikes to about +5dBFS, so there should be no clipping as is.

Not reliable as yet as I need to do checks, get better, shorter cables, etc. This is just a lash-up at present as a starter to see how it goes.
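FWIW the time-symmetry claim is easy to turn into a numeric check: align the capture on its peak and compare it with its own time-reverse. A rough sketch of that sort of check, using a synthetic pulse rather than a real capture (the helper and thresholds are my own, not an established test):

```python
import numpy as np
from scipy.signal import lfilter

def symmetry_error(x):
    """Relative mismatch between a waveform and its time-reverse,
    aligned on the peak sample. ~0 for a time-symmetric waveform."""
    p = np.argmax(np.abs(x))
    n = min(p, len(x) - 1 - p)
    seg = x[p - n : p + n + 1]
    return np.max(np.abs(seg - seg[::-1])) / np.max(np.abs(seg))

# A symmetric test pulse: windowed cosine burst
t = np.arange(-256, 257)
pulse = np.cos(2 * np.pi * t / 32) * np.hanning(len(t))
print(symmetry_error(pulse))          # essentially zero

# Dispersing it with an all-pass filter breaks the symmetry
a = 0.5
dispersed = lfilter([a, 1.0], [1.0, a], np.concatenate([pulse, np.zeros(64)]))
print(symmetry_error(dispersed))      # clearly non-zero
```

On a real capture you would also need to undo any overall gain and DC offset first, but the idea is the same: a time-symmetric stimulus that comes back asymmetric has been dispersed somewhere in the chain.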
 
Money, money, money.

TBH I get more of a feeling that they believe their own ideas and just want everyone else to agree with them, convinced they are correct. Which is fine when we all have the ability to choose alternatives and do fair comparisons. My problem is that to me this makes no sense from an IT POV. And if they are applying things like a blanket dispersion to everything it seems really weird. However given the dearth of clear probe-response tests we can't tell at present. That's where the difficulty is if they are determined to block such checks.

...which seems odd if we accept they are convinced they are right. If so, no harm in tests. Just give an explicit, useable definition of what they regard as 'music' and let people probe that, etc. As things stand it is a riddle wrapped up in an enigma, which makes it look more like a religious faith, not science or engineering!
 
Or perhaps just that I'm behind the times. But the key puzzle there is if they *blanket* apply dispersion.

When you consider the types of mics that fed the ADCs, and how much these things vary, it does seem to be a Nasrudin affair. A few years ago I tried to find reliable data on which mic types have been used. Not far short of unobtainium, perhaps because it is felt best if we don't know... But the ones I've seen 'ring and die' below 20 kHz even when optimally used for a measurement a maker uses to promote them. Let alone the effects of proximity, angle, etc. And someone in the recording biz IIRC told me that one maker used to cut and paste the same plot for all their mics.

IIRC Galbraith once said that "The worst misfortune that can befall an economist is to have his theories put into practice". Useful as a maxim in other areas, I'd suspect.
 
BTW when I read the USB parameters from the Explorer it lists the 2-byte transfers before the 4-byte ones. This means that by default some software may pick the 2-byte interface. It did that for my initial play-about. It can be dealt with, but only if the user and software know how to deal with it and don't just grab the first mode that gives the required rate.
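The failure mode is easy to mock up: if software just takes the first alternate setting that supports the requested rate, descriptor order decides the bit depth. A toy sketch in Python; the descriptor values below are invented for illustration, not read from a real Explorer2:

```python
# Hypothetical alternate-setting table, in descriptor order: the 2-byte
# (16-bit) interface listed before the 4-byte (24-in-32-bit) one.
# All numbers here are invented for illustration.
alts = [
    {"alt": 1, "bytes_per_sample": 2, "rates": [44100, 48000, 88200, 96000]},
    {"alt": 2, "bytes_per_sample": 4, "rates": [44100, 48000, 88200, 96000,
                                                176400, 192000]},
]

def naive_pick(alts, rate):
    """Grab the first alternate setting that supports the rate."""
    return next(a for a in alts if rate in a["rates"])

def better_pick(alts, rate):
    """Among settings that support the rate, prefer the widest samples."""
    return max((a for a in alts if rate in a["rates"]),
               key=lambda a: a["bytes_per_sample"])

print(naive_pick(alts, 96000)["bytes_per_sample"])   # 2 -- truncated to 16 bit
print(better_pick(alts, 96000)["bytes_per_sample"])  # 4
```

With the naive rule, 96k playback silently lands on the 16-bit interface purely because of where it sits in the descriptor list.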

Reminds me of the BBC assuming for ages that their old iPlayer 'Flash' stream was fine for Linux, when it wasn't. They sent 48k, and when they checked they got out 48k. The snag was that the Linux Flash code maxed out at 44.1k. But the default settings of their player (and Pulse (sigh)) duly converted it back into 48k, as that is the 'standard', innit. I noticed because I used to beat PulseAudio to death with a stick whenever I installed. 8-] Fortunately, ancient history now.
 
We interrupt this discussion to thank Brother Mansr:

Some more NRK fun:

... I've been giggling at that for far longer than I should have ...


NB back on topic - how happy would everyone be with a proprietary manuscript reader that damages the original, yet at the same time promises to unfold nuances into the text margins you hadn't had reason to think could matter, while slightly obfuscating the main text?

Reads like BS to me. Now, again, which way-up should I hold this thing ..?
 
Jim, can I just check I have understood your result.
1) Material with an original sample rate of 44.1/48 kHz will have an all-pass minimum-phase filter applied to the folded file, which is not reversed when unfolded.
2) Material with a higher original sample rate also seems to show signs of dispersion even when decoded. Since you detected it in the sox-generated 44.1 kHz downsampled version of the decoded file, I assume this means the dispersion is at all frequencies? (i.e. it doesn't just kick in above 22 kHz.)
 
1) there is no folding/unfolding with original 44.1/48 material, just the 2x oversampling with a specific lazy filter upon replay, which we knew already, and the preprocessing during MQA file creation with that all-pass filter, which is newly found thanks to the GS test and me suddenly remembering that funny old patent.
 
1) there is no folding/unfolding with original 44.1/48 material, just the 2x oversampling with a specific lazy filter upon replay, which we knew already, and the preprocessing during MQA file creation with that all-pass filter, which is newly found thanks to the GS test and me suddenly remembering that funny old patent.
Sorry, yes I’m getting confused with the “unfolded” 44.1 file in Jim’s article. Can I take it the folded (originally) Hi Rez file is low-pass filtered to give a similar effect to that of the all-pass filtering the original 44.1 files are given?
 
OK, here is 'page 2'.

http://www.audiomisc.co.uk/MQA/GoldenOne/ChallengeAndResponse.html

This concentrates on the GoldenOne files. They do show some useful points, but are difficult to use beyond a certain level.

I now have a wider range of 2L examples, and an MQA DAC. Have some other things to catch up with first. But will start using these and see how I get on. Have various possible experiments/tests in mind, but initially I need to 'calibrate' what the DAC does in more normal terms so I can allow for that when it comes to MQA vs plain vanilla.
A well-written report, Jim. Also very damning. MQA does not deblur... it actually adds blur!!

Quote below is from Jim's article linked above.
  • The MQA system is meant to avoid ‘blurring’ (their term) the time-domain details of the waveforms. Yet the above looks like the result of going through Tidal’s processing has led to both of the output versions that result being ‘blurred’ with an asymmetric time window of up to 1 or 2 milliseconds.
  • The MQA and non-MQA versions look like being quite close to identical
 
Can I take it the folded (originally) Hi Rez file is low-pass filtered to give a similar effect to that of the all-pass filtering the original 44.1 files are given?

Probably not.

Hi-res MQA follows a completely different path: if higher than 2x, downsample with a leaky filter; once arrived at 2x, fold the top octave into the 1x bottom space. Presumably they could add the all-pass filtering if the original was 2x, now at ~40kHz instead of ~20kHz. Have to see if there is a trace of this in the GS files.
 
Sorry, yes I’m getting confused with the “unfolded” 44.1 file in Jim’s article. Can I take it the folded (originally) Hi Rez file is low-pass filtered to give a similar effect to that of the all-pass filtering the original 44.1 files are given?

I was trying to stick with GO's naming of the files. Complicated by the reality that various files had LPCM rates that differed from the rate-limit of their content.

But as Werner has explained, yes, seems that even the file that originally had zip above 22kHz ends up being dispersed as well as gaining 'invented' stuff > 22kHz when MQA decoded. So it looks like the waiter adds ketchup to what the chef prepared, without asking you.
 
Probably not.

Hi-res MQA follows a completely different path: if higher than 2x, downsample with a leaky filter; once arrived at 2x, fold the top octave into the 1x bottom space. Presumably they could add the all-pass filtering if the original was 2x, now at ~40kHz instead of ~20kHz. Have to see if there is a trace of this in the GS files.

FWIW I stopped after having used the sox 'filtered' (downsampled to base rate) files. That yielded some info. However, trying to work on the unfiltered full-fat files through the MQA-added cruft generated by the encoder made me decide it would be easier from this point to work with 2L files and the MQA decoder.

However: note the cross-correlation of the sox-filtered comparison. It shows dispersion.
Note that this *is* the chain where GO sent in an 88k sample rate file with full-range content. i.e. it is evidence that passing GO's full-bandwidth test file through MQA encoding and decoding added the dispersion to files with high-rate content at both input and output. And that this dispersion remains in the decoded output.

(The sox filtering is common-mode here, so won't have generated dispersion.)

I doubt Tidal asked him what ADCs he used, but someone would have to ask him about that.
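To make 'shows dispersion' concrete: band-pass both versions, cross-correlate per band, and see whether the best-fit lag grows with frequency. A rough sketch on synthetic data (my own construction, not the exact procedure used on the GO files):

```python
import numpy as np
from scipy import signal

def band_lag(x, y, fs, band):
    """Best-fit lag (samples) of y relative to x within a frequency band,
    found by cross-correlating band-passed copies of both signals."""
    sos = signal.butter(4, band, btype='bandpass', fs=fs, output='sos')
    xb = signal.sosfilt(sos, x)
    yb = signal.sosfilt(sos, y)
    corr = signal.correlate(yb, xb, mode='full')
    return int(np.argmax(corr)) - (len(x) - 1)

# Synthetic stand-in: y is x passed through a cascade of first-order
# all-pass filters, which delays treble more than bass (dispersion).
rng = np.random.default_rng(0)
fs = 44100
x = rng.standard_normal(fs)            # one second of white noise
a = 0.6
y = x.copy()
for _ in range(4):
    y = signal.lfilter([a, 1.0], [1.0, a], y)

lo = band_lag(x, y, fs, (200, 2000))       # bass: barely delayed
hi = band_lag(x, y, fs, (15000, 21000))    # treble: clearly delayed
print(lo, hi)
```

On real material the common-mode sox filtering drops out of the comparison, as noted above, so any lag-vs-frequency trend that remains has to come from the encode/decode chain itself.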
 



