MQA pt II

Re Auralic: the issue can be settled by anyone with an Auralic DAC and a decent ADC. If it replays a true hi-res MQA file with correct reconstruction of the signal between 24 kHz and 48 kHz, then it is indeed decoding MQA. How they got at the information needed to do just that will determine their legal position here, because without a license in place this information is not available.

On the other hand, if they are only replaying MQA through a fancy filter then they are not decoding. But they would be on the right side of the law. Sort of.

Auralic streamers process (I won't say decode!) MQA files in the digital domain and then feed the resulting LPCM to any connected device. So I guess all that someone with the required analytical equipment need do is capture the output of an Auralic streamer. This is beyond my limited abilities, but probably not yours, mansr's or Jim's.

How Auralic were able to do what they have done without an MQA license is anyone's guess, but they are a good-sized Chinese company and, as such, do not operate under the same legal system as those in the US, UK and EU.
 
Re "MQA again" - My Stereophile subscription will stop at the end of the current subscription period. Period.

Sorry, the "MQA again" article has been removed due to copyright. I apologize to Stereophile and @TonyL.

Edited twice.
The MQA'ed version that sounds so much better than FLAC should be available on the interweb soon.
 
1) the critique on the 88.2k test file is valid: the encoder got overloaded and what you see now is not typical for MQA. This, however, does not mean that one cannot glean some useful things from the test.

2) the critique on the 44.1k test file is not valid. At 1x rate there is nothing in MQA that precludes the presence of high levels of treble, just as in standard CD.

I just wished that the public attackers of MQA would read Sun Tzu first, and then be surgical and at the right time.


Post-Shannon sampling my ***.
 
I think that all the MQA secrecy combined with the anger and obfuscation from the dis-united friends of MQA may cloud or darken the sound judgment of the non-friends of MQA. Plus there may be an intoxicating sense of being on the side of the good.
 
Maybe I did not express myself too well. What I meant is that while the sampling theorem prescribes the sinc function as the necessary reconstructor, it does not do so for the anti-aliasing filter at the recording stage. It merely posits that the signal has to be band-limited, offering neither clues nor restrictions on how to do that.

So while sinc is the perfect reconstructor, it has no such elevated status at the ADC side.
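
To make that concrete, here is a tiny Python sketch of ideal (Whittaker-Shannon) reconstruction, summing one shifted sinc kernel per sample. The tone frequency and sample count are arbitrary illustrations, nothing more:

    import numpy as np

    fs = 8000.0                              # arbitrary illustrative rate
    n = np.arange(64)
    x = np.sin(2 * np.pi * 1000 * n / fs)    # a sampled 1 kHz tone

    t = np.linspace(0, (len(n) - 1) / fs, 1000)    # dense output time grid
    # One shifted sinc kernel per sample: sinc((t - n/fs) * fs)
    y = sum(x[k] * np.sinc(fs * t - k) for k in n)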

This should be obvious.

You say "does not". That describes what people have done, but not what they *should* have done!

I'd agree that back in ye early days the sampling was difficult and crude by modern standards. But from a technical POV there is no excuse now for not using an ADC process that follows the basic requirement quite closely. Yes, you need a physical analog way to band-limit at the front. But you can, for example, do that and the original sampling at a very high rate and then digitally resample down in the ADC using a sinc-like *symmetric* filter. That can then give flat, symmetric output as close to Nyquist as its filter length allows. The only cost is a latency for the process span of the filter.
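
Something like this, as a rough Python/scipy sketch. The rates and filter length are my own illustrative choices, not anyone's actual spec:

    import numpy as np
    from scipy import signal

    fs_fast = 352800            # hypothetical fast front-end capture rate
    fs_out = 44100              # delivery rate
    decim = fs_fast // fs_out   # 8:1 decimation

    # Windowed-sinc low-pass with cutoff just below the output Nyquist.
    # An odd-length symmetric FIR is exactly linear phase: every frequency
    # is delayed by the same (numtaps - 1) / 2 samples -- the latency cost.
    taps = signal.firwin(2047, 0.45 * fs_out, fs=fs_fast)

    x = np.random.randn(fs_fast)                    # stand-in for captured audio
    y = np.convolve(x, taps, mode='same')[::decim]  # filter, then decimate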

Information Theory tells you this *is* needed if you wish to ensure time alignment and accuracy, and to fully define the data as a way to convey the information content. So theory does 'mandate' it, and cautions that you ignore this at your peril... as we now see.

The problem is that it may fall into the "cannae be bothered, Hen!" category so far as recording studios, etc, are concerned...

Which then opens us up to problems because the data isn't well defined, and hence isn't the full information content we seem on the surface to be offered... leading to all the malarkey about reconstruction filters, 'de-blurring', etc, etc. Whereas in the real world *before* the ADC - and in the world after the DAC - bigger effects usually bugger up the time alignment, etc, anyway. Nasrudin Rulz, alas.

The point is that it SHOULD have its *correct* status at the recording stage. Trying to correct it afterwards is pretty futile, though, when it ignores other factors. And a tad bonkers if you then apply dispersion again at the *rendering* stage! In essence the recording biz is admitting it made a dog's dinner of the recording when (except in ancient times) there was no need to do so.
 
Again, this is off the mark. We are talking audio here, so of course band-limiting applies and it can be taken as an implicit part of sampling.

The point is, when sampling at a mere 8kHz the nature of the AA filter, linear phase, minimum phase, or even maximum phase (is that term even actually defined?), can be expected to be audibly perceived differently. But this difference disappears with increased sample rate.
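
A quick scipy sketch of that point, comparing linear-phase and minimum-phase filters with the same magnitude response. The 8 kHz telephone-band numbers are purely for illustration:

    import numpy as np
    from scipy import signal

    fs = 8000                                 # low-rate example
    lin = signal.firwin(255, 3400, fs=fs)     # linear-phase low-pass
    minph = signal.minimum_phase(lin)         # same magnitude, minimum phase

    w, gd_lin = signal.group_delay((lin, 1), fs=fs)
    w, gd_min = signal.group_delay((minph, 1), fs=fs)
    # gd_lin is flat (a constant 127-sample delay); gd_min varies near the
    # cutoff, i.e. dispersion that at 8 kHz sits well inside the audible
    # band. Redo this at fs = 96000 and the dispersive region moves out
    # past anything we can claim to hear.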

Because the filtering is then a better match to what we expect to be able to hear. You are talking about audio perceived by human hearing and its expectations.

Yes, dispersion and phase differentials are easier to hear at LF, despite Ohm's Principle.

However the basic point remains: to properly define how the 'data' can be rendered as 'information' we need to know how the data set was generated. i.e. we need the recording filter to be defined and then use that info at the point of rendering. And there is no need to fiddle with the timing, so the IT default is the closest approach you can get to the sinc function. This represents the standard. By failing to do that, people open up the Pandora's Box that leads to where we are. Which - in IT terms - is a guddle!
 
I very much doubt that the Auralic software produces anything remotely similar to a real decoder. If it did, they wouldn't need to be so vague about it, and they would also have been sued to oblivion by MQA.

So what happens if someone simply runs the MQA through a filter that corrects the dispersive ones shown from analysing GO's files?
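
Something along these lines, perhaps. A hypothetical sketch, where 'measured_taps' is a placeholder for coefficients recovered from that analysis:

    import numpy as np

    def phase_corrector(measured_taps, nfft=8192):
        # Build a unity-gain all-pass whose phase is the negative of the
        # measured filter's phase, leaving the magnitude alone.
        H = np.fft.rfft(measured_taps, nfft)
        c = np.fft.irfft(np.exp(-1j * np.angle(H)), nfft)
        return np.roll(c, nfft // 2)    # centre the response; a pure delay

    # corrected = np.convolve(decoded_audio, phase_corrector(measured_taps))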
 
My Stereophile subscription will stop at the end of the current subscription period. Period.

My copy is sitting on the floor waiting to be read.

As a somewhat disinterested onlooker who has no actual use for it I find the whole MQA thing fascinating. The reason it is so interesting is how, unlike much of audio, it actually simultaneously spans pretty much all areas of real controversy within this market. I think I'd use the following category headings:

a) Political; corporate behaviour, closed-loop proprietary technology, licensing, Right To Repair, lack of test data etc.

b) Technological; how does it actually work, what is the evidence, does its performance meet the marketing claims, how 'lossless' is it, can it 'correct' a full studio-to-end-user encoding chain etc etc?

c) Subjective/objective; is it ‘transparent’ or is it ‘coloured’, can you spot it on a blind-test etc etc?

It is rare to find something that is top-tier argument fodder in every single category!

From my perspective a) is the area that interests me the most. I guess I’m one of Jim Austin’s ‘internet libertarians’ in this regard. I just don’t see a need for a new proprietary licensed format in a world that already has copious bandwidth, FLAC, Apple Lossless etc. If it is better subjectively and people prefer it then make it open source and more long-term sustainable and environmentally responsible by not enforcing closed-loop proprietary technology on an increasingly open and distributed music industry.

PS Obviously b) and c) are both hindered by a).
 
My Stereophile subscription will stop at the end of the current subscription period. Period.

FWIW I regret that I no longer get Stereophile. I used to find it interesting. However I gave up subscribing because the arrival of issues was so delayed and variable that I'd only realise some were never coming when it was too late to get a replacement. I also missed issues when the sub lapsed before I'd realised, for much the same reason.
 
I guess I’m one of Jim Austin’s ‘internet libertarians’ in this regard.
Funny that Morten Lindberg and Jim Austin both start blaming MQA opposition on "open source" or "internet libertarian types" at exactly the same time. It's almost as if someone had handed out a new sheet of talking points.
 
Here's something I wrote in response to Jim Austin three years ago regarding high-frequency content and "post-Shannon" sampling:

Very well, let's suppose there is something that matters above 40 kHz. What does MQA do with it? Starting with a recording at 192 kHz or higher, some unknown processing is applied, then the signal is downsampled to 96 kHz using a rather weak anti-aliasing filter. We know this because looking at recordings with some distinct content above 48 kHz (and these are rare indeed), faint alias products are recognisable in the lower frequencies of the decoded MQA file. The attenuation appears to be around 50 dB, but this is a very rough estimate.
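
For anyone who wants to see the mechanism rather than take my word for it, here is a rough sketch of the effect. The tone frequency and filter length are mine and purely illustrative; the ~50 dB figure above is an observation, not something this toy reproduces:

    import numpy as np
    from scipy import signal

    fs = 192000
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 60000 * t)    # 60 kHz tone, above the new Nyquist

    weak = signal.firwin(31, 48000, fs=fs)        # short, leaky anti-alias filter
    y = np.convolve(x, weak, mode='same')[::2]    # decimate to 96 kHz
    # The 60 kHz tone folds to 96 - 60 = 36 kHz; its level in y, relative
    # to a passband tone run through the same chain, estimates the alias
    # attenuation.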

The 96 kHz signal then undergoes band splitting, the top half compressed and encoded into the low 8 bits of the final stream. This step actually seems to work quite well in that the decoded output is pretty close to the input, at least for typical music and within the target precision. However, as clever as it may be, this scheme is wholly unnecessary. Standard methods, such as FLAC, perform equally well. As Xivero have demonstrated, the efficiency of FLAC can be further improved by preprocessing the input to remove non-information-bearing noise in the lowest bits. Needless to say, this process is not entirely lossless with respect to the input, but then neither is MQA. The Xivero method is also superior in that the output is a fully compliant FLAC file playable on any existing device without firmware updates or additional software. Of course, there are no royalties for Bob either.
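
To be clear about the general shape of that step (this is an illustration of the idea, not MQA's actual bitstream), burying a payload in the low 8 bits of 24-bit samples looks like this:

    import numpy as np

    def pack(base_24bit, payload_bytes):
        # base_24bit: int32 array of 24-bit samples; payload_bytes: 0-255.
        # Keep the top 16 bits of the baseband, overwrite the bottom 8
        # with payload, where it looks like noise to a legacy decoder.
        return (base_24bit & ~0xFF) | (payload_bytes & 0xFF)

    def unpack(stream):
        # Returns the 16-significant-bit baseband and the payload bytes.
        return stream & ~0xFF, stream & 0xFF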

Then comes the so-called rendering stage. As revealed by my reverse engineering, this consists of nothing but textbook FIR upsampling followed by shaped dither, usually at 16 bits. That last part is especially interesting. The images of the low frequencies left by the leaky upsampling filters, which is where any useful content must reside, are to a large extent buried under random noise.
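
In sketch form, and assuming plain TPDF dither as a stand-in for whatever shaping is actually applied:

    import numpy as np
    from scipy import signal

    def render(x, up=2, taps=32):
        h = up * signal.firwin(taps, 1.0 / up)   # short filter = leaky, imaging
        y = signal.upfirdn(h, x, up=up)          # zero-stuff, then low-pass
        q = 2.0 ** -15                           # 16-bit quantisation step
        tpdf = (np.random.rand(len(y)) - np.random.rand(len(y))) * q
        return np.round((y + tpdf) / q) * q      # dither, requantise to 16 bits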

To recap, whatever smidgen of useful signal identified by MQA in the high frequencies has, by the time it reaches the DAC, been attenuated, aliased ("folded" in MQA newspeak) into the much stronger low frequencies, compressed, uncompressed, imaged ("unfolded") back to the high range along with the mirrored spectrum of the (still much stronger) low frequencies, and finally drowned in random dither noise. "Post-Shannon" or not, nothing can survive this mangling and still be recognisable, let alone useful. If I'm wrong, show me the maths.
 
Funny that Morten Lindberg and Jim Austin both start blaming MQA opposition on "open source" or "internet libertarian types" at exactly the same time. It's almost as if someone had handed out a new sheet of talking points.

In fairness it is the obvious counterpoint to defending locked-in fee-based corporate technology. I’m not looking for any conspiracy theories here, though clearly there is a political aspect to this.

PS The thing I want to see more of is input from musicians. It is their art/intellectual property that is being processed, their revenue stream that gains another middle-man taking a cut.
 
Reading the page from Stereophile, one bit strikes me as strange...
http://jcgl.orpheusweb.co.uk/temp/Stereophile-bit.jpeg

I can see their logic in removing the 88k-content example as it did 'stress' the encoder to a high degree. However it implies they then blocked the tester from trying a *less stressful* test. i.e. IIUC he was simply trying to establish the limiting envelope and what level of performance you then got. This seems to me a legitimate thing to do. As it is, a key question is how anyone can know the precise envelope, etc. Simply waving your hands and saying "music" is useless in terms of data processing.
 
In fairness it is the obvious counterpoint to locked-in fee-based corporate technology. I’m not looking for any conspiracy theories here, though clearly there is a political aspect to this.
The funny thing is that this angle hasn't been pursued by the MQA promoters even once (that I've noticed) over the past few years, and now suddenly TWO of them start down that line of argument at the same time. Too much of a coincidence. Expect to see more of it.

Coincidence or not, it is a poor argument that confuses the content with the delivery mechanism. Of course creators should be compensated for their efforts. I am happy to pay a fair price for a recording. My objections to MQA are twofold:
1. I do not want a portion of what I pay for a music album or playback equipment to go to Bob Stuart when he has contributed nothing of value.
2. When I buy music, I expect to be able to keep playing it for ever, not only for as long as Bob Stuart chooses to sell MQA licences.

The second point means that a closed format is unacceptable to me. Not because I don't want to pay those who deserve to be paid, but because it will some day become unplayable, thus taking away from me what I rightfully own. It's like selling books printed with invisible ink that can only be read under a special light. This has nothing to do with open source or "internet libertarianism" (whatever that is). If Bob Stuart wants to sell a closed-source FLAC encoder or decoder, he is more than welcome to do so. As long as the interchange format is open, I don't care if people make money processing it. The caricature they attack describes neither me nor, I suspect, many others who are opposed to MQA monopolising music distribution.
 
The second point means that a closed format is unacceptable to me. Not because I don't want to pay those who deserve to be paid, but because it will some day become unplayable, thus taking away from me what I rightfully own.

To my understanding, with the exception of a handful of MQA CDs, which I’d bet very good money will never catch on as a format, it is a streaming format, i.e. something you lease, never own. This is why my main arguments are firstly opposing yet another unnecessary parasitic fee in the chain between musician and music consumer, and secondly from the Right To Repair perspective (i.e. unfixable proprietary landfill DAC crap etc). If that makes me an ‘internet libertarian’, ‘open source activist’ or whatever then I’ll wear it with pride.
 
Tony, when a single corporation aims to control so much of the music (audible sound) industry, it's a big cause for concern for me. Sitting back and letting it happen is not a good option. Choice must be respected, and being told it's better (Authenticated, De-blurred, Lossless, Neuroscience-approved, etc.) is not good enough; we need scientific PROOF. People must be left to decide if they want it, not a single corporation whose land grab is a move to dominate. If it turns out artists and fans are being exploited, it will be a disaster.
 

