MQA 6 months later - have thoughts evolved?

There’s a piece on Facebook from Bruno Putzeys regarding MQA from the AES convention.
This isn't a prelude to suddenly becoming active on FB but I felt I had to share this.

Yesterday there was an AES session on mastering for high resolution (whatever that is) whose highlight was a talk about the state of the loudness war, why we're still fighting it and what the final arrival of on-by-default loudness normalisation on streaming services means for mastering. It also contained a two-pronged campaign piece for MQA. During it, every classical misconception and canard about digital audio was trotted out in an amazingly short time. Interaural timing resolution, check. Pictures showing staircase waveforms, check. That old chestnut about the ear beating the Fourier uncertainty (the acoustical equivalent of saying that human observers are able to beat Heisenberg's uncertainty principle), right there.

At the end of the talk I got up to ask a scathing question and spectacularly fumbled my attack*. So for those who were wondering what I was on about, here goes. A filtering operation is a convolution of two waveforms. One is the impulse response of the filter (aka the "kernel"), the other is the signal.
A word that high res proponents of any stripe love is "blurring". The convolution point of view shows that as the "kernel" blurs the signal, so the signal blurs the kernel. As Stuart's spectral plots showed, an audio signal is a much smoother waveform than the kernel so in reality guess who's really blurring whom. And if there's no spectral energy left above the noise floor at the frequency where the filter has ring tails, the ring tails are below the noise floor too.
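The "who blurs whom" point is nothing more than the commutativity of convolution, which is trivial to verify numerically. A minimal sketch with a toy lowpass kernel (illustrative values only):

```python
import numpy as np

# Convolution is commutative: signal * kernel == kernel * signal,
# so "the kernel blurs the signal" and "the signal blurs the kernel"
# are the same mathematical statement seen from two sides.
rng = np.random.default_rng(0)
signal = rng.standard_normal(256)            # stand-in for an audio waveform
kernel = np.sinc(np.arange(-16, 17) / 4.0)   # toy lowpass kernel with "ring tails"
kernel /= kernel.sum()                       # unity DC gain

a = np.convolve(signal, kernel)
b = np.convolve(kernel, signal)
assert np.allclose(a, b)   # identical to numerical precision
```

Which waveform dominates the shape of the result is then purely a question of which is spectrally smoother, exactly as argued above.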

A second question, which I didn't even get to ask, was about the impulse response of MQA's decimation and upsampling chain as it is shown in the slide presentation. MQA's take on those filters famously allows for aliasing, so how does one even define "the" impulse response of that signal chain when its actual shape depends on when exactly it happens relative to the sampling clock (it's not time invariant). I mentioned this to my friend Bob Katz who countered "but what if there isn't any aliasing" (meaning what if no signal is present in the region that folds down). Well yes, that's the saving grace. The signal filters the kernel rather than vice versa and the shape of the transition band doesn't matter if it is in a region where there is no signal.
These folk are trying to have their cake and eat it. Either aliasing doesn't matter because there is no signal in the transition band and then the precise shape of the transition band doesn't matter either (ie the ring tails have no conceivable manifestation) or the absence of ring tails is critical because there is signal in that region and then the aliasing will result in audible components that fly in the face of MQA's transparency claims.
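The time-variance point is easy to demonstrate with a deliberately crude decimate-then-zero-stuff chain (a stand-in for illustration, not MQA's actual filters): with no anti-alias filtering, the response to an impulse depends entirely on where it lands relative to the sampling grid, so "the" impulse response of the chain is ill-defined.

```python
import numpy as np

def down_up(x, factor=2):
    """Naive 2:1 decimation (no anti-alias filter), then zero-stuff upsampling."""
    y = x[::factor]
    out = np.zeros(len(x))
    out[::factor] = y
    return out

N = 16
imp0 = np.zeros(N); imp0[4] = 1.0   # impulse on a retained sample
imp1 = np.zeros(N); imp1[5] = 1.0   # same impulse, shifted by one sample

r0 = down_up(imp0)
r1 = down_up(imp1)

# A time-invariant system would give r1 == r0 shifted by one sample;
# here the shifted impulse vanishes entirely.
print(r0[4], r1[5])   # 1.0 0.0
```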

Doesn't that just sound like the arguments DSD folks used to make? The requirement for 100kHz bandwidth was made based on the assumption that content above 20k had an audible impact whereas the supersonic noise was excused on the grounds that it wasn't audible. What gives?

Meanwhile I'm happy to do speakers. You wouldn't believe how much impact speakers have on replay fidelity.

________
* Oh hang on, actually I started by asking if besides speculations about neuroscience and physics they had actual controlled listening trials to back their story up. Bob Stuart replied that all listening tests so far were working experiences with engineers in their studios but that no scientific listening tests have been done so far. That doesn't surprise any of us cynics but it is an astonishing admission from the man himself. Mhm, I can just see the headlines. "No Scientific Tests Were Done, Says MQA Founder".

Keith
 
Does MQA really have any idea which mics and ADCs were used for every recording they've MQA'd? Of course they don't; it's bollocks.

But let's think for a moment that this could be done. Should it be done then? Maybe the mic and tape anomalies could be considered part of the creative process back then. Link Wray's distorted guitar sound resulted essentially from broken equipment. Should we fix it now if it was possible?
 
But what is this timing issue? How does it manifest itself?

Are you asking about the claims/assertions? Or the scientifically established reality? One of the problems here is that questionable claims are being made while the claimed details are kept confidential, etc. Thus what we get isn't fully open to evaluation as either science or engineering.

BTW I'm an AES member but can't ever get to meetings, and don't 'do' facebook. So I'd need to find a way to download anything there. Is there a URL I might be able to use to wget or similar the content?
 
WRT "blurring". One of the issues in my mind is that the 'lazy downsampling' process (in one of the patents) generates lower frequency components from HF which the 'upsampling' generally won't remove. In general terms, their addition would be expected to *spread* the temporal extension of something like a pulse or edge shape. i.e. make the result more "blurred".

To check/assess this fully we'd need to know the full details... which have been kept confidential.
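Even without the confidential details, the generic mechanism is easy to show: decimation without proper anti-alias filtering folds HF energy down into the baseband, where later upsampling cannot tell it apart from genuine signal. A toy example with illustrative numbers (not MQA's actual rates or filters):

```python
import numpy as np

fs = 96000
t = np.arange(9600) / fs              # 0.1 s of 96 kHz samples
hf = np.sin(2 * np.pi * 40000 * t)    # a 40 kHz component, legal at 96 kHz

# "lazy" 2:1 decimation with no anti-alias filter, giving fs = 48 kHz
dec = hf[::2]

# The 40 kHz tone folds to |40 kHz - 48 kHz| = 8 kHz, squarely in the
# audio band -- a lower-frequency component generated from HF content.
spec = np.abs(np.fft.rfft(dec))
freqs = np.fft.rfftfreq(len(dec), 1 / 48000)
print(freqs[np.argmax(spec)])         # ≈ 8000.0
```

Adding such components to a pulse or edge would indeed be expected to spread (blur) it in time, as described above.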
 
The idea that MQA can routinely 'correct' for the entire kit and mic setups of ancient recordings strikes me as a fantasy. It might be possible for a few special cases, but in general it's a fantasy. The required info simply won't be available, even if the recordings contain any HF anyway. Many old studio mics have a response that is heading south before you get to 20kHz. And heaven knows what the Dolby A systems will be doing in many cases. To give just two examples.
If I were transferring tape to digital I would be doing the Dolby in DSP and correcting wow/flutter from both recorder and player by using the traces of bias.

Subjunctive Audio.

Paul
 
There were a few people who challenged Archimago on his website.

About what precisely? I was referring to the impulse responses measured from the output of the MQA DACs tested when upsampling 24/96:

[Image: MQA 96 kHz filter impulse responses]


From these measurements he constructed this filter:

[Image: Pseudo-MQA filter]


I didn't see anyone challenge the legitimacy of these measurements in this particular Blog:

http://archimago.blogspot.co.uk/2017/07/measurements-mqa-filters-on-mytek.html

That's not to say they weren't challenged in another blog, of course, but I've no desire to trawl through them :)
 
As I note in this thread above, no, they don't; but they claim that when they do have those data they can, and the result is termed a "white glove" remaster. Where they do not have the full metadata, they say they make informed guesses.

"Informed guessing" is an interesting use of words :) But fair enough, I do recall reading this.

I also recently enjoyed immensely the distortion effects of a Prima Luna valve amplifier through Spendor SP1 speakers fed with vinyl by a Heybrook TT1.

I personally don't enjoy the distortion in valve-based systems, but it's nice to have the choice. If MQA ever got their way, we might no longer have a choice!
 
As I understand it the lossy part of MQA is above 22kHz, so probably not important.

No, it is actually exactly the opposite. MQA throws away part of the dynamic range of the sub-22 kHz signal in order to represent the above-22 kHz stuff. MQA overwrites lower-order bits of the 24-bit signal to store the "folded" high-frequency content. Not that it really matters, but...
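The exact MQA packing scheme is proprietary, but the general idea of burying a payload in the low-order bits of a 24-bit sample, and the dynamic-range cost that implies, can be sketched like this (purely illustrative, not MQA's actual format; the 8-bit payload width is an assumption for the example):

```python
# Toy LSB-packing sketch: whatever occupies the low-order bits,
# the baseband signal loses that much of its dynamic range.

def pack(sample24, payload, n_bits=8):
    """Overwrite the n_bits lowest bits of a 24-bit sample with payload."""
    mask = (1 << n_bits) - 1
    return (sample24 & ~mask) | (payload & mask)

def unpack(sample24, n_bits=8):
    """Recover the payload from the low-order bits."""
    return sample24 & ((1 << n_bits) - 1)

s = 0x123456          # a 24-bit PCM sample
p = 0xAB              # 8 bits of "folded" HF payload
packed = pack(s, p)

assert unpack(packed) == p          # payload recoverable by a decoder
assert packed >> 8 == s >> 8        # only the top 16 audio bits survive
# i.e. effective sub-22 kHz resolution drops from 24 to 16 bits here
```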
 
If I were transferring tape to digital I would be doing the Dolby in DSP and correcting wow/flutter from both recorder and player by using the traces of bias.

Subjunctive Audio.

Paul


That's fine in itself. But it makes unstated assumptions about the alignment, etc, of the actual individual recorder, etc, when it made the recordings many decades ago.
 
If MQA applies multiple correction curves at the remastering stage to correct for mikes, tape recorders and A2D processors, then applies further correction curves to correct for the D2A characteristics, then it is, in effect, unscrambling eggs.

But many recording engineers deliberately choose a mike for its character! Why would you want to “correct” for that? 40 years later? After the analogue tape has degraded? That is claiming you can unscramble an egg and get a chicken! Just nonsense!
 
That's fine in itself. But it makes unstated assumptions about the alignment, etc, of the actual individual recorder, etc, when it made the recordings many decades ago.
I'm guessing you could walk the Dolby into alignment by ear. Not sure whether azimuth is recoverable. I'm assuming traces of the bias might be retrievable, and it will be much more stable than either of the transports. So a cycle by cycle SRC conversion would undo the errors. IIRC some of this is being done, but whether it is common I don't know. I wonder if you could deliberately use aliasing to recover the bias (up around 240kHz) with a standard audio ADC? Would the phase difference between the signal on the two channels allow the implied azimuth error to be determined?
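On the aliasing idea: whether a ~240 kHz bias tone folds somewhere recoverable depends entirely on the ADC rate (240 kHz is the ballpark figure above; real bias frequencies vary by machine). A quick check:

```python
def alias_freq(f, fs):
    """Frequency to which a tone at f appears to fold when sampled at fs."""
    f = f % fs
    return fs - f if f > fs / 2 else f

bias = 240_000  # Hz -- ballpark bias frequency, varies by machine
for fs in (96_000, 176_400, 192_000):
    print(fs, alias_freq(bias, fs))

# 96 kHz:    folds to 48 kHz, exactly Nyquist -- degenerate, unrecoverable
# 176.4 kHz: folds to 63.6 kHz -- inside the captured band, clear of audio
# 192 kHz:   folds to 48 kHz -- well inside the 96 kHz Nyquist band
```

So with the right choice of rate the folded bias could in principle land in a clean region of the spectrum, though the ADC's own anti-alias filter would have to pass 240 kHz for any of this to work.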

Anyway much more interesting than the utterly pointless MQA.

Paul
 
But many recording engineers deliberately choose a mike for its character! Why would you want to “correct” for that? 40 years later? After the analogue tape has degraded? That is claiming you can unscramble an egg and get a chicken! Just nonsense!

More unscrambling an egg and creating a turkey :(
 
For anyone who is still duped by the claim that MQA can miraculously “correct” for the microphones used in the original recording, here is a recording engineer talking about a recent recording of the St. John Passion:
The main array .. is a Decca Tree of Neumann TLM150 omnidirectional valve microphones. "..they have a real 'glow' and I keep coming back to them. .. “ Although the Neumanns have got the central role on this job, Philip's MKH800s are earning their keep: one pair as wide flanking mics to capture ambience, and a second pair pre-rigged for later parts of the recording .. My other choice as main pair is a pair of Sanken CU100Ks, the 100kHz omni mics: they're astonishing on piano, and I'll use them on main organ for the solos here.... As for the spot mics, "There are four spots for the singers. Because the vocal group is so small, it's nice to be able to have a mic per part. There are two U87s for the soprano and alto, and two M149s for tenor and bass. The tenor's M149 ends up as the main solo spot, and for the recitatives, the two main roles are on the 149s.” Most of the instrumental spot mics are Schoeps MK21 or MK41s, with a couple of DPA 4011s and a Neumann TLM170 on the double-bass.

So a large number of completely different microphones rigged and mixed in complex ways that change throughout the recording. All chosen deliberately. And MQA can “compensate” for that? Utter, total nonsense.

Full interview here
 
Some prefer turkey to chicken though, so if you convince people that you have created turkey from a chicken egg, then it must be better.
 

