MQA bad for Music?

If you go back to some of the older MQA papers, it appears that the first method is, in principle, allowed/supported. However, even these papers hint that, when used as such, only the lowest frequencies of the 48-96kHz band actually get folded and encoded.

The second method, the one you have locked onto, indeed uses lazy downsampling on a 192k or 384k original, to obtain a 96k copy, which is then folded into 48k.

For what it is worth ... I have now analysed a fair number of Tidal MQA tracks that light three LEDs on the Explorer2, i.e. tracks that announce themselves as having a sample rate of at least 192kHz.

What I see now is that, at the output of the E2, all of those tracks that contain enough high frequencies to allow me to observe what is happening, use lazy upsampling to extend from 96kHz to 192kHz and higher.

This seems to confirm that the aforementioned double-folding is in effect not used, or at least not often enough to show up in my sample tracks, and thus that MQA, in practice, only conveys the equivalent of 96k, not 192k, and certainly not 384k.
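
In case anyone wants to see what that 'lazy upsampling' signature looks like in the abstract, here is a minimal numpy/scipy sketch. It is a toy of my own, not MQA's actual filter - the tap count and tone frequency are just illustrative assumptions: upsample a tone from 96k to 192k with a deliberately short, leaky interpolation filter and watch an image appear mirrored about 48kHz.

```python
import numpy as np
from scipy import signal

fs_in, fs_out = 96_000, 192_000
t = np.arange(fs_in) / fs_in                # 1 second at 96k
x = np.sin(2 * np.pi * 29_000 * t)          # an arbitrary ultrasonic tone at 29 kHz

# Zero-stuff to 192k, then interpolate with a deliberately short ('lazy') low-pass.
# A proper sinc interpolator would need hundreds of taps; 17 taps leaves a clear image.
up = np.zeros(2 * len(x))
up[::2] = x
lazy_lpf = signal.firwin(17, 48_000, fs=fs_out)     # illustrative 17-tap filter
y = 2 * signal.lfilter(lazy_lpf, 1.0, up)

# Spectrum of the 192k result: the 29 kHz tone plus an image near 96 - 29 = 67 kHz,
# i.e. a pair mirrored about 48 kHz - the tell-tale of a 96k source behind a 192k stream.
f, pxx = signal.welch(y, fs=fs_out, nperseg=8192)
for target in (29_000, 67_000):
    k = np.argmin(np.abs(f - target))
    print(f"{f[k]/1000:5.1f} kHz : {10*np.log10(pxx[k] + 1e-20):6.1f} dB")
```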

Right - so, to make sure I'm following correctly: am I correct to assume that your examinations show results indicating that, when encoding, they are:

First, using the 'lazy' downsampling (which I call 'origami') to go 192k -> 96k.

Then using what I call 'bitstacking' to go 96k -> 48k.

And that although the 'use bitstacking twice' method is covered by their patents, etc, that doesn't look like what they're actually doing?

If so, that makes some sense to me as I'd expect it to be less damaging than trying to cascade bitstacking.
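
To make the 'origami'/'bitstacking' distinction a bit more concrete, here's a deliberately crude toy of the general fold-and-stash idea - emphatically not MQA's actual encoder; the split frequency, tap count and bit allocations are all just assumptions for illustration. It takes a 96k signal, splits it at 24kHz, decimates to 48k, and buries a coarsely quantised version of the folded 24-48kHz band in the bottom bits of the 48k words:

```python
import numpy as np
from scipy import signal

fs = 96_000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1_000 * t) + 0.01 * np.sin(2 * np.pi * 30_000 * t)

# Split at 24 kHz (zero-phase, so the subtraction is a clean complement).
lpf = signal.firwin(255, 24_000, fs=fs)
base = signal.filtfilt(lpf, 1.0, x)
ultra = x - base

# Decimate both to 48 kHz.  For the ultrasonic band this aliases it down into
# 0-24 kHz - loosely speaking, the 'fold'.
base48 = base[::2]
ultra48 = ultra[::2]

# Quantise the folded band very coarsely (8 bits here) and stack it under the
# baseband, in the bottom 8 bits of a 24-bit word.
hints = (np.round(127 * ultra48 / np.max(np.abs(ultra48))).astype(np.int32)) & 0xFF
words = (np.round(base48 * 32767).astype(np.int32) << 8) | hints

# A decoder that knows the scheme can pull the hints back out; anything else
# just sees ~8 bits of low-level 'noise' at the bottom of each sample.
recovered = words & 0xFF
print("hints survive the round trip:", np.array_equal(hints, recovered))
```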
 
I don't really find this misleading. From day one we were led to expect this. And anyway, 96k is hires. It is only that the third LED gives an idea of the rate of the original master, and not of what you are actually receiving.
I agree about 96k being hires, but I think it was misleading (at least in places). Although the lazy downsampling/origami idea was in the patents, it was not, as I recall, something that could be deduced from the marketing materials, which implied (and in places did more than imply) that something more than 96kHz was being provided. [Of course some of us read between the lines to see that the patent docs in effect admitted that 96kHz was quite enough - but that wasn't the sell.]

The key distinction in this regard IMHO is that people were persuaded that they were getting something more than 24/96, and without a second level of bitstacking I just don't think that stands up. As we know, the lazy downsampling could be done to create ordinary 24/96. And what's more, imagine if someone tried to sell lazy downsampled 16/44 and claimed it was in effect 96kHz?
 
I've always felt that there has been a fundamental clash between the two ideas that:

A) We 'need' high temporal resolution extending up to the levels claimed by MQA.

B) That the 'apodising' filters do this well.

The reality is that 'apodising' filters of the kinds specified, by their very nature, have to roll away the HF and have a wider temporal peak than classic sinc shapes.

So far I've tended to focus on 192k <-> 96k <-> 48k discussions and analysis. But it seems obvious that the use of the 'lazy apodised' filters means each process-down stage would tend to lose any of the stuff folded back from the previous one. i.e. Starting with 384k you'd lose essentially what might be contained above 192k in terms of both temporal resolution and spectrum because it would be too far away from the top of the 'lazy' filter passband used for the next downconvert.
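
For anyone who wants to poke at that trade-off for themselves, here's a small scipy sketch comparing a long, near-brick-wall low-pass with a short 'leaky' one. The tap counts and cutoffs are my own illustrative guesses, nothing taken from the MQA documents; the point is just that the short filter rings for far fewer samples but does far less to stop the HF that each fold/unfold stage has to worry about.

```python
import numpy as np
from scipy import signal

fs = 96_000

# Two caricature decimation filters (tap counts/cutoffs are illustrative only):
#  - a long, near-brick-wall sinc-style low-pass
#  - a short, gently rolling-off low-pass of the 'leaky/apodised' flavour
brickwall = signal.firwin(511, 46_000, fs=fs)
leaky = signal.firwin(33, 40_000, fs=fs)

def describe(name, h):
    w, resp = signal.freqz(h, worN=8192, fs=fs)
    mag = 20 * np.log10(np.abs(resp) + 1e-12)
    att_47k = mag[np.argmin(np.abs(w - 47_000))]   # rejection just below Nyquist
    # Crude 'ringing length': span holding 99.9% of the impulse-response energy.
    e = np.cumsum(h ** 2) / np.sum(h ** 2)
    span = np.argmax(e > 0.9995) - np.argmax(e > 0.0005)
    print(f"{name:10s} response at 47 kHz: {att_47k:7.1f} dB   energy span: {span} samples")

describe("brickwall", brickwall)
describe("leaky", leaky)
```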

The idea of using 'bitstacking' repeatedly for each halving of rate seems like a nightmare to me. The argument for the bitstacking is that you 'hide' the HF 'hints' as low-level 'noise' at the top of the output band. Trying to preserve that in any detail when you repeat the process seems rather a challenge to me. In effect it will look like 'noise' and be discarded.

The only way around that would be a side-chain in the processing that sweeps all the 'hints' from every stage along another data path to hide at the end of the process.

However you cut it, though, it all seems a weird way to deal with the real practical problem: that the size of the files is being bloated by too many bits wasted on noise. Reminds me of the comment that "To someone wanting to sell a hammer, all problems look like a nail." :)
 
Jim, have you had a listen to any MQA music yet, or is your interest purely technical? Not a dig at you, I'm just curious.
 
No, I've not listened to any of the files using any kind of MQA decoder. I have listened to the various '2L' test/demo files in various formats, inc MQA (being played as LPCM). Indeed, I think many of the 2L files are pretty good.

TBH I doubt my opinions of the 'sound' would help anyone much because I can't claim to have 'golden ears'. And these things will probably also depend on the kit you use, etc.

In practice I tend to be quite happy with *well produced* material from Audio CD to 96k/16. I'm far more often bothered by poor recording, mastering, or manufacture than by the sample rate, etc.

I'm happy to accept that a wider bandwidth, etc, may benefit some people more than myself, and others will have sharper hearing than mine! But for me the main effect of high sample rate is in terms of allowing any kit flaws to have their consequences shifted further away from the region I/we can hear.

My interest here is primarily technical. But I am also wondering if this is what others have called "a solution looking for a problem", and if simpler methods would do the job as well, or better, whilst being entirely open and evaluatable/improvable by anyone interested.

My personal conclusion is that there is more than one 'open' alternative that would do the job as well or better. That, for me, means that if a streamer, etc, adopts MQA they may have reasons *other* than simply reducing the required streaming rate.

I'm an injuneer. :) My main interest is to do what engineers do. Take something apart, test it, try and understand it, and then explain how it works and/or point out any limitations or flaws so people can make an *informed* choice - and also see if they can devise something 'better'.

I had no real preconceptions about MQA when I started my look. And I do have a high regard for both Bob Stuart and Peter Craven. Two of my favourite DACs have been Meridian ones and Peter's work is very interesting and ingenious. Sadly, when I started trying to understand MQA as described in the patents I came to feel it actually misses the real point.

Possibly they may have been misled by the 'scientific' papers they quote wrt temporal resolution, etc. And became so interested in that one point that they haven't twigged that the size of the flacced files is being inflated by excess noise. i.e. The *practical* problem facing people who want high quality with smaller stream rates simply doesn't require anything as fancy and complicated and 'new' as MQA. And the 'temporal' argument is something totally different to the question of what is making stream sizes so big.
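
On the 'files inflated by excess noise' point, a crude way to convince yourself is to losslessly compress the same programme with the bottom bits silent versus filled with wideband noise. The sketch below uses delta coding plus zlib as a stand-in for FLAC - purely an assumption of convenience, real FLAC predictors do much better - but the trend is the same: the noisy LSBs barely compress at all.

```python
import numpy as np
import zlib

fs, seconds = 96_000, 5
t = np.arange(fs * seconds) / fs
rng = np.random.default_rng(0)

# A fake 'musical' signal: a few tones well inside the audio band,
# with 16 bits of real content sitting in a 24-bit container.
music = sum(0.1 * np.sin(2 * np.pi * f0 * t) for f0 in (220, 440, 1320, 5500))
music16 = np.round(music * 32767).astype(np.int64)

def packed_size(samples24):
    # Very rough FLAC stand-in: delta-code the 24-bit words, then zlib them.
    residual = np.diff(samples24, prepend=samples24[:1])
    return len(zlib.compress(residual.astype(np.int32).tobytes(), 6))

quiet_lsbs = music16 << 8                                        # bottom 8 bits silent
noisy_lsbs = (music16 << 8) | rng.integers(0, 256, len(t))       # bottom 8 bits are noise

print("silent LSBs:", packed_size(quiet_lsbs), "bytes")
print("noisy LSBs :", packed_size(noisy_lsbs), "bytes")
```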

A lot of engineering should be based on Einstein's argument. "Everything should be as simple as possible."

My feeling here is summed up by the 'Mullah Nasrudin' story which I quoted in an MQA thread a while ago. I have the feeling that focussing so much on the ultra-narrow 'temporal resolution' of the ADC-DAC part of the chain seems to ignore what are almost certainly orders-of-magnitude larger such effects before the ADC and after the DAC!
 
This is all massively off-topic - the thread was about the impact on the music industry and not the technology itself.

Erm, the performance of a 'technology' and any associated costs, limitations, etc may well have an "impact on" the music if it is adopted. To assess what that "impact" might be, it helps to understand the technology.
 
To liken the music biz to a rain forest ecosystem, the impact of MQA will be that of a single squirrel scratching its arse just once. Some other animals engaged in the manufacture of DACs, or hoping to leverage cash from the distribution of hires tree music, will have their snouts put out of joint.
 
To liken the music biz to a rain forest ecosystem, the impact of MQA will be that of a single squirrel scratching its arse just once.

The problem with analogies is that they may or may not be relevant.

I'm old enough to recall EMI's attitude that they were big enough to ignore Audio CD and it would go away. Given that the music biz tends to be dominated by a few very large companies, that probably made sense to their suits at the time. However, Philips and Sony had enough clout to see it survive.
 
OK - the impact of MQA will be tiny in the context of the impact streaming has already had.

Probably true in the sense that most users of streaming have no real interest in 'hi fi' or getting 'high rez' quality so won't care one way or the other beyond possibly surface issues like cost or seeing mqa as a 'brand'.

But less clear when we consider those with a serious interest in getting the highest possible sound quality from an industry that does often mess things up.

So it may depend on what people mean by "bad for Music".
 
Probably true in the sense that most users of streaming have no real interest in 'hi fi' or getting 'high rez' quality so won't care one way or the other beyond possibly surface issues like cost or seeing mqa as a 'brand'.

But less clear when we consider those with a serious interest in getting the highest possible sound quality from an industry that does often mess things up.

So it may depend on what people mean by "bad for Music".

Seems likely that, in the absence of stunning audio benefits, there will continue to be a subset of new albums and old albums available as MQA, which will be listened to 1) by people who happen to buy a DAC that does MQA decoding, and 2) people who stream Tidal Hifi and happen to hit upon an MQA master in their trolling about. In most cases I bet they won't even know it unless they see the word MASTER instead of HIFI on the Tidal app. I don't see a lot of purposeful movement by audiophiles towards broad MQA implementation, and even less interest from the general public.
 
Seems likely that, in the absence of stunning audio benefits, there will continue to be a subset of new albums and old albums available as MQA, which will be listened to 1) by people who happen to buy a DAC that does MQA decoding, and 2) people who stream Tidal Hifi and happen to hit upon an MQA master in their trolling about. In most cases I bet they won't even know it unless they see the word MASTER instead of HIFI on the Tidal app. I don't see a lot of purposeful movement by audiophiles towards broad MQA implementation, and even less interest from the general public.

In principle, such a "mixed economy" is fine and would give people diverse choice. The worry, though, is that - as with HDCD - we will get mess-ups by the music biz and find that what should be 'plain vanilla LPCM' is in some cases actually mangled, MQA-altered material, but without the 'magic code' that identifies it as such.

Some examples of this happened with HDCD: material fouled up for both LPCM *and* HDCD replay, with no sign 'on the box' of what had been done by someone clueless along the way.

In engineering, KISS makes sense.
 
Probably true in the sense that most users of streaming have no real interest in 'hi fi' or getting 'high rez' quality so won't care one way or the other beyond possibly surface issues like cost or seeing mqa as a 'brand'.

But less clear when we consider those with a serious interest in getting the highest possible sound quality from an industry that does often mess things up.

So it may depend on what people mean by "bad for Music".

I suppose it could usurp other hires formats that it may be inferior to in one respect or another. But there doesn't seem to be much out there by way of competition either. It's VHS vs VHS.
 
MQA that is undecoded because you don't have the right software/hardware or because it has been corrupted in the distribution is inferior to other formats, even CD let alone "HiRes"
 
MQA that is undecoded because you don't have the right software/hardware or because it has been corrupted in the distribution is inferior to other formats, even CD let alone "HiRes"

Not sure if it is inferior, at least from a sound quality point of view.

Using an MQA-enabled DAC or Tidal's software decoding, it certainly sounds better than redbook to my ears. And I guess that's how people will listen to MQA.

And how would MQA get corrupted in the distribution? Do you have an actual example of that happening?
 
And how would MQA get corrupted in the distribution? Do you have an actual example of that happening?

Probably too early for examples to manifest. But look at the history of HDCD to see what can happen.

MQA, like HDCD, relies on a key code sequence hidden in the lower bits of the encoded data. This is what the decoder looks for to detect "this is MQA". It also hides HF 'hints' scrambled as noise.

Consider what has happened with some HDCD material.

Having been HDCD encoded, the result is handed to someone else who decides it needs to be 'improved' by some other 'audio guru'. They then tweak it *without* realising it is HDCD or decoding it to LPCM for the processing. The result may still have the peak compression applied by HDCD, but it loses the hidden code that tells an HDCD decoder that this is (or was!) HDCD. Thus the peak compression isn't expanded.

The result now is neither an LPCM representation of what was recorded *nor* an HDCD one. Indeed, even applying 'blind' HDCD peak expansion may fail because the data has been altered.

The result is something worse all around in terms of fidelity to what was originally recorded.
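
A toy illustration of why that hidden-flag approach is so fragile - this is my own caricature, not the actual HDCD or MQA signalling, and it only covers the flag, not the peak compression: bury a known bit pattern in the LSBs, then apply an innocuous-looking 0.1dB level trim and see whether the pattern survives.

```python
import numpy as np

rng = np.random.default_rng(1)

# 16-bit-ish 'audio' with a known flag sequence hidden in the LSBs.
audio = rng.integers(-20_000, 20_000, size=4_096).astype(np.int32)
flag = rng.integers(0, 2, size=audio.size)
encoded = (audio & ~1) | flag                    # stuff the flag into bit 0

def flag_intact(samples):
    return np.array_equal(samples & 1, flag)

print("as delivered      :", flag_intact(encoded))

# Someone downstream nudges the level by 0.1 dB 'to taste' and re-quantises.
tweaked = np.round(encoded * 10 ** (-0.1 / 20)).astype(np.int32)
print("after 0.1 dB trim :", flag_intact(tweaked))
```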

At one time or another some 'professionals' in the audio biz have managed to screw up in almost every way imaginable. The more sliders, effects, formats, etc, you give them to play with, the more scope they have to do it.
 
Not sure if it is inferior, at least from a sound quality point of view.

Using an MQA-enabled DAC or Tidal's software decoding, it certainly sounds better than redbook to my ears. And I guess that's how people will listen to MQA.

And how would MQA get corrupted in the distribution? Do you have an actual example of that happening?
I specifically said undecoded, as would happen by listening to Tidal on an OS without the software decoder, e.g. Android or Linux.

There have already been cases of MQA streams not playing properly - the recent Yes albums issue, and some buggy settings problems with the Tidal app on some people's PCs.
 
Reviving this old thread.

I spent some time analysing the analogue output of an Explorer2 while playing 'three-LED' MQA files. Three LEDs lit means that the signal is of 4x rate or higher (i.e. 192k, 176.4k, or DXD rates).

I recorded with my Tascam DVRA1000 running at 192kHz. Theoretically this gives me a view of the MQA DAC's output up to 96kHz. However, the PCM1804 ADC in the Tascam has a large bump of sigma-delta modulator noise above 60kHz, so for payload ultrasonics in the 60-96kHz band to be discerned, they would have to be very, very loud.

Normal music does not have high-level ultrasonics. In fact most music does not contain much above 30kHz. This makes analysis hard, because in order to spot the effect of upsampling in the signal chain you want to find pairs of matching frequencies symmetric about Fs/2. Luckily many studios are a bit dirty, EMC-wise, and have spurious signals related to CRTs and switch-mode supplies running through their cables. These signals make it onto recordings and then act as tell-tale signs...
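
For anyone wanting to repeat this sort of check, here is roughly how the search can be scripted. This is only a sketch: the file name is a placeholder, soundfile/scipy are simply what I'd reach for, and the thresholds are arbitrary. The idea is to FFT a 192k capture of the DAC output, pick the narrowband spuriae that stand proud of the noise floor, and see whether they come in pairs summing to 96kHz, i.e. mirrored about 48kHz.

```python
import numpy as np
import soundfile as sf              # assumption: the capture is saved as WAV/FLAC
from scipy import signal

x, fs = sf.read("e2_output_capture.wav")        # placeholder file name
if x.ndim > 1:
    x = x[:, 0]
assert fs == 192_000

# Averaged spectrum of the capture.
f, pxx = signal.welch(x, fs=fs, nperseg=1 << 15)
db = 10 * np.log10(pxx + 1e-20)

# Pick narrowband peaks that stand well proud of the local noise floor.
floor = signal.medfilt(db, 101)
peaks, _ = signal.find_peaks(db - floor, height=20)       # >= 20 dB above the floor
peak_freqs = f[peaks]

# Look for pairs mirrored about 48 kHz (f1 + f2 ~ 96 kHz): the signature of a
# 96k signal having been upsampled to 192k.  (The 29/67 kHz pair mentioned
# further down is exactly such a pair: 29 + 67 = 96.)
for lo in peak_freqs[peak_freqs < 48_000]:
    partner = 96_000 - lo
    if np.any(np.abs(peak_freqs - partner) < 50):          # 50 Hz tolerance
        print(f"mirror pair: {lo/1000:.2f} kHz <-> {partner/1000:.2f} kHz")
```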

A-ha:

1eOQi5u.jpg


Spuriae mirror clearly around 48kHz. This is a 96k signal upsampled to 192k.

Eagles:

RLfob8z.jpg


Spuriae mirror clearly around 48kHz. This is a 96k signal upsampled to 192k.

Some jazz track, don't remember what (Bill Evans?)

ZkMCGbE.jpg


Same story. In addition, the master was heavily filtered while converting to 96k.

Roberta Flack:

5YPM4Eo.jpg


Spuriae mirror clearly around 48kHz. This is a 96k signal upsampled to 192k.

Television:

WCuXaoi.jpg


Idem.

2L, possibly the Mozart

v1mrNpR.jpg


Clean recording, no spuriae. All the 2L recordings I tried had very little treble content - too little to allow searching for images. This was the best. Even so I had to eyeball this in real time (all other graphs were averaged), and freeze the spectral plot when the suspected image finally peaked above the noise bump. You can see that here at 26kHz and 70kHz.


In case you wonder ... all examples, except 2L, contain a strong pair at 29kHz/67kHz. This is not an artefact of my setup. I verified that the 29kHz component is present in the digital signal as it comes from Tidal.
 
Thanks for those results, etc. Interesting.

I have a Benchmark ADC that will record 192k/24, but I've not checked if it has any low-bit 'noise hill' at HF as I normally use it at 96k/24. I think my old Tascam HDP2 has a noise hill, but I'd need to check.

The 29/67k pairs are curious. I presume the implication is that the 'hf hints' in the bitstacked 48k/24 MQA must generate the 29k? Maybe this is a sign of something like the 'key' sequence in the XORed LSBs? But then, why not in the 2L? Maybe a 'Tidal' watermark?
 

