MQA beginning to see the light?

That is not quite correct. The output of full MQA decoding is a quad-rate signal, or perhaps even higher. The higher the input signal rate, the less processing (i.e. oversampling) a DAC chip has to do. If, for instance, oversampling with reconstruction filtering to 8x rate has been done externally, then the DAC itself won't do any real oversampling with filtering any more; all it has left to do is the final (and often crude) oversampling stages that form part of the delta-sigma modulation. But it won't touch what it sees as the baseband signal.

When I mentioned the non-MQA DAC degrading the audio I was thinking of two things:

1) The statement from Paul of PS Audio that they found the test MQA decoder, when attached to their DAC, produced inferior sound (though I'm not sure compared to what).

2) One of the patents mentions the possibility of a (non-MQA) DAC degrading the sound through its own DSP.
 
Slight update:

I've now extended my bitfreezing program to permit freezing up to 12 bits per sample. As a test I did a 10-bit bitfreeze on the Magnificat 192k/24 file, and the result, when run through 'flac -8', came out at 46 million bytes, i.e. smaller than the MQA 48k/24 flac. This is pretty much what I'd expected.

I did a sample-by-sample 'diff' and the difference signal is inaudibly low. Neither by ear nor by spectral analysis does it show any sign of being modulated by the music. The implication is that the bitfreezing hasn't made any audible change to the result.
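
For anyone who wants to repeat that check, here is a minimal sketch of that kind of comparison (not the exact program, and the assumption that both versions have first been decoded to raw little-endian signed 32-bit samples is purely for illustration):

/* diffcheck.c - compare two raw LPCM files sample by sample and report
 * the peak and RMS difference relative to 24-bit full scale. */
#include <stdio.h>
#include <stdint.h>
#include <math.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s original.raw frozen.raw\n", argv[0]);
        return 1;
    }
    FILE *fa = fopen(argv[1], "rb");
    FILE *fb = fopen(argv[2], "rb");
    if (!fa || !fb) { perror("fopen"); return 1; }

    int32_t a, b, peak = 0;
    long long n = 0;
    double sumsq = 0.0;

    /* Walk both files in step, accumulating the difference signal. */
    while (fread(&a, sizeof a, 1, fa) == 1 && fread(&b, sizeof b, 1, fb) == 1) {
        int32_t d = (a > b) ? a - b : b - a;
        if (d > peak) peak = d;
        sumsq += (double)d * (double)d;
        n++;
    }
    fclose(fa);
    fclose(fb);
    if (n == 0) { fprintf(stderr, "no samples read\n"); return 1; }

    /* Report relative to 24-bit full scale (2^23). */
    const double fs = 8388608.0;
    double rms = sqrt(sumsq / (double)n);
    printf("samples: %lld\n", n);
    printf("peak diff: %ld (%.1f dBFS)\n", (long)peak,
           20.0 * log10(peak > 0 ? peak / fs : 1e-12));
    printf("rms diff : %.2f (%.1f dBFS)\n", rms,
           20.0 * log10(rms > 0.0 ? rms / fs : 1e-12));
    return 0;
}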

Since the process doesn't involve any downsampling, aliasing, or any 'selection' of what HF matters, it seems a simpler method than MQA as well as giving a smaller result.

And I'd suspect that starting with higher sample rate sources you could freeze *more* bits per sample because the total process noise would tend to be higher.
 
You are missing the point (although I agree with you in principle). When it comes to music enjoyment, I care not whether it sounds better because of a better master, better lossy compression, better secret sauce, etc. Only that it actually sounds better. And to my ears, the same test tracks on Tidal HiFi sounded way better than the Spotify ones. It was consistent across all the test tracks, although more noticeable for certain tracks.
Well that may be the case, but as regards format choice beyond 16/44 (or frankly above 256 AAC) it remains a live issue whether there are any improvements to be had. Consistent low-end distortion does not sound likely from any sensible codec, and it certainly doesn't come from 16/44. Whatever hi-rez might give us, it probably won't be that.
 
One thought about an MQA decoder with raw HiRes LPCM output is that this is NOT the precious master, thanks to the origami aliasing all over the HF. It may well sound better at first listen than the master, thanks to the digital artifacts.
 
Yes. One of the 'issues' that hovers over MQA for me is the extent to which aliasing or other artifacts may "add salt to the egg" as a replacement for any original "loss of flavour". You'd need to compare with an unaffected original to tell. But if it is added salt then it's an "effect", not actual fidelity in engineering terms. And adding salt to *everything* might end up being a bad idea.

At this stage it's impossible to tell, of course.

BTW I've now linked up the MQA pages I've written thus far. I plan another one on these sorts of general issues, but the 'round tuit' hasn't arrived yet. :)
 
I've been reading some of the journal papers, etc, that relate to MQA. I noticed something that seems an interesting 'aside'.

At the start of section 3.2 of AES Convention Paper 9178 (LA Oct 2014) there is a graph showing spectral noise floors for a variety of recordings, old and new, up to 96kHz. They virtually all give total noise levels well above 16bit. So bitfreezing or simply carefully resampling down to 16bit rather than 24bit would generally mean smaller files that contain the same musical info.
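
Reading 'carefully' as 'with dither at the new bit depth', a minimal sketch of the requantisation step might look like the following; the rounding scheme and the crude rand()-based TPDF dither are illustrative assumptions only:

#include <stdint.h>
#include <stdlib.h>

/* Requantise one 24-bit sample (range +/-2^23) to 16 bit using TPDF dither
 * of roughly +/-1 LSB at the 16-bit level. Purely illustrative: a real
 * converter would use a better random source than rand() and probably add
 * noise shaping as well. */
static int16_t to16bit_tpdf(int32_t s24)
{
    /* One 16-bit LSB is 256 at 24-bit scale; two uniform randoms summed
     * give a triangular (TPDF) dither spanning about +/-256. */
    long dither = (rand() % 256) + (rand() % 256) - 256;
    long v = (long)s24 + dither;

    /* Round to the nearest 16-bit step, then clip. */
    v = (v + (v >= 0 ? 128 : -128)) / 256;
    if (v >  32767) v =  32767;
    if (v < -32768) v = -32768;
    return (int16_t)v;
}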
 
Yup. Sums it up, doesn't it? Why bother with all this bit-stacking, alias-packing flim flam?
 
Interesting read; so the paper is also saying that 192k and 384k are a complete waste of time for nearly all sources.
I do get nervous when Kunder is referenced. That 7us figure is abused so often, with too many people misled into thinking 192k is required to better 7us resolution.
 
It makes sense to have a sample rate that gives some headroom between the highest components of the music and the Nyquist limit. If nothing else, it allows for simpler filtering, etc. You also have to make some allowance for the *averaged* spectra not showing up rare transient events. The averaged result tells you the total amount of information, but not how to faithfully preserve it.

But I've never really understood the focus on 24bit. I have wondered if it arose for two reasons.

1) That people think in terms of bytes, so simply went from 2 to 3 bytes per sample. A decision habituated by the conventions of computing in recent decades, which have largely forgotten systems that *didn't* base their word sizes on multiples of 8 bits.

2) That people have no real understanding of how LPCM works when it comes to methods like noise shaping. So they don't understand how you can get audible resolution and dynamic range (e.g. a low noise floor) much higher than the bald value implied by 2^16 (a small sketch of the idea follows below).
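
As an illustration of point 2, here is a minimal sketch of first-order error-feedback requantisation from 24 bit to 16 bit; the names and the lack of added dither are simplifications for clarity:

#include <stdint.h>

/* First-order error-feedback ("noise shaped") requantisation from 24 bit
 * to 16 bit. The quantisation error of each sample is carried forward and
 * added to the next, which pushes the requantisation noise up in frequency
 * where the ear is less sensitive, so the audible noise floor sits well
 * below the bald 2^16 figure. A real shaper would also add dither. */
static long ns_err = 0;   /* error carried over from the previous sample */

static int16_t to16bit_shaped(int32_t s24)
{
    long v = (long)s24 + ns_err;                 /* add fed-back error     */
    long q = (v + (v >= 0 ? 128 : -128)) / 256;  /* round to 16-bit steps  */
    if (q >  32767) q =  32767;
    if (q < -32768) q = -32768;
    ns_err = v - q * 256;                        /* error to carry forward */
    return (int16_t)q;
}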

Curious, really, given the enthusiasm of some for DSD, which is a classic exemplar of high-rate, *low* bit depth. Although personally I think it goes too far and then encounters problems of its own. But that's another story... :)

After that, marketing may come into it. People flocked to buy 'high power' vacuum cleaners when told they were about to be 'banned' (they weren't). Some makers had a great time selling cleaners simply because they were inefficiently designed and so used a lot of power to do no better than other cleaners that drew less power.

Maybe it's "bigger = better" marketing.

I can see the point of 96k/24 or higher when *recording* as it gives more 'space' for avoiding losses or mistakes. But once you *have* the recording I end up agreeing with Bob Stuart - that you don't really need more than 96k/16 - well produced.

The snag here is thus the usual one. That the result depends on the care and skill of those making recordings and 'mastering' what gets distributed.

When I get a chance I'll add a webpage that gives some simple examples of 'C'-type code to make clear how anyone can write a bit freezing program. And add links to some of the example programs. But I'd hope that by now the basic idea and method is pretty clear to anyone interested who fancies writing a program to do this.
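
In the meantime, here is the core of it as I'd sketch it: force the lowest n bits of each sample to zero after dithered rounding, so that 'flac -8' has less apparent information to carry. The dither and rounding details below are one plausible choice, not necessarily the exact ones the program uses:

#include <stdint.h>
#include <stdlib.h>

/* "Freeze" the lowest nbits of one 24-bit sample: add TPDF dither at the
 * frozen-bit level, round to the nearest remaining step, and clear the low
 * bits. Applied to every sample before lossless packing. Illustrative
 * sketch only. */
static int32_t bitfreeze(int32_t s24, int nbits)
{
    long step = 1L << nbits;                       /* size of one kept step */
    long dither = (rand() % step) + (rand() % step) - step;
    long v = (long)s24 + dither;

    /* Round to the nearest multiple of 'step'; multiples of 2^nbits have
     * their low nbits equal to zero, which is what the packer exploits. */
    v = (v + (v >= 0 ? step / 2 : -step / 2)) / step;
    v *= step;

    /* Keep the result inside the 24-bit range. */
    if (v >  8388607) v =  8388607;
    if (v < -8388608) v = -8388608;
    return (int32_t)v;
}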

BTW I can get to the AES papers, etc. But not some of the other references. So I may ask for help with finding some of those.
 
Did you mean "Kunchur"? I've only had a chance to look at a couple of his papers and I can't see they provided much basis for needing a wide bandwidth. Too many simpler alternative explanations for the results. However I may well have missed a key result in a paper I've not seen.

As with 24bit and noise shaping, I fear that many people may also not realise that the ability to represent timing *changes* in extended waveforms is a lot sharper than the sample interval for LPCM. Any problems here are likely to be someplace else in the chain.
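
A tiny numerical illustration of that point: sample a 1 kHz tone and the same tone delayed by 5 microseconds at 44.1 kHz (all values chosen purely as an example). The delay is far smaller than the ~22.7 us sample period, yet every sample value changes, so the timing shift is fully represented; what limits timing resolution is the noise floor, not the sample interval.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double pi = 3.14159265358979323846;
    const double fs = 44100.0;     /* sample rate */
    const double f  = 1000.0;      /* tone frequency */
    const double delay = 5e-6;     /* 5 us shift, well under the 22.7 us sample period */

    for (int n = 0; n < 5; n++) {
        double t = n / fs;
        double a = sin(2.0 * pi * f * t);
        double b = sin(2.0 * pi * f * (t - delay));
        printf("n=%d  original=% .6f  delayed=% .6f  diff=% .6f\n",
               n, a, b, a - b);
    }
    return 0;
}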
 
You are right, of course. The first person to misunderstand the significance of Dr Kunchur's results was, of course, Dr Kunchur. Anyway, in this week of all weeks, one needs to bear in mind the relative insignificance of what something means if you have an idea what you are talking about, compared with the overwhelming importance of what something sounds like it might mean if you really want it to mean that.
 

