

MQA

by “not at all” you mean “yes, absolutely”? Your opinion here is (in your very own words) useless outside your own listening experience (and preference)?
If you want to pretend to misunderstand, that's absolutely your right.
 
Two things in favour of MQA that are not up for debate:

It sounds better.
It does not make your amp explode or oscillate because - unlike lossless hi-res FLAC - lossy hi-res MQA filters away bad high frequencies and adds good high frequencies + equalisation.
 
If you want to pretend to misunderstand, that's absolutely your right.

no pretence

here you go so I do not have to:

Or I accept that you have an opinion, which is like a particular body part that everyone has.
However, if you don't have knowledge, either theoretical or empirical, your opinion is definitionally uninformed.
.... These opinions are essentially useless for the purpose of advancing understanding of the subject matter.


So please - you choose: which areas are your views “useless” in, based on what you said you have - your listening experience?
 
Two things in favour of MQA that are not up for debate:

It sounds better.
It does not make your amp explode or oscillate because - unlike lossless hi-res FLAC - lossy hi-res MQA filters away bad high frequencies and adds good high frequencies + equalisation.
You do understand that some amplifiers don't like high amplitude ultrasound at high power?

The design point is a couple of Hz to a few tens of kHz, maybe a bit more, and only at low levels at ultrasonic frequencies. High power at 100+ kHz can be an issue for some designs.

I was frankly unpleasantly surprised that DXD does this on purpose.
 
no pretence

here you go so I do not have to:

Or I accept that you have an opinion, which is like a particular body part that everyone has.
However, if you don't have knowledge, either theoretical or empirical, your opinion is definitionally uninformed.
.... These opinions are essentially useless for the purpose of advancing understanding of the subject matter.


So please - which areas are your views “useless” in?
Of course.

I am next to useless in the area of signal processing - barely enough to be dangerous - most of my dynamics experience is with analyses of mechanical systems, where the mathematics are simple and "boilerplate."

That's why I have been asking others to use their skills to move the technical side forward.

I can only offer empirical understanding.
 
the area of signal processing

great, well this makes two of us, at least.

So one area where your view is useful (the listening experience) and one small area, which incidentally underpins most of what is discussed here, where you say it's “useless” - what about the other several hundred posts? Why again would your view be any more precious than anyone else's here, outside your own listening preference? And why would you not accept my opinion? What is unfair?
 
great, well this makes two of us, at least.

So one area where your view is useful (the listening experience) and one where you say it's “useless” - what about the other several hundred posts? Why again would your view be any more precious than anyone else's here, outside your own listening preference? And why would you not accept my opinion? What is unfair?
I try not to post on technical matters, other than to encourage those with the correct skill set to exercise it.

I accept the fact that you have an opinion. However, you don't appear to have technical knowledge, and you are not an MQA user (correct me if I am wrong).

If the above is true, your opinion isn't backed up by knowledge. So, while it may seem important to you, it isn't relevant to others.
 
If the above is true, your opinion isn't backed up by knowledge. So, while it may seem important to you, it isn't relevant to others.

just like yours then - ok, outside your listening experience if I stay with what you’ve told me. And even your listening preference is only your own, nothing more, so no one should necessarily agree with it? or is that unfair too?
leaving you to reflect upon the beauty of acceptance of “other” views just like yours!
have a good evening.
 
just like yours then - ok, outside your listening experience if I stay with what you’ve told me. And even your listening preference is only your own, nothing more, so no one should necessarily agree with it? or is that unfair too?
leaving you to reflect upon the beauty of acceptance of “other” views just like yours!
have a good evening.
It's not unfair. But it is short-sighted.

Like any subjective observations, they become more and more relevant as more people accrue experience and share it with others. It's how we, as a community, learn together. It is an important and organic process that I suspect is common to all social animals.

Some will refuse to learn, because they think they already know. But that's a logical oxymoron.

That is why I believe it is a worthy goal to have more people hear MQA without specialized equipment (though it now costs *only* $150 for an iFi ZEN DAC, less than some of us spend on wire), so that we can compare notes and discuss. As more listening information becomes available, a fuller picture of the system's performance will emerge and hopefully a community consensus can be formed. And that experiential consensus can certainly be negative - I want to learn and grow. But today we are at the very beginning of this process.

Understanding complex things is, well, a complex endeavor. It took a long time for us to understand tube and solid state amplification differences, for example.

Might as well start now with MQA. Actually, the intense acrimony over this format has already put us behind schedule by several years, unfortunately.

But today is as good a day as any to get back on track.
 
Indeed you assume correctly in this case. This is a position I have held for twenty-five years. You have read my post and understand why.

Best wishes from George
George,

Your position may be wise.

For a couple of decades our community has been on a "higher sampling rate is better" quest.

This now seems somewhat misguided.

There is no musically correlated information beyond 30-35 kHz, so playback rates of 88.2 and 96 kHz seem more than sufficient. My system will reproduce up to 30-35 kHz...though I can't claim to hear that high.

Keeping the production chain at higher rates may be mathematically advantageous, to allow headroom for data manipulation, but that is strictly a data-integrity (not a musical) concern. In my line of business, I upsample test acceleration data in UERD when I integrate to velocity and displacement - just trying to keep the data clean.

However, it seems it may be detrimental to actually record at these very high sampling rates. As you point out, microphones aren't capable of working at high ultrasonic frequencies; all one gets is recorded junk, which is further exacerbated by ADC digital noise shaping relocating audio-band noise to ultrasound. Not a pretty picture, once you see the DXD FFT...

Food for thought, for sure. Thank you for bringing this up to the community's attention. Several good points made. I will give CD resolution another serious listen.

For what it's worth, MQA seems to be keenly aware of this issue (Bob Stuart is by all accounts technically brilliant). They do attempt to extract the musically relevant data out of a hi-res recording (one can see this visually in the ASR video, so all one needs is well-written code). My sense (how I would do it if I were mathematically gifted) is to FFT the recording and identify the highest frequency at which there is still musically correlated information (to some limit). I would then discard all data above that frequency. Since they spent some time at it, I would expect them to do this dynamically, as a function of the incoming signal. I would also expect a sophisticated understanding of "musically correlated."

That's the high-frequency (X-axis) point of the musical-information FFT triangle. The leftmost low-frequency point is how low this particular recording goes in frequency. The Y coordinate of those two points in the FFT triangle is the noise floor of the recording, plus some dB below it, since we can hear below the noise floor. The upper apex of the triangle is the dynamic range of the recording - the Y-axis maximum, with some margin for error. Again, I expect this parameter to be dynamically variable.

MQA claims 2000 variations of their encoder parameters. I expect these to be variations in the three points of the FFT "music information triangle" - a fairly standard information-theory implementation. I accept that they do this dynamically.
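To make that concrete, here is a rough sketch (Python with scipy) of how such a high-frequency cutoff could be estimated from a recording's spectrum. It is only an illustration of the idea as I describe it, not MQA's actual encoder; the file name, the noise-floor estimate and the 6 dB margin are all made up.

```python
# Rough sketch (a guess, not MQA's algorithm): estimate the frequency above
# which a recording's spectrum is no longer distinguishable from its noise
# floor, i.e. the right-hand corner of the "music information triangle".
import numpy as np
from scipy import signal
from scipy.io import wavfile

rate, data = wavfile.read("hires_track.wav")          # hypothetical 96 kHz file
x = (data if data.ndim == 1 else data[:, 0]).astype(np.float64)

# Averaged power spectral density over the whole track.
freqs, psd = signal.welch(x, fs=rate, nperseg=65536)

# Take the noise floor as the median level in the top quarter of the band,
# where microphone feeds rarely carry correlated musical content.
floor = np.median(psd[freqs > rate / 4])
margin_db = 6.0                                       # hypothetical "musically relevant" margin

above = psd > floor * 10 ** (margin_db / 10)
cutoff_hz = freqs[above][-1] if above.any() else 0.0
print(f"Estimated musical-content cutoff: {cutoff_hz / 1000:.1f} kHz")
```

A real encoder would presumably run something like this on short windows so the cutoff can track the programme material, rather than once over the whole file.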

What MQA attempts to do data-wise is to bring some common-sense data compaction/compression/rejection to the ridiculously noisy and musically irrelevant parts of hi-res files. What they do is much, much, much less invasive than what happens in the video world - and what we seemingly accept there.

They were certainly terrible at explaining it. I have been listening to it for years, but it took me until now to figure out what they do, data-wise. Stereophile and ASR were helpful in my particular situation. An FFT music-information triangle is a key idea here. For that to make sense, you have to be comfortable working in the frequency domain.

In my case, I have been professionally engaged in a project that involves a physical system that transitions from linear dynamics (follows input) at lower frequencies to highly nonlinear dynamics (doesn't follow input) at higher frequencies. As part of my work I had to transition from PSD (stochastic, non-input-specific) to transient definition (non-stochastic, input-specific) and back. And, for those reading and concerned, I understand the difference and the potential pitfalls. JimA and mansr should feel free to comment and criticize. It's an area of engineering that is actively debated - what I do isn't "boilerplate."

Humans have no evolutionary intuition in the frequency domain, so it surprised me how quickly I developed an understanding of the relationship between the frequency and time domains. In my particular system I can now *see* the frequency response in the time-domain response to an impact impulse.

That's why I think time-domain comparisons between MQA (decoded to LPCM with mansr's software decoder) and baseline LPCM would be super-informative.
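As a starting point, such a comparison could be as simple as the sketch below: align the two files by cross-correlation and look at the residual. The file names are placeholders, and real level matching would need more care than the crude RMS normalisation used here.

```python
# Sketch of a basic time-domain comparison: align a software-decoded MQA file
# against a baseline LPCM file of the same material and report the residual.
import numpy as np
from scipy import signal
from scipy.io import wavfile

def load_mono(path):
    rate, data = wavfile.read(path)
    x = (data if data.ndim == 1 else data[:, 0]).astype(np.float64)
    return rate, x / (np.std(x) + 1e-12)      # crude level normalisation

rate_a, a = load_mono("baseline_lpcm.wav")    # placeholder file names
rate_b, b = load_mono("mqa_decoded.wav")
assert rate_a == rate_b, "resample one of the files first if the rates differ"

# Estimate the sample offset between the two versions by cross-correlation.
n = min(len(a), len(b))
corr = signal.correlate(a[:n], b[:n], mode="full")
lag = int(corr.argmax()) - (n - 1)            # a is roughly b delayed by `lag` samples
if lag > 0:
    a = a[lag:]
else:
    b = b[-lag:]
n = min(len(a), len(b))

residual = a[:n] - b[:n]
level_db = 20 * np.log10(np.std(residual) / np.std(a[:n]))
print(f"offset = {lag} samples, residual = {level_db:.1f} dB relative to baseline")
```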

My goodness. You got me to write a post equally long to yours. I honestly didn't think this was possible.

*This post contains an aerospace engineer's understanding of signal processing, which may be incorrect/incomplete. Signal processing isn't rocket science. :)
 
Of course.

I am next to useless in the area of signal processing - barely enough to be dangerous - most of my dynamics experience is with analyses of mechanical systems, where the mathematics are simple and "boilerplate."

That's why I have been asking others to use their skills to move the technical side forward.

I can only offer empirical understanding.

That's fair enough. I can briefly add in case it helps others:

Being able to do signal analysis, etc., was a key part of my professional work for decades, applied to various areas. I only worked briefly in Hi-Fi. People can see examples of most of the areas where I applied this on my 'biography' webpages. However, some other areas that were for MoD/NATO-related tasks are skated over. (Although I did get a report published in Nature about a cleaned version of an example, and also one in New Scientist. :) )

Possibly relevant here is that the work involved steganography, combat comms/ID, and 'unscrambling eggs'. I was also for some years an "Old Crow", which some USA engineers may recognise. :)

I'm not posting much here on the forum at present as looking into MQA is quite interesting and I'm using various methods on various files. The 2L ones seem to have some curious properties...

The work is slow because my habit is to write my own computer programs, so I know what they do and can key them to whatever seems of interest, where one result may prompt a new approach to find out more. It is also slow because generating decent graphics and a clear explanatory write-up takes time.

I'll report when I have some results. FWIW the programs I've done in the past can generally be found via my webpages, with source code in 'C'. That allows others to check and improve the code if they so wish.

Slainte. :)
 
Like any subjective observations, they become more and more relevant as more people accrue experience and share it with others. It's how we, as a community, learn together. It is an important and organic process that I suspect is common to all social animals.

Some will refuse to learn, because they think they already know. But that's a logical oxymoron.

Great, I do agree, and now that we've agreed on that, we only need to add that having a view on what is done and how it sounds, as well as on the ethics of the business (yes, there is such a thing) and on how it's done, might be just as important to the community you mention, and in just the same way you describe above! And your view here is just as valid as anyone's... but no more! And let's face it, it looks like the community won't take the BS part of it! And what is there not to accept as an organic view?

And btw, I have listened to MQA for a long while, on various gear - that's part of why I no longer do, though not the only reason, although on some tracks it sounds good. Hence, I like the idea of it as a tone control at most, or as one version - but no more, not the only one, and no ketchup on everything, please. The point again, though, is that the SQ is far from the whole story. And you can only agree that this also seems to be the organic view of the community! Cheers
 
You do understand that some amplifiers don't like high amplitude ultrasound at high power?

The design point is a couple of Hz to a few tens of kHz, maybe a bit more, and only at low levels at ultrasonic frequencies. High power at 100+ kHz can be an issue for some designs.

I was frankly unpleasantly surprised that DXD does this on purpose.
Have you completely missed the part where MQA (after decoding and "rendering") is bathing in ultrasonic junk, far more than on any DXD recording? On purpose.
 
They do attempt to extract the musically relevant data out of a hi-res recording (one can see this visually in the ASR video, so all one needs is well-written code). My sense (how I would do it if I were mathematically gifted) is to FFT the recording and identify the highest frequency at which there is still musically correlated information (to some limit). I would then discard all data above that frequency. Since they spent some time at it, I would expect them to do this dynamically, as a function of the incoming signal. I would also expect a sophisticated understanding of "musically correlated."....
What MQA attempts to do data-wise is to bring some common-sense data compaction/compression/rejection to the ridiculously noisy and musically irrelevant parts of hi-res files. What they do is much, much, much less invasive than what happens in the video world - and what we seemingly accept there.

They were certainly terrible at explaining it. I have been listening to it for years, but it took me until now to figure out what they do, data-wise. Stereophile and ASR were helpful in my particular situation. An FFT music-information triangle is a key idea here. For that to make sense, you have to be comfortable working in the frequency domain.

Have you completely missed the part where MQA (after decoding and "rendering") is bathing in ultrasonic junk, far more than on any DXD recording? On purpose.
Well, apparently. After all, Dmitri claims to have just realised that MQA's schtick might have something to do with a triangle of frequency/bit depth in which music lives.
We all know this was in the original white papers and much discussed.
Obviously Dmitri's latest gambit isn't to be taken seriously. He can't possibly not know the issue with what Jim calls "lazy" downsampling. He can't possibly not know that it is preposterous to advance MQA on the basis that you don't need higher sample rates.
It's just a long boring game.
 
Well, apparently. After all, Dmitri claims to have just realised that MQA's schtick might have something to do with a triangle of frequency/bit depth in which music lives.
We all know this was in the original white papers and much discussed.
Obviously Dmitri's latest gambit isn't to be taken seriously. He can't possibly not know the issue with what Jim calls "lazy" downsampling. He can't possibly not know that it is preposterous to advance MQA on the basis that you don't need higher sample rates.
It's just a long boring game.

The 'data triangle' (not very well described by DZ) realisation is what made me suggest the simple use of noise shaping (and bit freezing) *years* ago. This makes FLAC work better for high-res as it no longer needs to specify noise with far more bits than actually required. Thus it can focus down on the real information payload.

The 2L files do vary from case to case, and from GO's files. But the general rule is that real music often has very little (or nothing) in the way of content above 22kHz. Thus to cope well you can avoid methods that add in deterministic changes (aka distortions) and just noise shape. 96k/16 noise shaped is quite well suited to the human music hearing 'triangle'. And no-one would need new kit or pay extra. Nor need to argue over the clutter that MQA may add and then not totally remove.
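For readers unfamiliar with the idea, a toy first-order version of noise-shaped requantisation looks something like the sketch below: TPDF dither plus error feedback when reducing to 16 bits, which pushes the quantisation noise upward in frequency. This is only an illustration, not the programs mentioned above; practical shapers use higher-order, psychoacoustically weighted filters.

```python
# Toy illustration of noise-shaped requantisation to 16 bits: TPDF dither plus
# first-order error feedback. Real shapers use higher-order weighted filters.
import numpy as np

def shape_to_16bit(x, seed=0):
    """x: float samples in [-1, 1). Returns int16 samples."""
    rng = np.random.default_rng(seed)
    q = 1.0 / 32768.0                        # one 16-bit LSB
    out = np.empty(len(x), dtype=np.int16)
    err = 0.0                                # previous quantisation error
    for i, s in enumerate(x):
        u = s - err                          # first-order error feedback
        d = (rng.random() - rng.random()) * q    # TPDF dither, +/- 1 LSB
        v = int(np.clip(round((u + d) / q), -32768, 32767))
        err = v * q - u                      # error to feed back next sample
        out[i] = v
    return out

# Example: requantise one second of a 1 kHz tone sampled at 96 kHz.
t = np.arange(96000) / 96000.0
pcm16 = shape_to_16bit(0.5 * np.sin(2 * np.pi * 1000.0 * t))
```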

None of that is new, and the file examinations thus far don't seem to contradict it. But I just wrote this before lunch and am now getting back to some cross-correlations... :)
 
48/24 capture, 48/32 processing and 48/16 final distribution is the sweet spot: push the noise shaping just beyond the audible range, while not needing vertical brick-wall filtering.
It is important that the output file has a true 15- or 16-bit resolution.
I wonder what the true bit depth of many CDs actually is, especially the earlier ones made with poor ADCs.
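One crude way to check is to look at how many of a rip's 16 bits ever change. The sketch below (file name is a placeholder) only detects bits that are literally stuck - it says nothing about noise-limited resolution - but it quickly exposes discs that were padded up from fewer bits.

```python
# Crude "true bit depth" check for a 16-bit CD rip: OR all samples together
# and count trailing zero bits. Only detects bits that never change.
import numpy as np
from scipy.io import wavfile

rate, data = wavfile.read("cd_rip.wav")               # placeholder; 16-bit PCM expected
samples = data.astype(np.int64).ravel() & 0xFFFF      # two's-complement bit patterns

combined = int(np.bitwise_or.reduce(samples))
if combined == 0:
    print("File is digital silence")
else:
    trailing_zeros = 0
    while not (combined >> trailing_zeros) & 1:
        trailing_zeros += 1
    print(f"Bits that ever change: {16 - trailing_zeros} of 16")
```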
 
The 'data triangle' (not very well described by DZ) realisation is what made me suggest the simple use of noise shaping (and bit freezing) *years* ago. This makes FLAC work better for high-res as it no longer needs to specify noise with far more bits than actually required. Thus it can focus down on the real information payload.

The 2L files do vary from case to case, and from GO's files. But the general rule is that real music often has very little (or nothing) in the way of content above 22kHz. Thus to cope well you can avoid methods that add in deterministic changes (aka distortions) and just noise shape. 96k/16 noise shaped is quite well suited to the human music hearing 'triangle'. And no-one would need new kit or pay extra. Nor need to argue over the clutter that MQA may add and then not totally remove.
:)
Thanks Jim. That seemed to be something of a consensus view some time ago. I look forward to seeing the fruits of your latest researches.
 
On the topic of ultrasonic junk, this is easy to show with measurements of an actual DAC. This is the spectrum of a Dragonfly Cobalt playing pink noise at 96 kHz:
[spectrum plot: pink noise at 96 kHz]


This is the spectrum when playing pink noise flagged as MQA:
[spectrum plot: pink noise flagged as MQA]


Who can spot the difference?
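For anyone who wants to repeat this sort of check on their own analogue loopback captures, a minimal sketch follows. The file names are placeholders and matplotlib is assumed for the plot; this is not the measurement setup used for the plots above.

```python
# Sketch for overlaying the spectra of two captures of the same pink-noise
# test signal (e.g. one played as plain 96 kHz PCM, one flagged as MQA).
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
from scipy.io import wavfile

for path, label in [("pink_pcm_capture.wav", "plain 96k PCM"),
                    ("pink_mqa_capture.wav", "flagged as MQA")]:
    rate, data = wavfile.read(path)                   # placeholder capture files
    x = (data if data.ndim == 1 else data[:, 0]).astype(np.float64)
    f, psd = signal.welch(x, fs=rate, nperseg=32768)
    plt.semilogx(f[1:], 10 * np.log10(psd[1:] + 1e-30), label=label)

plt.xlabel("Frequency (Hz)")
plt.ylabel("Level (dB, arbitrary reference)")
plt.legend()
plt.show()
```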
 



