Microphony III

Please read my post #554.

Yep, I did. You are incorrect. Noise is not consistent, and the non-synced clocks will make a difference.

The premise of comparing that to a WAV file is flawed. It has all been discussed elsewhere.

The premise that time-domain issues cannot be seen in the frequency domain is also incorrect. How is it that we manage to measure fantastically small levels of clock jitter and phase noise in the frequency domain?
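To illustrate that last point (a toy sketch of my own, with assumed numbers, nothing to do with the tests under discussion): even nanosecond-scale sinusoidal jitter on a sampling clock shows up clearly as discrete sidebands around a carrier in an FFT.

```python
import numpy as np

# Sketch: 2 ns of sinusoidal clock jitter on a 3 kHz carrier.
# In the time domain the wobble is invisible to the eye; in the
# spectrum it appears as sidebands roughly 94 dB below the carrier.
n, fs = 65536, 48000.0
f_c = 3000.0                 # carrier frequency, exactly FFT bin 4096
f_j = 600 * fs / n           # jitter frequency, exactly 600 bins away
jitter = 2e-9                # 2 ns peak timing jitter (assumed)

t = np.arange(n) / fs
t_jittered = t + jitter * np.sin(2 * np.pi * f_j * t)
x = np.sin(2 * np.pi * f_c * t_jittered)

spec = np.abs(np.fft.rfft(x * np.hanning(n)))
spec /= spec[4096]                               # normalise to the carrier
sideband_db = 20 * np.log10(spec[4096 + 600])    # upper jitter sideband
```

On these assumed numbers the sideband sits at about -94 dB relative to the carrier: a 2 ns timing wobble, plainly visible in the frequency domain.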
 
This is what they said they did:

• Select a specific portion of a real music track.
• Copy this portion of the track into a PC. This becomes the reference data (WAV file).
• Play the CD track in a CD player, taking the analogue outputs back into the PC using a good-quality sound card.
• Compare the CD output WAV file with the reference data using a simple alignment-and-subtraction method to produce a difference plot.
• Make a change to the accessory fit (e.g. a mains lead), while leaving the rest of the system untouched, and repeat with the same track.
• Again compare the CD output WAV file with the reference data to produce a second difference plot.
• Compare the before and after difference plots.

What's wrong with that? They have compared the difference with and without the cables, supports, etc. That's what we want to see.
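For concreteness, the "simple alignment and subtraction" step described above could be sketched like this (a minimal whole-sample version; the function and its details are my illustration, not their actual code):

```python
import numpy as np

def difference_plot(reference, recorded):
    """Align `recorded` to `reference` by whole-sample cross-correlation,
    then subtract, to produce a difference trace.  Illustrative sketch only."""
    n = len(reference)
    # integer-sample lag of `recorded` relative to `reference`
    corr = np.correlate(recorded, reference, mode="full")
    lag = int(np.argmax(corr)) - (n - 1)
    # circular shift is fine for a demo; real captures need trimming instead
    aligned = np.roll(recorded, -lag)
    return reference - aligned
```

Crucially, this aligns only to the nearest whole sample; with free-running clocks the true offset is generally fractional, so a residual remains even when the two signals are in fact identical.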
 
Presenting information to laymen, not scientists, so there has to be some dumbing down.

Correct practice dumbed down still looks correct.

The Hydrogenaudio thread was mainly smug people just laughing

That's HA for you. Not a very pleasant place. But it has its use, and some of the (less opinionated) posters are certified smart cookies.


, only the last post by Werner having some substance, but that is mostly guesswork.

You are not exactly in a position to know when I am guessing, right?

When such an attempt arrives, it is dismissed in one or two sweeping statements.

But then the flaws were such that the sweeping was laughably easy.

Apart from the disingenuous trick of making each and every result graph that appeared in print or on the web illegible (or do you think that was a coincidence?), there were:

1) the direct comparison of unreconstructed ripped data with a re-played, re-recorded version of the same. Ah! There are differences! Of course there are differences. Up to 100% if you want. This just shows how clueless they were w.r.t. the sampling theorem.

2) after a couple of years (I seem to remember, might have been earlier), they suddenly got the insight that proper sub-sample timing alignment was necessary for this sort of comparison. So they developed a method, and botched it. The irony: AudioDiffMaker does just this job, and had been in existence for a couple of years by then. (Not that it is perfect.) But this does not matter: the unreconstructed issue was still there, making any effort utterly worthless.
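Sub-sample alignment of the sort AudioDiffMaker performs is commonly done by interpolating the cross-correlation peak. A sketch of the standard parabolic-interpolation approach (my own illustration, not AudioDiffMaker's code):

```python
import numpy as np

def subsample_lag(a, b):
    """Estimate the lag of `b` relative to `a` to sub-sample precision:
    find the integer cross-correlation peak, then refine it by fitting
    a parabola through the peak and its two neighbours."""
    corr = np.correlate(b, a, mode="full")
    k = int(np.argmax(corr))
    y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
    frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return (k - (len(a) - 1)) + frac
```

Even done correctly, though, this only aligns the captures; it does nothing to cure the comparison against unreconstructed sample data.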


In summary: they used a broken method, a method which by its very flaws guaranteed that significant before/after differences would be found. They took these false differences, put them in illegible graphs, and made a dog and pony show out of it to wow audiophiles, while throwing references to the MoD in all of their marketing literature.

And then they went silent...




That was not only not right, it was not even wrong!
 
As the conditions are the same with and without the accessories, why am I wrong?

You are wrong. Did you not read, or did you not understand? Read above again and read Werner's comments.

I do find it pretty amusing that even after all the technical destruction of the tests by many parties elsewhere, you seem so desperate to cling to some validity of the test.
 
It has already been explained. I keep referring you to the texts elsewhere as to why.

I suggest you go away and do some reading and develop some understanding. Why would I waste my time and clutter this thread with arguing with you?

It's pretty clear you are desperate to believe a certain outcome. The technical arguing has already been done elsewhere. I have no need to rehash it.

Where elsewhere?
 
OK.

How should they have done it?

Good question. Ask a few questions first.

Why did they want to compare to a WAV file when it's not representative of the output?

Why did they not expose the equipment to a known vibration source and turn it on and off?

Why did they not do this with and without any of their "special" kit in the system?

Why did they use a sound card instead of professional instrumentation?

Why did they measure with speakers connected?

Why did they measure through a CD and amp?

Why did they not have the word clocks synced?



There are bound to be variable differences with or without vibration. The point is that they have created so many inconsistent variables, with an absolute dog's dinner of a test design.

It demonstrates either ignorance of the technicalities, or a deliberate wish to bullshit people.

..
 
So you are implying that the measurements with the accessories showed fewer errors vs the WAV file by chance? :)

Chance. Or cherry picking. Yes, that is a grave accusation.

At any rate, the very few quantitative details that could be gleaned from the graphs suggested error magnitudes, even in the best case, of such an order as to beggar any belief.

That's a great quote.

Pauli. You know, the bloke who explains why some sodium vapor lamps make orange light, and others near-white light.


Could you explain this further please?

The ripped data they use as a reference contains unreconstructed sample data points.

Upon replay these points get reconstructed in the, erm, reconstruction (oversampling, anti-imaging, whatever) filter, resulting in a smooth curve in the analogue domain. This signal is then sampled by their measurement ADC. The replay and record clocks are not related, so they feature mutual temporal offset, drift, and jitter.

In other words, even with a perfect/ideal replay DAC and a perfect/ideal record ADC, the chances of hitting the very same data points, and keeping on hitting them, are zero.
So whatever comes out of the ADC must never be compared to the ripped unreconstructed data. Such a comparison is meaningless.
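A toy illustration of that point (a sketch with assumed numbers, nothing to do with their actual gear): sample the same ideal 1 kHz sine on two grids offset by half a sample, and the raw sample values already disagree substantially, with zero actual signal difference.

```python
import numpy as np

# Toy numbers: the same ideal 1 kHz sine, sampled on two 44.1 kHz grids
# that are offset by half a sample (unrelated clocks make some such
# offset inevitable).  The underlying analogue signal is identical.
fs, f = 44100.0, 1000.0
t = np.arange(2048) / fs
ripped = np.sin(2 * np.pi * f * t)                   # "reference" grid
recaptured = np.sin(2 * np.pi * f * (t + 0.5 / fs))  # ADC's own grid

# the raw sample values still disagree, here by roughly -23 dBFS
err = np.max(np.abs(ripped - recaptured))
```

Identical analogue signal, apparently large "difference": exactly the false positive a sample-versus-sample comparison manufactures.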


How should they have done it?

Such difference tests are very hard to do properly. This helps:

-use only reconstructed signal as reference

-lock the playback and record clocks together

-assess any impact of anti-imaging and anti-aliasing filters, and move them out of harm's way, if possible

-characterise the jitter for the entire loop. Learn from this what sort of differences to expect even when nothing changes

-test the entire setup for known zero-difference cases and for known big-difference cases. Is it reliable?
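To illustrate why the clock-locking item matters (a sketch with toy numbers of my own, not a model of any specific kit): even with ideal converters, a few tens of ppm of unlocked-clock drift leaves a residual in a null test where literally nothing has changed.

```python
import numpy as np

fs, n = 44100.0, 4096
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * 997.0 * t)   # the reconstructed reference

def capture(drift_ppm):
    """Model an ideal ADC whose clock runs `drift_ppm` fast, sampling
    the same continuous sine on its own slightly different grid."""
    t_adc = np.arange(n) / (fs * (1.0 + drift_ppm * 1e-6))
    return np.sin(2 * np.pi * 997.0 * t_adc)

null_locked = np.max(np.abs(signal - capture(0.0)))   # locked clocks
null_drift = np.max(np.abs(signal - capture(50.0)))   # 50 ppm of drift
```

With locked clocks the null is perfect; with a mere 50 ppm of drift a residual of a few percent accumulates by the end of even this short capture, with nothing in the "system" changed at all.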
 
I suggest that you read the extensive threads discussing it; Werner posted the links.
IIRC one of the many flaws in the experiment was that they tried to prove their conclusion using a "measurement" which was dimensionally meaningless: a sort of diagonal where x and y were in unrelated dimensions (the result would change depending on how you chose the units). I seem to remember SQ posting on that point.
But it's years ago, memory is dim, there were many flaws, and it was all extensively debated at the time.

Why didn't they just show the voltage difference against time? :confused:
 
Yes, we've read the later blurbs. If anything, they made their pit deeper. The really last thing they wrote on the subject is on the website:

At the height of this collaboration, during 2010 and 2011, presentations on this subject were given at audio shows both in the UK and in the US, and whilst this generated a good level of interest in this work, it didn’t produce a source of revenue to fund this work. And so, for a while, both Steve and Gareth had to back away from this work in order to focus on activities within Vertex AQ and Acuity Products that generated income into their individual companies.

But 2014 could be just the year to start to move things forward once again??


If you're really interested read back all that was posted on the various serious audio forums in the relevant period and try to understand it. It's all in there, really no need to rehash it here. Let it rest.

At least until the clowns re-emerge for an encore.

I.e. sales did not increase as a result of the work. They found themselves having to actually fund their own research, and they choked.
 
1) the direct comparison of unreconstructed ripped data with a re-played, re-recorded version of the same. Ah! There are differences! Of course there are differences. Up to 100% if you want. This just shows how clueless they were w.r.t. the sampling theorem.

2) after a couple of years (I seem to remember, might have been earlier), they suddenly got the insight that proper sub-sample timing alignment was necessary for this sort of comparison. So they developed a method, and botched it. The irony: AudioDiffMaker does just this job, and had been in existence for a couple of years by then. (Not that it is perfect.) But this does not matter: the unreconstructed issue was still there, making any effort utterly worthless.


In summary: they used a broken method, a method which by its very flaws guaranteed that significant before/after differences would be found. They took these false differences, put them in illegible graphs, and made a dog and pony show out of it to wow audiophiles, while throwing references to the MoD in all of their marketing literature.

And then they went silent...




That was not only not right, it was not even wrong!
Thanks for clarifying my layman's suspicions.
 
Earlier I mentioned the problem with measuring with the speakers connected.

Here are two plots taken from the output of my TAG power amp. Please excuse the mains pickup; don't worry, it's not of any relevance. (Note to self: bring home the shielded twisted-pair cable.)

The amp is on with no signal going in. The first is with silence in the room, the second is with music playing out of the other speaker at about 80 dB(A). BTW there is no crosstalk between the power amp channels; they are five totally separate amps in the same case, with separate transformers, power supplies and grounds.

The silent speaker is acting as a microphone and screwing up the measurement.

[Plot 1: TAG power amp output, silence in the room]


[Plot 2: TAG power amp output, music playing from the other channel]
 
BTW there is no crosstalk between the power amp channels,

Did you verify that, because ...

The silent speaker is acting as a microphone and screwing up the measurement.

I find that a bit hard to believe. I believe the TAG is a regular high-feedback transistor amp, with a very very low output impedance, especially at low frequencies. This should make it very hard to impress an external signal on the amp's outputs.

The speaker, on the other hand, is a very inefficient transducer, hobbled by a large output impedance.
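A back-of-envelope calculation behind that objection (all numbers are illustrative assumptions, not measurements of the TAG amp or of any particular speaker):

```python
import math

# The silent speaker, driven acoustically, generates a back-EMF, but
# that EMF works into the amp's tiny output impedance.  The voltage
# divider Zout / (Zout + Zvc) sets how much survives at the terminals.
z_out = 0.05   # ohms: typical high-feedback solid-state output impedance
z_vc = 6.0     # ohms: nominal voice-coil impedance
emf = 10e-3    # volts: assumed acoustically induced back-EMF

v_terminals = emf * z_out / (z_out + z_vc)
attenuation_db = 20 * math.log10(z_out / (z_out + z_vc))
```

On these assumed numbers the speaker's EMF is knocked down by about 42 dB at the amp terminals, which is why a speaker-as-microphone signal surviving at the amp output is surprising.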
 
Did you verify that, because ...



I find that a bit hard to believe. I believe the TAG is a regular high-feedback transistor amp, with a very very low output impedance, especially at low frequencies. This should make it very hard to impress an external signal on the amp's outputs.

The speaker, on the other hand, is a very inefficient transducer, hobbled by a large output impedance.


I found it hard to believe too, but nonetheless.

I was going to measure the power amp's potential for microphony. I don't have a non-inductive dummy load to hand, so I decided to measure with the speaker attached, and that's what happened.

There is no potential for crosstalk: the amps are physically and electrically isolated. It's also not there with the speaker disconnected. The other amp was being fed from a battery-powered laptop.

I'm going to repeat the measurement and see if I can find any erroneous cause.
 
The signal is below the hum, so very small indeed. OTOH crosstalk seems more likely.

Paul

The hum is not from the amp :) We were looking right down in the noise floor for microphony in the DAC and preamp, at levels considerably lower than this.
 
There is no potential for cross talk.

Never say never.

The high induced hum in the upper trace is cause for concern. It may well be that curing it also cures the speaker-induced signal in the lower trace.

Was the silent amp's input open or shorted?
 

