Bruno Putzeys on audio pricing

Sean Olive's most recent posting is very interesting. Apparently in double-blind level matched trials, college kids preferred the most accurate loudspeakers.

http://seanolive.blogspot.co.uk/2012/05/more-evidence-that-kids-even-japanese.html

But I guess that we should disregard these findings because the test was too blunt?

For me a mandatory purchase requirement is that I can listen to a pair of speakers for well over an hour without my ears getting tired, which makes short blind tests almost a waste of time for me. Testing on brief samples is (as I have mentioned before) what led Coca-Cola to develop its sweeter "New Coke" on the basis of sips, whereas when the tests were repeated with a full can the original formulation was much preferred.

As for the recently reported tests that "proved" musicians couldn't identify expensive antique violins blind, I have to say I am more inclined to trust the opinions of the top violinists than the scientific test results. I have also seen blind tastings of wines that I have drunk myself and totally disagreed with; when repeated, the blind tests produced uncorrelated results. I think needledrops eliminate many of the criticisms of double-blind tests, and it would be interesting to see more use of them (but not on pfm, for well-known reasons).

Nic P
 
In any test aimed at differentiating two entities, a positive cannot be faked: it's always a positive. A negative result always raises questions over the test methodology.

For instance, if you intend to demonstrate the superiority of one camera lens over another, it's very easy to set up a test that makes everything look the same: use the wrong RAW processor, use the wrong image target, fail to lock up the mirror, shoot at the wrong aperture, use too high an ISO, omit sharpening ... all will act to level differences.

But when the method is correct, wide variation is seen. In fact, meaningful differentiated data is the acid test of the methodology. So yes, when we see so many test results claiming negatives that are out of step with real-world situations, we should be skeptical about the tests themselves.

Even a blunt tool will sometimes generate positive results; you'd expect that, since speakers are after all the most grossly differing components of the system: you'd have to be deaf, not 'blind', to miss the difference.
 
In any test aimed at differentiating two entities, a positive cannot be faked: it's always a positive.

False positives and negatives exist.

A negative result always raises questions over the test methodology.

Typically if there is a preconceived idea of one thing being better/worse than another.

So yes, when we see so many test results claiming negatives that are out of step with real-world situations, we should be skeptical about the tests themselves.

This would be something like a blind test showing no differences between a £250 amplifier and a £5000 amplifier, yet differences suddenly become apparent on sighted listening?
 
Sean Olive's most recent posting is very interesting. Apparently in double-blind level matched trials, college kids preferred the most accurate loudspeakers.

http://seanolive.blogspot.co.uk/2012/05/more-evidence-that-kids-even-japanese.html

But I guess that we should disregard these findings because the test was too blunt?

Interesting that he doesn't state which speaker is which (unless I missed it), and exactly how they went about concluding which were the more accurate ones. Did they rely on manufacturers' "specs" or did they actually measure the speakers "in situ"?

Not picking holes in the actual results. Assuming the statements regarding speaker accuracy are, errm, accurate, I'm actually quite heartened by the results. That said, without access to the actual study it's just not possible to determine its credibility, so it should be taken with a pinch of salt.
 
False positives and negatives exist.

Indeed, but not in this kind of test: a lens cannot fake better performance, but poor test methodology can easily make it look bad.

Similarly, if someone reliably hears more information or fewer artefacts with a given cable, and is able to consistently identify it 'blind', there is no possibility of it being a false positive. However, if the test subject fails to identify a difference, it always leaves open the possibility of a false negative - that the test conditions are faulty.
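
To put a rough number on that asymmetry: in a forced-choice blind comparison, the odds of a false positive can be read straight off the binomial distribution. A minimal sketch in Python, assuming a simple ABX-style test with a 50/50 guessing chance per trial (the trial counts are purely illustrative):

```python
from math import comb

def guessing_probability(correct, trials):
    """Probability of scoring at least `correct` out of `trials`
    two-way forced-choice comparisons by pure guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# A listener who gets 14 of 16 trials right: guessing alone would manage
# that barely 0.2% of the time, so a sustained positive is hard to fake.
print(guessing_probability(14, 16))   # ~0.002
```

A failed run, by contrast, only tells you that this listener, with this material, under these conditions, didn't score above chance; it cannot by itself separate "no audible difference" from "a difference masked by the test setup".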
 
Interesting that he doesn't state which speaker is which (unless I missed it), and exactly how they went about concluding which were the more accurate ones. Did they rely on manufacturers' "specs" or did they actually measure the speakers "in situ"?

The methodologies are in the video and PDF he links to: http://db.tt/eZ7HGbaw

Speakers were measured anechoically.

Not picking holes in the actual results. Assuming the statements regarding speaker accuracy are, errm, accurate, I'm actually quite heartened by the results. That said, without access to the actual study it's just not possible to determine its credibility, so it should be taken with a pinch of salt.

Indeed, access to the AES paper would be ideal, but for me the take-home message is that if a piece of equipment is measurably better in performance (assuming those differences are accepted as being audible), it will be obvious in a level-matched double-blind trial.
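
On the level-matching point, it is worth spelling out how mechanical that step is: the clips under comparison are scaled so their measured levels agree before anyone listens. A minimal sketch, assuming the two captures are already loaded as floating-point sample arrays (the array names and contents are placeholders, not real measurements):

```python
import numpy as np

def rms(x):
    """Root-mean-square level of a sample array."""
    return np.sqrt(np.mean(np.square(x)))

def match_level(reference, other):
    """Scale `other` so its RMS level equals that of `reference`."""
    return other * (rms(reference) / rms(other))

# Placeholder captures of the same passage through two devices,
# as mono float arrays in the range -1..1.
clip_a = np.random.uniform(-0.5, 0.5, 48000)
clip_b = np.random.uniform(-0.3, 0.3, 48000)
clip_b_matched = match_level(clip_a, clip_b)

# Residual level difference in dB: effectively zero after matching.
print(20 * np.log10(rms(clip_a) / rms(clip_b_matched)))
```

In practice loudspeaker trials are matched with a meter at the listening position rather than on file RMS, but the principle is the same: remove the loudness cue so only the remaining differences are judged.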
 
Similarly, if someone reliably hears more information or fewer artefacts with a given cable, and is able to consistently identify it 'blind', there is no possibility of it being a false positive. However, if the test subject fails to identify a difference, it always leaves open the possibility of a false negative - that the test conditions are faulty.

Or that no difference exists.
 
Similarly, if someone reliably hears more information or fewer artefacts with a given cable, and is able to consistently identify it 'blind', there is no possibility of it being a false positive. However, if the test subject fails to identify a difference, it always leaves open the possibility of a false negative - that the test conditions are faulty.

Slight change of topic, but that is an interesting "if". Can you reference any properly conducted blind test where such differences with a cable have been identified?

Tim
 
Slight change of topic, but that is an interesting "if". Can you reference any properly conducted blind test where such differences with a cable have been identified?

Tim

A properly constructed needledrop test would do this trivially. If I had the equipment and a route to publish it, I would have done so. Pfm now cannot publish it, and before then no one with the appropriate reputation for neutrality did so ... a pity. IMO cable differences are clearly audible under such conditions. FWIW I also happen to think that esoteric cables are massively overpriced.
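
For what it's worth, the mechanical part of such a test is easy to script. A minimal sketch, assuming the needledrop captures already exist as audio files (the filenames are purely illustrative): each capture is copied out under a shuffled, anonymous label and the mapping is kept in a private key file for unblinding afterwards.

```python
import csv
import random
import shutil
from pathlib import Path

# Illustrative inputs: the same pressing captured through different cables.
captures = ["cable_a.wav", "cable_b.wav", "cable_c.wav"]

# Anonymous labels, shuffled so the order gives nothing away.
labels = [f"sample_{i}.wav" for i in range(1, len(captures) + 1)]
random.shuffle(labels)

out_dir = Path("blind_test")
out_dir.mkdir(exist_ok=True)

# Copy each capture under its anonymous name and record the mapping privately.
with open("key.csv", "w", newline="") as key_file:
    writer = csv.writer(key_file)
    writer.writerow(["label", "original"])
    for original, label in zip(captures, labels):
        shutil.copy(original, out_dir / label)
        writer.writerow([label, original])

# Publish the contents of blind_test/; keep key.csv private until the votes are in.
```

The only care needed beyond this is making sure the captures themselves are level-matched and free of tell-tale metadata before they go out.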

Nic P
 
This would be something like a blind test showing no differences between a £250 amplifier and a £5000 amplifier, yet differences suddenly become apparent on sighted listening?

Or quick A-B tests show no demonstrable difference between them, but when listening to a whole album side or CD, one gives pleasure but the other gives fatigue.
 
Or quick A-B tests show no demonstrable difference between them, but when listening to a whole album side or CD, one gives pleasure but the other gives fatigue.

I changed the Naim 250 in my main system first for a TEAD Linear A and then for a Hovland Radia. Could I spot the differences 100% of the time in a blind test? I doubt it. Sue and I now listen to our system about five times as much as we did with the 250. I think this supports your view.

Nic P
 
A properly constructed needledrop test would do this trivially. If I had the equipment and a route to publish it, I would have done so. Pfm now cannot publish it, and before then no one with the appropriate reputation for neutrality did so ... a pity. IMO cable differences are clearly audible under such conditions. FWIW I also happen to think that esoteric cables are massively overpriced.

Nic P

You can do it easily, as I have said before: get a free SoundCloud account and just point people to the downloads, no links required.
 
If we're testing for biology, we talk biology. If we're measuring change, we deal in measurements. If we're assessing perception, we (rather embarrassingly) have to discuss how things seem and feel.

The trouble is we're ashamed to voice such nebulous-seeming ideas as 'listener fatigue', a 'slight sense of unease' or 'something not quite right', which are actually reports from our powerful subconscious auditory data-processing engine. In 'scientific' test circumstances, we might dismiss these vague 'sentiments' as inadmissible - but that is exactly what a perception test is looking for.

Obviously, if a listener is experienced enough to translate those sense-impressions into 'Cable A has lower capacitance than Cable B' - great. But that's a big ask.

The 'character' of a component often takes a long time to get a handle on, and is only revealed by subtractive modeling from a system you're already familiar with. For me, rapid switching is just confusing. I'm much more interested in devising setups and tests that make it easy for listeners to hear consistent, characteristic differences than in ones designed to obfuscate them.

Again, we hit the issue of why a test is being conducted, as well as the problem of truly unobtrusive observation.
 
How can you possibly use music as a source to test perception? You just told everyone that everyone will perceive it differently.

Use your brain.
 

