MCRU music server?

Indeed, because it removes bias. Can't have that.

The test is a bias. It's not a drug trial. Blind perception tests (spot the oxymoron?) can 'demonstrate' that anything is the same as anything else: they're a parlour trick, rather like levitation.
 
The test is a bias. It's not a drug trial. Blind perception tests (spot the oxymoron?) can 'demonstrate' that anything is the same as anything else: they're a parlour trick, rather like levitation.

And what is your suggested method for testing whether you can hear a difference or not - without being influenced by perceptual and psychological bias?
 
The test is a bias. It's not a drug trial. Blind perception tests (spot the oxymoron?) can 'demonstrate' that anything is the same as anything else: they're a parlour trick, rather like levitation.
Usual arrant nonsense. The fear is nascent: properly conducted blind tests would kill your business.

Whatever happened to the blind tests you were going to organise, itey? The horrible truth nipped them in the bud?
 
And what is your suggested method for testing whether you can hear a difference or not - without being influenced by perceptual and psychological bias?

Dunno. There are no simple answers; no perfect tests, no 'one size fits all'. Otherwise there would be no disagreement.

But on balance, I tend to believe a sighted audition is the lesser of the two evils. Pragmatically, it's also the only 'test' that matters. We don't listen to our record collections without knowing how they're being reproduced. That would rather fundamentally change how we listen - and therefore what we hear - wouldn't it?

This anecdote is interesting:

“ . . . listening tests conducted by Swedish Radio (analogous to the BBC) [were] to decide whether one of the low-bit-rate codecs under consideration by the European Broadcast Union was good enough to replace FM broadcasting in Europe.

Swedish Radio developed an elaborate listening methodology called “double-blind, triple-stimulus, hidden-reference.” A “subject” (listener) would hear three “objects” (musical presentations); presentation A was always the unprocessed signal, with the listener required to identify if presentation B or C had been processed through the codec.

The test involved 60 “expert” listeners spanning 20,000 evaluations over a period of two years. Swedish Radio announced in 1991 that it had narrowed the field to two codecs, and that “both codecs have now reached a level of performance where they fulfill the EBU requirements for a distribution codec.” In other words, Swedish Radio said the codec was good enough to replace analog FM broadcasts in Europe. This decision was based on data gathered during the 20,000 “double-blind, triple-stimulus, hidden-reference” listening trials. (The listening-test methodology and statistical analysis are documented in detail in “Subjective Assessments on Low Bit-Rate Audio Codecs,” by C. Grewin and T. Rydén, published in the proceedings of the 10th International Audio Engineering Society Conference, “Images of Audio.”)

After announcing its decision, Swedish Radio sent a tape of music processed by the selected codec to the late Bart Locanthi, an acknowledged expert in digital audio and chairman of an ad hoc committee formed to independently evaluate low-bit-rate codecs. Using the same non-blind observational-listening techniques that audiophiles routinely use to evaluate sound quality, Locanthi instantly identified an artifact of the codec. After Locanthi informed Swedish Radio of the artifact (an idle tone at 1.5kHz), listeners at Swedish Radio also instantly heard the distortion. (Locanthi’s account of the episode is documented in an audio recording played at a workshop on low-bit-rate codecs at the 91st AES convention.)

How is it possible that a single listener, using non-blind observational listening techniques, was able to discover—in less than ten minutes—a distortion that escaped the scrutiny of 60 expert listeners, 20,000 trials conducted over a two-year period, and elaborate “double-blind, triple-stimulus, hidden-reference” methodology, and sophisticated statistical analysis?

The answer is that blind listening tests fundamentally distort the listening process and are worthless in determining the audibility of a certain phenomenon.”


Source: http://www.avguide.com/forums/blind-listening-tests-are-flawed-editorial . . . but they would say that, wouldn't they?
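As an aside, the statistical core of a "triple-stimulus, hidden-reference" trial is just a binomial question: with two candidates (B or C) per presentation, a guessing listener is right half the time, so the analysis asks how improbable the observed hit count would be under pure chance. A minimal sketch, using only the Python standard library (the function name and the 12-of-16 example figures are illustrative, not from the Swedish Radio study):

```python
from math import comb

def binomial_p_value(correct: int, trials: int, chance: float = 0.5) -> float:
    """One-sided p-value: probability of scoring `correct` or more
    out of `trials` by guessing alone, at the given per-trial chance rate."""
    return sum(
        comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(correct, trials + 1)
    )

# A hypothetical listener identifying the processed signal 12 times in 16 trials:
p = binomial_p_value(12, 16)
print(round(p, 4))  # 0.0384 - unlikely to be guesswork at the usual 5% threshold
```

The per-trial chance rate of 0.5 is itself an assumption about the protocol; a forced choice among more alternatives would lower it accordingly.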
 
Usual arrant nonsense. The fear is nascent: properly conducted blind tests would kill your business.

Whatever happened to the blind tests you were going to organise, itey? The horrible truth nipped them in the bud?

“The fear is nascent?!” Literally, what?!

Blind testing wine hasn't killed the wine industry. Blind testing violins hasn't altered the value of a Stradivarius.

The audio business - reviewers, dealers, customers et al. - already uses blind testing, and it hasn't - and won't - affect what people buy. Only gross differences - and only when adjudicated by very acute listeners - are perceptible in such tests.

We might as well discuss amplifiers, which also tend to measure very similarly and can't generally be told apart blind. Which is it to be? All a scam, or none of it?

I'm still happy for Steven to host the bake off/blind test: have volunteered kit.
 
Poor analogy, itey, I've participated in quite a few blind tests in the wine industry. You know why they do blind tests??

Not many places left to hide, huh?
 
Wine illustrates the overlap between blindness and blind testing.

In many individuals taste, like hearing, is enhanced when the faculty of sight is disabled. And not knowing what you're drinking, of course, removes expectation bias and all that shiraz.

The fact is we've all read hundreds of lab reviews of transports, DACs, preamps and amplifiers that measure near-as-dammit identically. And yet we still buy expensive ones. Either people are really stupid, or that last 0.002% distortion isn't actually what makes an amplifier reproduce a piano more or less realistically within the parameters of a given system.

You won't stop people listening to their audio equipment.
 
You really have lost it this evening, itey, sorry to say. I'm going to watch (apparently) a German film set in Venice and dubbed into French. Can I suggest you head down to the local and chuck some darts at a board? Much easier to understand.
 
How is it possible that a single listener, using non-blind observational listening techniques, was able to discover—in less than ten minutes—a distortion that escaped the scrutiny of 60 expert listeners, 20,000 trials conducted over a two-year period, and elaborate “double-blind, triple-stimulus, hidden-reference” methodology, and sophisticated statistical analysis?

A good question.

The answer is that blind listening tests fundamentally distort the listening process and are worthless in determining the audibility of a certain phenomenon.”

An odd conclusion, unsupported by argument.

It seems to me that it is easier to demonstrate the fallibility of sighted tests than it is the fallibility of blind tests.

One trick I have seen (and I have to admit I have tried this myself) is to do a sighted test where, unknown to the listener, A & B are identical. Often the listener is sure there is a difference. Sometimes, perhaps, even "night and day" :)

Tim
 
It seems to me that it is easier to demonstrate the fallibility of sighted tests than it is the fallibility of blind tests.

That's an excellent point - fundamentally. The problem I have with the audio industry is that it relies much too heavily on marketing the easily demonstrable.

Obviously, sighted listening is fallible and expectation bias is undesirable. Less obvious - but no less pernicious - is that blind tests make the listener the subject, not the equipment. The question posed is no longer: “Are A and B different?” It becomes: “Am I a good enough listener to differentiate them?” multiplied by “Are they in fact different at all? Or am I being tricked?”
See below . . .

Most of what you think you perceive is based on models you built up slowly beforehand. In a blind audition, the subject is deprived of that pattern-modelling framework and asked to report a conclusion, under pressure, from novel, unflagged sense information.

The really interesting test would be to compare the brain states (as far as we can infer them) of a subject during a blind test and in relaxed listening. That would put the conclusions in better perspective.

One trick I have seen (and I have to admit I have tried this myself) is to do a sighted test where, unknown to the listener, A & B are identical. Often the listener is sure there is a difference. Sometimes, perhaps, even "night and day" :)

Tim
 
The answer is simple: they weren't expert listeners.

They were so busy trying to perform an identification task that they simply were not listening to the music, so despite their rigour the tests were poorly designed. It happens: when I worked for a well-known Swiss pharma giant, they had to pull a trial after more than 18 months because analysis showed it wasn't sufficiently powered to deliver the results it had been conceived for. That doesn't mean double-blind drug trials are an invalid method for testing the efficacy of drugs, just as your example doesn't show that blind tests are worse than sighted tests for evaluating the sound of HiFi.
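The power question raised here applies to listening trials too: before running one, you can estimate how many trials are needed to reliably detect a listener of a given skill. A rough sketch with an exact binomial test, using only the Python standard library (the 70% "true hit rate" is an arbitrary assumption for illustration, and the function names are hypothetical):

```python
from math import comb

def binom_sf(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

def trials_needed(true_rate=0.7, chance=0.5, alpha=0.05, power=0.8, n_max=200):
    """Smallest trial count at which an exact binomial test would detect
    a listener with `true_rate` accuracy at the requested power, plus the
    critical hit count. Returns None if n_max is too small."""
    for n in range(5, n_max + 1):
        # Critical region: smallest k whose chance-level tail probability <= alpha.
        k_crit = next(k for k in range(n + 1) if binom_sf(k, n, chance) <= alpha)
        # Power: probability the skilled listener reaches that region.
        if binom_sf(k_crit, n, true_rate) >= power:
            return n, k_crit
    return None

print(trials_needed())  # (n, k_crit) for the assumed 70%-accurate listener
```

The point of the sketch is that subtle effects (true rates only slightly above chance) drive the required trial count up sharply, which is one plausible reading of why a large formal trial can still miss something.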
 
The answer is simple: they weren't expert listeners.

The answer is we don't know enough about it. It is one of those anecdotes that does the rounds and is seized on by those who want to believe it as "proof" that blind testing does not work. However, I can't find much in the way of non-anecdotal facts about the study and its results, nor answers to the obvious questions. A shame, as it is the sort of thing we could learn from.

Tim
 

