
Bruno Putzeys on audio pricing

Oh OK - I honestly didn't know where you were going with that line of enquiry. Perception is a function of the mind. Blind medical trials are effective because they separate the psychological element from the autonomous biological functions of the subject so we can tell whether a treatment really 'works' - ie, acts effectively on the subject's somatic biochemistry.

And as I have already pointed out to you, psychotropics work by altering the biochemistry. Hence the 'psychology' is dependent on the biology - you can't separate them. The test removes the expectation bias, not the effect.

However you can't remove the psychological element from the listening process: you could argue that it's 'only' a neuro-physiological process but that would be a little specious. The bottom line is that if the test method tampers with the mental state of the listener, it will inevitably distort the outcome. Is that clearer?

It's very interesting that Sean Olive has used blind trials to demonstrate the superiority of CD over MP3, and well engineered 'speakers over those less well engineered - linky. By your logic, I guess we should disregard those observations?
 
It's very interesting that Sean Olive has used blind trials to demonstrate the superiority of CD over MP3, and well engineered 'speakers over those less well engineered - linky. By your logic, I guess we should disregard those observations?

No, they are probably the only valid uses of blind trials in an audio context, because the differences between MP3 and CD, or between two different speaker designs, are comparatively gross differences. So, they'd show up.

It's like putting somebody in a 1960 Morris Minor, blindfolded, then putting them in a 2010 Mondeo. They'd easily tell which was which and, unless they were perverse, would prefer the Mondeo. ;)
 
The bottom line is that if the test method tampers with the mental state of the listener, it will inevitably distort the outcome.

Show me a test method that does not tamper with the mental state.

And show me a mental state that does not tamper with the test results.
 
Show me a test method that does not tamper with the mental state.

And show me a mental state that does not tamper with the test results.

Well, quite. So testing for changes in perception, ie changes in a mental state, is fundamentally problematic. Using a methodology from half a century ago might not be the best option.
 
No, they are probably the only valid uses of blind trials in an audio context, because the differences between MP3 and CD, or between two different speaker designs, are comparatively gross differences. So, they'd show up.

It's like putting somebody in a 1960 Morris Minor, blindfolded, then putting them in a 2010 Mondeo. They'd easily tell which was which and, unless they were perverse, would prefer the Mondeo. ;)

So you're saying that blind tests are valid when they give the expected result?

I'm glad that you like Sean Olive's work - have you read this article?
 
And as I have already pointed out to you, psychotropics work by altering the biochemistry. Hence the 'psychology' is dependent on the biology - you can't separate them. The test removes the expectation bias, not the effect.

Ultimately, yes - psychology depends on biology, then chemistry, then physics, then maths. We drift off topic, again, though. This was dealt with in earlier posts in the thread.

It's very interesting that Sean Olive has used blind trials to demonstrate the superiority of CD over MP3, and well engineered 'speakers over those less well engineered - linky. By your logic, I guess we should disregard those observations?

By mapping outcomes, we've logically shown that a positive (discriminating) cannot be false in this kind of test, whereas the compromised nature of the test method can easily produce a false negative (non-discriminating) result. It's all in earlier posts in the thread.

As I also said, I don't minimise the danger of expectation bias as (according to Sean) employees at HK did. However, (as I said) the results of tests with bias left in play frequently do not conform to type. We can't ignore that, either.

As I said, the problem is not that blind audio tests are 'too scientific' - it's that they are not scientific enough. Inappropriately borrowing a trial method from a different field, they sideline the object of the study.
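
(An aside, not from the thread: the asymmetry being argued over here is easy to put numbers on. In a blind ABX-style test, a listener who scores well above chance is hard to dismiss as guessing - a quick binomial sketch, with illustrative trial counts of my own choosing:)

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the probability of scoring at least
    `correct` out of `trials` by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 correct out of 16 trials would happen by guessing only ~3.8% of
# the time, so a 'discriminating' result is hard to write off as chance:
print(round(abx_p_value(12, 16), 4))  # 0.0384
```

A failure to reach such a score, on the other hand, is consistent both with "no audible difference" and with a test too short or too disruptive to detect one - which is exactly the asymmetry under dispute.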
 
Show me a test method that does not tamper with the mental state.

And show me a mental state that does not tamper with the test results.

Exactly the problem.

The other hickey is the reporting issue: in a medical trial, the outcome is frequently palpable - “the rash has/has not cleared up, Mrs. Robinson.”

The outcome of a perception test relies on the subject trying to describe verbally a complex and subtle brain process through subjective filters. Direct examination using MRI or other scanning techniques would give us a more credible (objective) report on what's actually going on during listening.
 
It's not the double blind test that is wrong in audio, but asking people for something as fickle as an opinion as an outcome. Measure their response, and it works fine.
 
Ultimately, yes - psychology depends on biology, then chemistry, then physics, then maths. We drift off topic, again, though. This was dealt with in earlier posts in the thread.

You're the one drifting. Try to stay on topic.

By mapping outcomes, we've logically shown that a positive (discriminating) cannot be false in this kind of test, whereas the compromised nature of the test method can easily produce a false negative (non-discriminating) result. It's all in earlier posts in the thread.

You might like to think so, but the null hypothesis is neutral. There are no positives or negatives at this stage. A difference is neither positive nor negative, only different.

As I also said, I don't minimise the danger of expectation bias as (according to Sean) employees at HK did. However, (as I said) the results of tests with bias left in play frequently do not conform to type. We can't ignore that, either.

You admit that bias exists, yet you still don't want to use methodologies that seek to minimise bias.

As I said, the problem is not that blind audio tests are 'too scientific' - it's that they are not scientific enough. Inappropriately borrowing a trial method from a different field, they sideline the object of the study.

You've still yet to produce evidence that blind trials in audio are inappropriate. I've provided you with seemingly good evidence that they are applicable with examples that they work.
 
You're the one drifting. Try to stay on topic.



You might like to think so, but the null hypothesis is neutral. There are no positives or negatives at this stage. A difference is neither positive nor negative, only different.



You admit that bias exists, yet you still don't want to use methodologies that seek to minimise bias.



You've still yet to produce evidence that blind trials in audio are inappropriate. I've provided you with seemingly good evidence that they are applicable with examples that they work.

You're talking a lot, but you're not saying anything. We've been over this.

Bias is bad: experiments should work to remove it. However the experiment must fit the objective. The devil is in the detail: flaws in the method may skew or enlarge the granularity of the results. Here, we have a psychological test. We can't just borrow a technique from drug trials and crowbar the results to suit our prejudice - any more than we can treat ADHD with a stent.

If you remove the bias and a subject can still discriminate characteristic superiority, that's an incontestably positive result: the test method is thereby proven legitimate - transparent.

If you remove the bias and the subject cannot discriminate, it may be there is no actual difference between objects, or it may be the test doesn't inherently permit fine-grained distinctions because the method is problematic. Whether you define this outcome 'positive' or 'negative' depends on the motive for the test! Similarly, a hearing test where the participants all wore pillows on their heads would produce unrealistically homogenised results. We illustrated this with lenses, before, you may recall.


My interpretation of the data is that blind audio tests prove the importance of perceptual frameworks. Which is inconvenient, because that's where the bias lives, too. Gross differences are fairly readily apparent when subjects don't know what they're listening to, but subtle ones are not. From this, it could be inferred that the test itself is too crude to be useful.

Bizarrely, you keep asking for evidence, but we all know the evidence: in blind listening tests, everyone struggles to hear differences they thought they could hear. We're discussing its logical corollaries and (apparently differing) conclusions we draw.
 
Methodologies are in the video and PDF he links to: http://db.tt/eZ7HGbaw

Speakers were measured anechoically.



Indeed, access to the AES paper would be ideal, but for me the take-home message is that if a piece of equipment is measurably better in performance (assuming that those differences are accepted as being audible), it will be obvious in a level-matched double-blind trial.

Hmm, interesting results. I'm not sure I would agree with the conclusion except in its grossest sense. The second most popular speaker certainly wasn't the second most accurate speaker (speaker C clearly is). Its FR shows a classic boom-and-tizz response (at least at 0 deg, which I'm assuming is how they were oriented relative to the listening position). I would actually have ordered the speakers as A, C, D, B - D before B because, apart from being a bit ragged, it is actually flatter than B from around 400Hz up.

So actually the listeners rated arguably the least accurate speaker as their second choice. So the correlation of 'most accurate is most preferred' falls apart somewhat.
 
But that's my point. We don't all know what they mean. I have no idea what "timing" is, or "coherence" or "musicality" or any of the other words used to describe sound. I know what a dip of 2.3dB at 10kHz sounds like, but couldn't tell anyone else, they just have to hear it for themselves.

S
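
(A side note, not part of the original post: for anyone unfamiliar with the units being discussed, a decibel figure converts to a linear amplitude ratio as 10^(dB/20), so "a dip of 2.3dB at 10kHz" does have a precise numerical meaning:)

```python
def db_to_amplitude_ratio(db: float) -> float:
    """Convert a decibel change to a linear amplitude ratio."""
    return 10 ** (db / 20)

# A 2.3 dB dip means the signal at that frequency is reduced to roughly
# 77% of its original amplitude:
print(round(db_to_amplitude_ratio(-2.3), 3))  # 0.767
```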

So the key point is that actually "a 2.3dB dip at 10kHz" is just as meaningless to the inexperienced as "better timing"??

It would certainly seem so. This is where we fall foul of believing that something has more value or meaning because it's quantifiable. As previously mentioned in passing, humans knew how to relate to each other that a new land was hotter or colder long before they had thermometers. By how much was always the real problem, and that could only come by relating to some known reference that both parties had experienced, e.g. cold enough the buffalo slept all day. Exactly the same issue exists despite the existence of (what are in effect still arbitrary) scales. Tell someone that it is 30 deg C in Rio and, unless they had experienced a 30 deg C temperature themselves, they wouldn't actually have any understanding of what the statement meant. Sure, they could guess, but they wouldn't know.
 
You're talking a lot, but you're not saying anything. We've been over this.

Perhaps you should re-read some of your self-contradicting posts.

Bias is bad: experiments should work to remove it. However the experiment must fit the objective. The devil is in the detail: flaws in the method may skew or enlarge the granularity of the results. Here, we have a psychological test. We can't just borrow a technique from drug trials and crowbar the results to suit our prejudice - any more than we can treat ADHD with a stent.

That is your opinion until you provide some evidence to the contrary. Can you actually provide any evidence as to the unsuitability of blind testing in audio?

If you remove the bias and a subject can still discriminate characteristic superiority, that's an incontestably positive result: the test method is thereby proven legitimate - transparent.

And that is what a blind test does. Only you don't seem to like them.

If you remove the bias and the subject cannot discriminate, it may be there is no actual difference between objects, or it may be the test doesn't inherently permit fine-grained distinctions because the method is problematic.

And the most parsimonious of those two scenarios is likely to be one of no difference.

Whether you define this outcome 'positive' or 'negative' depends on the motive for the test!

The test is whether a difference exists, not on motive.

Similarly, a hearing test where the participants all wore pillows on their heads would produce unrealistically homogenised results. We illustrated this with lenses, before, you may recall.

So all the participants wearing 'attenuators' homogenises the results - again interesting logic.

My interpretation of the data is that blind audio tests prove the importance of perceptual frameworks. Which is inconvenient, because that's where the bias lives, too. Gross differences are fairly readily apparent when subjects don't know what they're listening to, but subtle ones are not. From this, it could be inferred that the test itself is too crude to be useful.

So, back to critiquing the test methodology rather than considering that there really might not be the differences claimed.

Bizarrely, you keep asking for evidence, but we all know the evidence: in blind listening tests, everyone struggles to hear differences they thought they could hear. We're discussing its logical corollaries and (apparently differing) conclusions we draw.

You seem to be denying the conclusions of the evidence I present and offer none of your own.
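
(Aside, from neither poster: the "no actual difference vs. test can't resolve it" question above is, in statistical terms, one of power. A quick binomial calculation - with hypothetical numbers of my own, not from any cited study - shows how a short blind test can miss a real but subtle difference:)

```python
from math import comb

def detection_power(p_true: float, trials: int, needed: int) -> float:
    """Probability that a listener with genuine per-trial accuracy
    p_true scores at least `needed` correct out of `trials`."""
    return sum(comb(trials, k) * p_true**k * (1 - p_true)**(trials - k)
               for k in range(needed, trials + 1))

# Suppose a listener genuinely hears a subtle difference and is right
# 60% of the time. Over 16 trials they clear a 12-correct significance
# bar only about 17% of the time, so a null result is the likely
# outcome even though a real difference exists:
print(round(detection_power(0.6, 16, 12), 3))  # 0.167
```

This settles nothing between the two positions, of course; it only illustrates that a null result and "no difference" are not automatically the same thing.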
 
So the key point is that actually "a 2.3dB dip at 10kHz" is just as meaningless to the inexperienced as "better timing"??

It would certainly seem so. This is where we fall foul of believing that something has more value or meaning because it's quantifiable. As previously mentioned in passing, humans knew how to relate to each other that a new land was hotter or colder long before they had thermometers. By how much was always the real problem, and that could only come by relating to some known reference that both parties had experienced, e.g. cold enough the buffalo slept all day. Exactly the same issue exists despite the existence of (what are in effect still arbitrary) scales. Tell someone that it is 30 deg C in Rio and, unless they had experienced a 30 deg C temperature themselves, they wouldn't actually have any understanding of what the statement meant. Sure, they could guess, but they wouldn't know.

Yes, but "a 2.3dB dip at 10kHz" means the same to all those who understand the meaning of dBs and kHz. Better timing means nothing to anyone except the person making the statement, as it can mean whatever one wants it to mean.

Precision always requires knowledge. Most of the medical terms used between doctors are incomprehensible to me, as I don't have the medical training, but I am very glad there are those to whom it makes sense. I wouldn't want my GP to ask a specialist how to cure a gammy leg or gippy tummy. Just as I wouldn't want a HiFi Dealer to try and sell me an amplifier on the basis of Musicality or better PRaT.

S.
 
Perhaps you should re-read some of your self-contradicting posts.

That is your opinion until you provide some evidence to the contrary. Can you actually provide any evidence as to the unsuitability of blind testing in audio?

And that is what a blind test does. Only you don't seem to like them.

And the most parsimonious of those two scenarios is likely to be one of no difference.

The test is whether a difference exists, not on motive.

So all the participants wearing 'attenuators' homogenises the results - again interesting logic.

So, back to critiquing the test methodology rather than considering that there really might not be the differences claimed.

You seem to be denying the conclusions of the evidence I present and offer none of your own.

Please don't be offended if I stop talking to you.
 
Yes, but "a 2.3dB dip at 10kHz" means the same to all those who understand the meaning of dBs and kHz. Better timing means nothing to anyone except the person making the statement, as it can mean whatever one wants it to mean.

Precision always requires knowledge. Most of the medical terms used between doctors are incomprehensible to me, as I don't have the medical training, but I am very glad there are those to whom it makes sense. I wouldn't want my GP to ask a specialist how to cure a gammy leg or gippy tummy. Just as I wouldn't want a HiFi Dealer to try and sell me an amplifier on the basis of Musicality or better PRaT.

S.

What hi-fi dealers do to sell products is a different question! Not that I want to defend the audio trade (which even I find to contain a higher-than-average proportion of sharks and charlatans), but we are in the position of fielding a large, diverse body of opinion: we get to see thousands of reactions to equipment and recordings. Mostly people struggle to communicate what they're experiencing, but they're experiencing something real.

Sometimes they will comment on a technical issue such as a bump or dip in the frequency response; sometimes they will note specific instrumental or recording characteristics; sometimes they will just describe a gut reaction to what they're hearing. From a human viewpoint, I think it would be arrogant to say any of those responses is less valid.

There's a shared colloquial frame of reference in which the term 'musical' captures something helpful about their listening impressions. It's not a term I find useful personally, but if a customer describes something they don't like, and asks for something more 'musical', I understand them best if I've been able to establish what pushes their buttons - which varies with the individual.
 
item doesn't do "evidence".

Tim

I know I go on, but it's galling to have to keep repeating myself: we're not debating evidence; we're discussing the conclusion. If we can't distinguish the two, the conversation really is doomed.

The evidence is this: blind listening tests overwhelmingly produce results in which subjects fail to distinguish what's in front of them.

Very interesting. But to equate the credibility of blind medical trials with psych tests is lazy and ill-considered, as I've explained painfully and at length. Enough said, perhaps.
 
The evidence is this: blind listening tests overwhelmingly produce results in which subjects fail to distinguish what's in front of them.

Only in certain categories. I've not heard of failure to identify differences in loudspeakers, or turntable systems, for example?

There is still value in blind tests for, e.g., loudspeakers, because you will focus on the sound, not the appearance or the design prejudices you might have (metal tweeters sound grating, ports chuff, etc).

Tim
 
I know I go on, but it's galling to have to keep repeating myself: we're not debating evidence; we're discussing the conclusion. If we can't distinguish the two, the conversation really is doomed.

The only conclusions are your opinions - seemingly based on preconceptions that you don't like to have challenged. Hence your unwillingness to present any evidence that supports your 'conclusions'.

The evidence is this: blind listening tests overwhelmingly produce results in which subjects fail to distinguish what's in front of them.

Where is this evidence? You've provided none yet.

Very interesting. But to equate the credibility of blind medical trials with psych tests is lazy and ill-considered, as I've explained painfully and at length. Enough said, perhaps.

You've demonstrated nothing.
 

