Anyone tried blind testing DACs?

Actually, I think the poster was suggesting, somewhat obliquely and not entirely free of snark, that measurement was the only reliable way to determine the height of the tree, and subjective impressions are variable and thus unreliable.

And so perhaps we get to the nub of all the discussion on this forum; I did not perceive that post the way that you have perceived it. Ergo, it matters not how 'well' something measures, we either like what we hear or we don't. But we knew that anyway.
 
I think at a dealer dem I’d be interested in switching between DACs without them telling me which one was in the chain. Provided of course they were level matched...
 
Octavian,

What are those pitfalls and flaws? So far you have not presented any evidence of those.

Does it really matter if I’m fooling myself into believing I hear a difference? I understand why double-blind placebo-controlled studies are done in medicine* and the stakes there are indeed high — sometimes literally life or death — but audio is just an enjoyable hobby. And if it does matter that audio kit is tested blind, should we blind test all consumer goods — toasters, hair dryers, vacuum cleaners, toilet seats, toothpaste...?

If the answer is yes, I hope you put your money where your arse sits / shits.

Joe

* I used to work with clinical researchers at a school of pharmacy, so the importance of double-blind drug testing is most definitely not lost on me.
 
It stopped being difficult to design a transparent dac for reasonable money decades ago.

There isn't really any need to blind test them any more, or sighted test them for that matter....

I just use whatever I have to hand and don't worry about it.
 
Blind listening removes bias (conscious or unconscious). This is very useful IME.

But hearing no difference blind doesn't mean there's no audible difference, and hearing no difference sighted doesn't either. Regardless of sighted or blind: just a different time of day (different ambient noise) could yield a different result; not driving two hours to a bake-off just beforehand could yield a different result; listening long-term could yield a different result; a less repetitious test with more listeners could yield a different result; different music... etc. Just some examples. Yes, it's possible to put together a listening test that would carry some scientific weight. The chance of us punters doing it is roughly zero, though. It's quite a task.
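
To put a number on the "scientific weight" bit: the sums for a forced-choice blind test are simple, even if running one properly isn't. A minimal sketch in Python, assuming a standard ABX-style protocol where each trial is a coin flip if you genuinely can't hear a difference (the 12-of-16 and 9-of-16 scores are invented examples, not real results):

[code]
# Minimal ABX significance check: given n trials and k correct
# identifications, what is the chance of scoring at least that well
# by pure guessing (p = 0.5 per trial)?
from math import comb

def abx_p_value(correct, trials):
    """One-sided binomial p-value against chance."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(12, 16))  # ~0.038: conventionally "significant"
print(abx_p_value(9, 16))   # ~0.40: indistinguishable from guessing
[/code]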

Blind is very good. It doesn't magically mean the test is absolutely conclusive and free of confounding variables. That's where some people go wrong.

But if I hear a major difference sighted and no/tiny difference blind, that indicates that the source of most of the heard difference, in the situation under test right now, is me. That can be very useful. It removes a whole class of false positives.

So in the home listening context you have to understand what blind listening gives you, and what it doesn't.
My point is that if, in half the cases, let’s say, I’m having my opinion totally reversed... then how much ‘bias’ can there be? I suspect others aren’t so different! I also think that the theatrics involved in ‘proper’ blind testing are probably distracting enough to make the results mostly meaningless... you’re throwing too much ‘noise’ into the signal, as it were... I think it might be fun to be involved in a proper test like that, but I’d rather not use it to make my own decisions where I have money and happiness on the line.
 
Difficult to prove a negative. I've given some examples of where it would be straightforward to control against reasonably foreseeable issues. They may be non-issues, but until it is shown that they are, I remain sceptical that the blind test is the be-all and end-all it is presented as by some on here.
As far as I can see you have introduced an idea of blind test stress which may even be subliminal. But you have not provided any evidence that this phenomenon actually exists and influences blind test results. So far it looks like little more than an ad hoc argument to rescue the subjectivist position. In addition, you seem to be shifting the burden of proof by asking others to disprove the ad hoc argument.
 
And it would be such a potent weapon if those same people could show that their methodology was rigorous, effective and sensitive.

You need to distinguish between the mechanics (which can be awkward, especially the level matching) and the methodology, which is simple and logical; I am not sure how you can question the methodology, really.
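
On the mechanics: for what it's worth, below is the sort of level-matching sanity check I mean, sketched in Python with the standard library's wave module. It assumes you can capture each device's output as a 16-bit PCM WAV; the file names are invented for illustration.

[code]
# Rough level-matching check: compare the RMS level of two captured
# outputs. A mismatch of a few tenths of a dB is enough to bias a
# comparison, since slightly louder is usually preferred.
import wave, struct, math

def rms_dbfs(path):
    """RMS level of a 16-bit PCM WAV (mono or stereo), in dBFS."""
    with wave.open(path, "rb") as w:
        raw = w.readframes(w.getnframes())
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms / 32768)

# Hypothetical captures of the same passage through each DAC
print("Difference: %.2f dB" % (rms_dbfs("dac_a.wav") - rms_dbfs("dac_b.wav")))
[/code]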

There is always wriggle room. You are taking the line that maybe something mysterious happens to hearing acuity when the brain is robbed of the knowledge of what is playing, or under the stress of a test. That is not a methodology issue though, and can be analysed like anything else. As I noted, the fact that you can easily identify things like different CD masterings (when they really are different) is evidence that blind testing successfully shows differences that are in some cases pretty small. Frankly, it's a stretch to believe that a qualitative difference described as huge in sighted listening, but hard to identify in blind testing, is really a huge difference ("huge" is subjective, I realise).

Other common objections are that short tests don't work - actually they seem to work better in most cases, but you can do long-term blind tests. Or that the equipment is not of sufficient standard to show some difference - this one is hard to defeat, in that it is always possible that different hardware yields different results, but equally you have to make some choices when setting up a test, and you can repeat them with better hardware of your choice, subject to availability. Makers of fancy interconnects etc could of course set up blind tests to show the benefits of their products but mysteriously, in general they do not.

Please note that many blind tests DO show differences. The failures are the best known (e.g. Meyer-Moran), but there are plenty of tests that do have positive results; the technique would be useless if it always yielded "unable to discriminate" results.

Tim
 
Does it really matter if I’m fooling myself into believing I hear a difference?

As you say, not life or death. But yes, it matters in several ways. If we are paying a 40% premium for hi-res files, for example, but they sound the same as 16/44, that is potentially a lot of money wasted. More profoundly, false claims have IMO held back the audio industry. Most of us are not scientists, we just want to enjoy the music. But actual advances in audio engineering have got lost in the noise from people peddling solutions to things that are not a problem.

Tim
 
I suspect that when you're dealing with small changes like the differences between two DACs, auditory memory might not be accurate enough to tell the difference when they're listened to sequentially. I've found that the things that are easy to perceive with this kind of active listening are things like EQ changes, which often don't matter in the longer term, as the brain compensates for those anyway. Like a lot of skills, I suspect that the subconscious brain is much more powerful than the conscious in this realm (e.g. detecting aspects like "realism"), but that's hard to access in the short term for any kind of objective measurement.
 
I tend to agree that differences between digital products tend to get vanishingly small when you blind test them. On the other hand, audio isn't only about measured performance. We all have sighted preferences, and they matter. If I play a CD through setup A and prefer it to setup B, and that preference is largely because setup A has a really cool-looking DAC and setup B doesn't, well, who cares? It's a valid preference, albeit not necessarily one based entirely on sound.
I agree. However, I observe a core dogma in parts of the audiophile community which insists that preference is exclusively based on what is heard. So valid preferences of other sorts get expressed as sonic preference.

The "high end" marketing people know this. So other product design and differentiation issues get rationalized using fanciful stories to persuade such potential buyers about why these issues mean the product will actually sound better.

I too recognize all sorts of valid preference issues. So, IMHO, blind tests, useful in professional environments, are of limited value in consumer environments. But I do find the fanciful marketing really annoying. I am lucky to have a local dealer who listens to potential customers and rapidly switches off the fanciful and switches on the practical when we talk about products.
 
I ignore all the marketing guff nowadays. And I quite like listening to opinionated audio designers going on about their obsessions, even if I think quite often that the tiny details they worry about probably make bugger all difference to how anything sounds. It's all part of the fun of niche audio. True, people get suckered into paying lots of money for shiny things which probably don't deliver anything special, but we're all grown-ups, and if the shiny thing makes someone happy, fair enough. But I speak as someone who is seriously considering spending £2K on a pair of LS3/5as, so I'm probably off my rocker anyway.
 
It stopped being difficult to design a transparent dac for reasonable money decades ago.

There isn't really any need to blind test them any more, or sighted test them for that matter....

I just use whatever I have to hand and don't worry about it.

Robert, do you have a price bracket you work within? Thinking about build quality, stuff like that.
 
As far as I can see you have introduced an idea of blind test stress which may even be subliminal. But you have not provided any evidence that this phenomenon actually exists and influences blind test results. So far it looks like little more than an ad hoc argument to rescue the subjectivist position. In addition, you seem to be shifting the burden of proof by asking others to disprove the ad hoc argument.
What is being done is to postulate a scientifically untestable hypothesis. It is like claiming some DAC/cable/amplifier/... sounds bad because little green men from Mars zap me with a ray-gun every time I try to listen to them. Prove it isn't so. When science and the real world fail to support pseudo-scientific nonsense, this is perhaps one of the most effective ways to gain support from the "unintelligent and gullible" (to take what someone said earlier out of context).

The real giveaway, though, is the rejection of scientific knowledge on a subject. As soon as you see a person push it away, rather than trying to work it into their model of what is going on, there is no point in engaging in a rational, evidence-based debate. Agreement requires a shared basis of what is true to build arguments upon. As soon as a person rejects the scientific view in favour of a magical one, as is normal with "subjective audiophile" enthusiasts, there is no point seeking agreement based on scientific arguments. If you really feel a need to win such arguments, I suspect you will need to go after whatever is causing the person to hold scientifically invalid views in the first place. That is likely to get unpleasant and defensive rather quickly.

The value people get out of a strong interest in home audio hardware varies a lot. I value a high technical performance for a modest cost but this is clearly of little interest to those that value expensive DACs, expensive cables, valve amplifiers, modest 2 way speakers with racks of expensive hardware, booming rooms, poor seat and speaker placement, speakers with strong sound effects, retro gear, etc... Are my values better than theirs or just different? Does it matter if people gain value from fairy stories rather than what is true in a scientific sense?
 
Never participated in a blind test of any hifi equipment. I can usually tell the difference between components, but never say never, so I might give it a go later when the SMSL M400 arrives and see if I can spot it from my Project PB s2D.
 
As far as I can see you have introduced an idea of blind test stress which may even be subliminal. But you have not provided any evidence that this phenomenon actually exists and influences blind test results. So far it looks like little more than an ad hoc argument to rescue the subjectivist position. In addition, you seem to be shifting the burden of proof by asking others to disprove the ad hoc argument.
You persist in putting the cart before the horse. There are contributors here who regularly advocate the blind test as the best way to evaluate hifi. It is for those people to show why it is the best way, not for the rest of us to show why it isn't. I have asked a reasonable question - have you controlled your blind test to show that it is sufficiently sensitive? In other words, have you used your proposed methodology to successfully identify two items known to be different, where the difference is of the same general order as that being discussed? I can't recall anybody ever confirming this.

Bear in mind, it is usually the blind test advocates raising it on the subjectivist discussion, so the onus is on them.
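
To make "sufficiently sensitive" concrete: for a given number of trials you can calculate how often a listener who genuinely hears a difference would actually pass the test. A rough Python sketch, where the 70% hit rate is an assumed figure for illustration, not data:

[code]
# Sensitivity (statistical power) of an n-trial blind test: if a
# listener genuinely picks correctly 70% of the time, how often does
# the test reach p < 0.05 against guessing?
from math import comb

def p_value(correct, trials):
    # One-sided binomial p-value against chance (p = 0.5)
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

def power(trials, true_rate=0.7, alpha=0.05):
    # Smallest score that would count as a "pass"
    threshold = min(k for k in range(trials + 1) if p_value(k, trials) <= alpha)
    # Probability a true_rate listener scores at or above it
    return sum(comb(trials, k) * true_rate**k * (1 - true_rate)**(trials - k)
               for k in range(threshold, trials + 1))

for n in (10, 16, 25, 40):
    print("%d trials: power = %.2f" % (n, power(n)))
[/code]

If those assumptions are anywhere near right, a 16-trial test misses such a listener more often than it catches them, which is exactly why I'd want a known-different positive control run through the same protocol.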
 
[QUOTE="sideshowbob, post: 4078687, member: 34"...even if I think quite often that the tiny details they worry about probably make bugger all difference to how anything sounds. I[/QUOTE]

Arf! Had a look at your own system lately? ;-)

I'd have thought the accumulation of Junji's obsession over tiny details might add up to quite a lot?
 
I'd have thought the accumulation of Junji's obsession over tiny details might add up to quite a lot?

I love the stuff he makes, and I like him as a person. I think he knows how to make good sounding equipment, but I've built a few chip amps myself and, although I'm happy to go along with the 47 Labs mythos - because it's fun - I don't really buy into it. He is a solid engineer though, and his aesthetic is right up my street, which is why I've used his stuff for 15 years now.
 
As far as I can see you have introduced an idea of blind test stress which may even be subliminal. But you have not provided any evidence that this phenomenon actually exists and influences the blind test results.
Just to come back to this point a little more: the scientific method requires designing an experiment or test to prove or disprove a hypothesis. As part of that design, the tester is required to think about what confounding effects may be in play, and design them out or control for them. If I were designing a blind test, I’d want to think about whether any of the factors I mention might be in play, and control for them or otherwise show that they weren’t valid. If I didn’t do that basic due diligence, my experiments would lack sufficient rigour and the results might not be useful.

It’s the same here. Advocates for blind testing seem not to want to verify that it’s a suitable tool for the job, but instead just to assert that it is. This doesn’t help their credibility as advocates for a scientific approach.
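
For what it's worth, the "design it out" step needn't be elaborate. A toy Python sketch of one such control, randomising and pre-committing the presentation order so neither listener nor operator can anticipate which unit is playing (entirely hypothetical, just to illustrate the idea):

[code]
# One confounder designed out: a randomised, pre-committed order for
# which device secretly plays as "X" in each trial. Generated before
# the session; the key stays sealed until the answer sheet is in.
import random

def make_schedule(n_trials, seed=None):
    rng = random.Random(seed)
    return [rng.choice("AB") for _ in range(n_trials)]

print(make_schedule(16))  # e.g. ['B', 'A', 'A', 'B', ...]
[/code]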
 

