Anyone tried blind testing DACs?

Blind listening removes bias (conscious or unconscious). This is very useful IME.

But, hearing no difference doesn't mean there's no audible difference. Hearing no difference /sighted/ doesn't mean there is no audible difference either. Regardless of sighted or blind: just a different time of day (different ambient noise) could yield a different result; not having driven two hours to a bake-off just beforehand could yield a different result; listening long-term could yield a different result; a less repetitious test with more listeners could yield a different result; different music ... etc. Just some examples. Yes, it's possible to put together a listening test that would have some scientific weight. The chance of us punters doing it is roughly zero though. It's quite a task.

Blind is very good. It doesn't magically mean the test is absolutely conclusive and free of confounding variables. That's where some people go wrong.

But if I hear a major difference sighted and no/tiny difference blind, that indicates that the source of most of the heard difference, in the situation under test right now, is me. That can be very useful. It removes a whole class of false positives.

So in the home listening context you have to understand what blind listening gives you, and what it doesn't.
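
To put a rough number on "some scientific weight": a forced-choice blind run such as ABX is normally scored against the null hypothesis that the listener is simply guessing, using a one-sided binomial test. A minimal sketch in Python (the trial counts below are invented purely for illustration):

from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the probability of getting at least
    `correct` answers right out of `trials` ABX trials by guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical runs, not real test data:
print(abx_p_value(12, 16))  # ~0.038 -- conventionally taken as significant
print(abx_p_value(9, 16))   # ~0.40  -- consistent with guessing

Note that the asymmetry described above still holds: a large p-value only means this particular run failed to show a difference, not that no audible difference exists.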
 
You have to live with something a while to get its proper character or nature, but I think of A/B testing as akin to speed-dating to whittle out the non-compatibles. Enhanced dry treble? The same as initially finding out it has a pink Range Rover and 25 pet cats. Just no (well, personally). Finding that out unseen, if you can, would potentially remove other factors counter-influencing that finding, which might be a blessing in the long run; but you should hopefully be able to trust your own judgement more than that.

(ps no, I've never done speed-dating!)
 
Hyperion, the tallest tree in the world, required many photos to capture. The composite photo also has four people in it.

[image: tree5.jpg]
 
Listening to the Beekhuyzen video on hi-res got me thinking. His line is that the DAC makes a big difference, and I realised that while I have done a lot of blind testing of different formats (different resolutions, PCM vs DSD) at Scalford and elsewhere, as well as the notorious Naim vs Yamaha amplifier comparison, I have never tried blind-testing DACs. Perhaps quite an effort to set up, but it would be interesting. Say, smartphone vs cheap DAC vs high-end. Do many such tests exist?

Still impressed with my Sony NW-A105 portable player - at £280 it’s not even high end by Sony standards, but it does seem to sound better than a smartphone, and if it does, it’s probably the DAC that makes the difference.

Tim

https://pinkfishmedia.net/forum/threads/dac-bake-off-south-08-02-14.150222/

and then:

https://pinkfishmedia.net/forum/threads/dac-bake-off-ii-electric-boogaloo-south-—-june-21-2014.151815/

And then a couple more that I cannot locate easily.

I’ve changed username from Vital to Whaleblue since then, in case the quoted postings cause confusion. That's assuming you have any appetite to read thousands of posts.

tl;dr - It seemed as though folk couldn’t easily differentiate a Sonos Connect from high-end DACs in blind tests. There were/are no night-and-day (immediately obvious) differences. However, we largely ended up agreeing (or perhaps more accurately, I personally concluded) that blind tests are only part of the story. Long-term listening can seem to reveal preferences.

Essentially what @darrenyeats said on page one of this thread.
 
I'm a fan of blind testing, especially when building/modding stuff. Ultimately, though, I have to live with the item as well, so how it looks and functions plays a part.
 
I'm a fan of blind testing, especially when building/modding stuff. Ultimately, though, I have to live with the item as well, so how it looks and functions plays a part.

I agree 100%. One reason I like Naim is for the iconic looks. I'm also partial to burnished aluminum cases, silky smooth weighted volume controls, beautifully crafted wooden speaker cabinets etc. To my mind paying for such things is normal and rational!

Tim
 
I think the sense of this is that it is still a tree; whether we perceive it to be 30 or 40m high, its measurements make no difference, and we enjoy it for what it is?
Actually, I think the poster was suggesting, somewhat obliquely and not entirely free of snark, that measurement was the only reliable way to determine the height of the tree, and subjective impressions are variable and thus unreliable.
Blind listening removes bias (conscious or unconscious). This is very useful IME.

But, hearing no difference doesn't mean there's no audible difference. Hearing no difference /sighted/ doesn't mean there is no audible difference either. Regardless of sighted or blind: just a different time of day (different ambient noise) could yield a different result; not having driven two hours to a bake-off just beforehand could yield a different result; listening long-term could yield a different result; a less repetitious test with more listeners could yield a different result; different music ... etc. Just some examples. Yes, it's possible to put together a listening test that would have some scientific weight. The chance of us punters doing it is roughly zero though. It's quite a task.

Blind is very good. It doesn't magically mean the test is absolutely conclusive and free of confounding variables. That's where some people go wrong.

But if I hear a major difference sighted and no/tiny difference blind, that indicates that the source of most of the heard difference, in the situation under test right now, is me. That can be very useful. It removes a whole class of false positives.


So in the home listening context you have to understand what blind listening gives you, and what it doesn't.

I broadly agree with most of this, but not the conclusion underlined in bold (that if the difference largely vanishes blind, the source of most of it is me). That isn't the only possibility; a second possibility is that the blind test itself has introduced factors which have effectively dulled its sensitivity. I'll probably get more snark for this, but one such factor is possible (and possibly subliminal) added stress. There's a shift in what is under test; it's quite subtle, but it moves away from the device and towards the listener's ability to discriminate. A second possible factor is that you also listen differently under blind conditions. If, when sighted, you listen to the music and to whether you enjoy it more with device A or B, that is one type of listening. Blind test listening, however, has people listening for differences, which is a quite different sort of listening.

So the conclusion you draw needs to be highly caveated. Which, to my mind, renders a blind test rather less helpful.
 
That isn't the only possibility; a second possibility is that the blind test itself has introduced factors which have effectively dulled its sensitivity. I'll probably get more snark for this, but one such factor is possible (and possibly subliminal) added stress. There's a shift in what is under test; it's quite subtle, but it moves away from the device and towards the listener's ability to discriminate. A second possible factor is that you also listen differently under blind conditions. If, when sighted, you listen to the music and to whether you enjoy it more with device A or B, that is one type of listening. Blind test listening, however, has people listening for differences, which is a quite different sort of listening.

You can test this quite easily, by seeing if blind testing works on audio that has more obvious differences, such as different EQ. In general I don't think people have had any problem discriminating in such cases. I've been able to distinguish different CD masterings, for example, with Foobar ABX, without any problem. So you then have to explain why blind testing works in some cases but not (in your theory) in others.

Tim
 
one such factor is possible (and possibly subliminal) added stress. There's a shift in what is under test; it's quite subtle, but it moves away from the device and towards the listener's ability to discriminate.

This is unquestionably true (indeed, I've been there myself several times)... and little is more pleasurable in hi-fi than watching the self-appointed guru squirm when the "obvious" suddenly becomes indistinguishable.
 
You can test this quite easily, by seeing if blind testing works on audio that has more obvious differences, such as different EQ. In general I don't think people have had any problem discriminating in such cases. I've been able to distinguish different CD masterings, for example, with Foobar ABX, without any problem. So you then have to explain why blind testing works in some cases but not (in your theory) in others.

Tim
Yes, completely agree. What it requires is some form of ‘control’ to determine the sensitivity of the blind test. Very rarely done, IME, but the proponents put it up as a gold standard regardless.
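
One way to build such a control in, purely as a sketch (nothing like this was done at the bake-offs; the file names and the soundfile library are assumptions for illustration): alongside the devices under test, include a version of the same programme with a small but known alteration, say a 1 dB level cut chosen to sit near the edge of audibility, and check that listeners can still pick it out under the same blind protocol.

# Sketch: generate a known, deliberately small "positive control" for a blind test.
# Assumes the soundfile package (pip install soundfile); file names are invented.
import soundfile as sf

data, rate = sf.read("reference_track.wav")   # float samples, typically in [-1, 1]
gain_db = -1.0                                 # the known alteration: a 1 dB level cut
data_control = data * 10 ** (gain_db / 20)     # convert dB to a linear gain factor

sf.write("control_track_minus1dB.wav", data_control, rate)

If listeners fail on the control as well as on the devices, the sensible conclusion is that the test, not necessarily the devices, lacked resolution.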
 
That isn't the only possibility; a second possibility is that the blind test itself has introduced factors which have effectively dulled its sensitivity. I'll probably get more snark for this, but one such factor is possible (and possibly subliminal) added stress.
This of course is the classic objection from the subjectivist fraternity. Yet there is practically zero evidence of this happening.
 

