notevenclose
What are those pitfalls and flaws?
Aren't those what a laser reads on a CD?
What are those pitfalls and flaws?
Actually, I think the poster was suggesting, somewhat obliquely and not entirely free of snark, that measurement was the only reliable way to determine the height of the tree, and subjective impressions are variable and thus unreliable.
What are those pitfalls and flaws? So far you have not presented any evidence of those.
My point is that if, in half the cases, let's say, I'm having my opinion totally reversed ... then how much 'bias' can there be? I suspect others aren't so different! I also think that the theatrics involved in 'proper' blind testing are probably distracting enough to make the results mostly meaningless ... you're throwing too much 'noise' into the signal, as it were ... I think it might be fun to be involved in a proper test like that, but I'd rather not use it to make my own decisions where I have money and happiness on the line.

Blind listening removes bias (conscious or unconscious). This is very useful IME.
But hearing no difference doesn't mean there's no audible difference. Hearing no difference /sighted/ doesn't mean there is no audible difference either. Regardless of sighted or blind: just a different time of day (different ambient noise) could yield a different result; not driving two hours to a bake-off just prior could yield a different result; listening long-term could yield a different result; a less repetitious test with more listeners could yield a different result; different music ... etc. Just some examples. Yes, it's possible to put together a listening test that would have some scientific weight. The chance of us punters doing it is roughly zero, though. It's quite a task.
Blind is very good. It doesn't magically mean the test is absolutely conclusive and free of confounding variables. That's where some people go wrong.
But if I hear a major difference sighted and no (or tiny) difference blind, that indicates that the source of most of the heard difference, in the situation under test right now, is me. That can be very useful: it removes a whole class of false positives.
So in the home listening context you have to understand what blind listening gives you, and what it doesn't.
As far as I can see you have introduced an idea of blind test stress which may even be subliminal. But you have not provided any evidence that this phenomenon actually exists and influences the blind test results. So far it looks like little more than an ad hoc argument to rescue the subjectivist position. In addition, you seem to be shifting the burden of proof by asking others to disprove the ad hoc argument.

It's difficult to prove a negative. I've given some examples of where it would be straightforward to control against reasonably foreseeable issues. They may be non-issues, but until it is shown that they are, I remain sceptical that the blind test is the be-all and end-all it is presented as by some on here.
And it would be such a potent weapon if those same people could show that their methodology was rigorous, effective and sensitive.
Does it really matter if I’m fooling myself into believing I hear a difference?
I agree. However, I observe a core dogma in parts of the audiophile community which insists that preference is exclusively based on what is heard. So valid preferences of other sorts get expressed as sonic preferences.

I tend to agree that differences between digital products become vanishingly small when you blind test them. On the other hand, audio isn't only about measured performance. We all have sighted preferences, and they matter. If I play a CD through setup A and prefer it to setup B, and that preference is largely because setup A has a really cool-looking dac and setup B doesn't, well, who cares? It's a valid preference, albeit not necessarily one based entirely on sound.
It stopped being difficult to design a transparent dac for reasonable money decades ago.
There isn't really any need to blind test them any more, or sighted test them for that matter....
I just use whatever I have to hand and don't worry about it.
What is being done is to postulate a scientifically untestable hypothesis. It is like claiming some DAC/cable/amplifier/... sounds bad because little green men from Mars zap me with a ray-gun every time I try to listen to it. Prove it isn't so. When science and the real world fail to support pseudo-scientific nonsense, this is perhaps the most effective way to gain support from the "unintelligent and gullible" (to take what someone said earlier out of context).

As far as I can see you have introduced an idea of blind test stress which may even be subliminal. But you have not provided any evidence that this phenomenon actually exists and influences the blind test results. So far it looks like little more than an ad hoc argument to rescue the subjectivist position. In addition, you seem to be shifting the burden of proof by asking others to disprove the ad hoc argument.
You persist in putting the cart before the horse. There are contributors here who regularly advocate the blind test as the best way to evaluate hifi. It is for those people to show why it is the best way, not for the rest of us to show why it isn't. I have asked a reasonable question: have you controlled your blind test to show that it is sufficiently sensitive? In other words, have you used your proposed methodology to successfully identify two items known to be different, where the difference is of the same general order as that being discussed? I can't recall anybody ever confirming this.

As far as I can see you have introduced an idea of blind test stress which may even be subliminal. But you have not provided any evidence that this phenomenon actually exists and influences the blind test results. So far it looks like little more than an ad hoc argument to rescue the subjectivist position. In addition, you seem to be shifting the burden of proof by asking others to disprove the ad hoc argument.
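For what it's worth, the sensitivity control being asked for is easy to quantify. A minimal sketch (my own illustration, not any poster's actual protocol) of the binomial arithmetic behind an ABX-style result, where a guesser gets each trial right with probability 0.5:

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the probability of getting at least
    `correct` answers right out of `trials` ABX trials by pure guessing
    (chance of success per trial = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# A positive control: before testing the items under discussion, run the
# same protocol on two items *known* to differ audibly (e.g. a deliberate
# small level difference). If the listener can't reach significance on the
# known difference, the test is too insensitive to conclude anything.
print(abx_p_value(12, 16))  # 12/16 correct -> p ~ 0.038, unlikely to be guessing
print(abx_p_value(6, 10))   # 6/10 correct  -> p ~ 0.377, consistent with guessing
```

The point of the sketch is that a null result from a short test carries little weight: with only 10 trials, even 6 correct is entirely compatible with chance, so "heard no difference" and "test too blunt to detect it" look identical.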
It is like claiming some DAC/cable/amplifier/... sounds bad because little green men from Mars zap me with a ray-gun every time I try to listen to it.
I'd have thought the accumulation of Junji's obsession over tiny details might add up to quite a lot?
Just to come back to this point a little more: the scientific method requires designing an experiment, or a test, to prove or disprove a hypothesis. As part of that design, the tester is required to think about what confounding effects may be in play, and design them out or control for them. If I were designing a blind test, I'd want to think about whether any of the factors I mention might apply, and control for them or otherwise show that they weren't valid. If I didn't do that basic due diligence, my experiments would lack sufficient rigour and the results might not be useful.

As far as I can see you have introduced an idea of blind test stress which may even be subliminal. But you have not provided any evidence that this phenomenon actually exists and influences the blind test results.