Where is all the debate on constant-current loading vs. bootstrapping? Or zero negative feedback...
But there is debate. Who says there isn't? In all your gazillion years of experience you must be aware that there are electronic engineers who maintain that all amplifiers sound the same. I have seen and debated these people, so I can't accept that you are unaware of them. Like you, they have technical qualifications, but they disagree with you on whether or not these aspects of design make any difference. So who's right? The engineer with the most tools and the best lab coat? You're all electronic engineers, but some of you have to be wrong, right? And blind testing 'proves' you are correct? Hang on though, blind testing also 'proves' that Mr All Amps Sound The Same is right! So which way do you want it? You can't have your cake and eat it.
Many early engineers held that a correctly designed valve amp and a correctly designed transistor amp would sound the same. They were wrong, but why did they think that? Because they assumed that what they could measure at the time was all there was to know. Their knowledge was lacking, and it still is. If the theory behind Hi-Fi design were as well understood as they suggested, there would be no need to test anything: you could build your kit straight from the plans, knowing exactly what it would sound like. But of course Hi-Fi manufacturers don't do that. They build prototypes, listen to them and change things. Why? Because the paper specifications cannot tell you everything there is to know about how a piece of Hi-Fi equipment will perform.
Ah, but the blind test! The gold standard, eh? The silver bullet to end all argument and silence these foolish audiophiles. Let's talk about that, then. On the face of it blind tests look unquestionable, but the results often sit ill with many of us, and there is a reason for that. If you start to pick apart the actual mechanics of ABX blind testing you'll find a house of cards which is unreliable at best and fraudulent at worst.
The word 'test' certainly sounds scientific, but is it? Firstly, it is not an objective test, as proponents claim; it is still a subjective test. The subjects under 'test' are the same human beings who claim to hear differences under normal listening conditions. The assertion is that human beings who get it wrong normally will get it right under ABX test conditions. Does this make any sense? It is disingenuous to suggest that because the listeners have been incorporated into some specific procedure you have removed all possibility of human error. Sighted tests are slammed because the conditions of the test mean the listener is likely to be biased towards hearing differences. Fair enough. Yet surely, in the methodology of the blind test, the listener could equally be misled into erroneously concluding there are no differences? If people can get it wrong one way, surely they are equally fallible the other? I have yet to see anyone who supports ABX testing admit this possibility, despite it being perfectly reasonable and logical.
The only examples of 'scientific' ABX tests I've seen details of have been ridiculously simplistic: a thirty-second blip of one piece of music. The testing methodology more or less dictates that short sections of music are used. Yet the ABX zealots claim this thirty seconds carries more weight than months or years of real-world use covering all types of music and listening volumes. Thirty seconds is all it takes for them to be correct about their conclusions, yet we are deluded every day of our whole lives! What is illogical in saying that more exposure to a subject allows more information to be gathered?
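For what it's worth, the standard way an ABX run is scored is a one-sided binomial test against pure guessing: how likely is a score at least this good if the listener is flipping a coin? A minimal sketch (the 16-trial count and the p < 0.05 threshold are illustrative assumptions on my part, not figures from any specific published test):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the probability of scoring at least
    `correct` out of `trials` by pure guessing (chance = 1/2 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Minimum score needed to reject "just guessing" at p < 0.05 in a 16-trial run.
trials = 16
threshold = next(k for k in range(trials + 1) if abx_p_value(k, trials) < 0.05)
print(threshold, round(abx_p_value(threshold, trials), 4))  # prints: 12 0.0384
```

Twelve or more correct out of sixteen rejects guessing; note that the arithmetic says nothing about why a listener who hears a difference at home might only manage eleven under test conditions.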
Secondly, we don't even listen to things in the same way under test conditions. This is a scientific fact: brain scans show that different parts of the brain are at work depending on whether we're concentrating, trying to hear differences, or just relaxing. My experience certainly supports that. I can recall countless instances where I've picked up nuances or details in music I thought I knew very well during relaxed, casual listening, maybe even background listening. If the methodology of the short blind test were correct this would not happen, as all relevant information would be assimilated in the first listen or two. If it is possible to miss significant musical information on the first, second or tenth listen, then surely it must be possible to miss slight tonal, resolution or dynamic differences in a thirty-second test?
And the differences often are slight. How many times have you demonstrated upgrades or changes in your Hi-Fi system to family members or friends uninterested in Hi-Fi, only for them to say they could not tell the difference? Everything is relative, and in Hi-Fi we often deal with changes that are relatively small. We stack up these small changes to create larger shifts in performance. For most 'normal' people, though, these small differences are not significant, especially in the context of the overall level of performance of the system as a whole.
So, moving on to the methodology of the test.
Let's say I set up a large TV screen and flashed a picture on to the screen for thirty seconds, say a street scene. An open market in a town square, lots of people milling around, maybe some birds passing in the clear blue sky, could be anything but you get the idea. Thirty seconds, screen goes blank.
Next, I flash up another image, also for thirty seconds, and it's the same scene. Same birds, same grave-dodgers feeding them, yada yada, and I ask you what differences you've seen. You reply, 'None. They were the same picture.'
In the short time you had to look at the image you had just enough time to get the big picture. On the macro level they look identical but you did not have the time to see that there was different fruit on one of the stalls, the old guy on the bench had a hat on or any number of variations between the two images. One image could have been slightly sharper than the other but in thirty seconds, could you be sure of that?
And here's the kicker. Let's say that in the second image you think you spot a guy standing by a lamppost who was not in the first image. You cannot be sure that he wasn't in the first image, or that you just didn't notice him!
But before you can get your head around that, it's gone and I've flashed the image up again. Is it the first one or the second one? I'm not telling you, and in your mind the images are now overlapped and blurred together. So are they the same or not? Well, yes, on a macro level, but beyond that you can't be sure. Too late: we move on to another picture, the whole confusing cycle starts again, and before long you're losing all perspective and probably the will to live!
Isn't it interesting the way that things which on the face of it might be considered gross shortcomings in the test methodology are twisted around to confer advantage instead? Logic would suggest that the longer you have to examine a subject, the more information you are likely to gather and the more accurate your conclusion will be. Instead, test participants are typically snow-blinded with snippets of sound repeated over and over again. I've done blind tests; this is exactly what happens.
Is there evidence that blind tests do not work? Yeah, there is. Here is a famous example of the failure of blind testing, explained by Robert Harley in The Absolute Sound, Issue 183:
"Every few years, the results of some blind listening test are announced that purportedly “prove” an absurd conclusion. These tests, ironically, say more about the flaws inherent in blind listening tests than about the phenomena in question.
The latest in this long history is a double-blind test that, the authors conclude, demonstrates that 44.1kHz/16-bit digital audio is indistinguishable from high-resolution digital. Note the word “indistinguishable.” The authors aren’t saying that high-res digital might sound a little different from Red Book CD but is no better. Or that high-res digital is only slightly better and not worth the additional cost. Rather, they reached the rather startling conclusion that CD-quality audio sounds exactly the same as 96kHz/24-bit PCM and DSD, the encoding scheme used in SACD. That is, under double-blind test conditions, 60 expert listeners over 554 trials couldn’t hear any differences between CD, SACD, and 96/24. The study was published in the September, 2007 Journal of the Audio Engineering Society."
This one's a cracker!
"This test was conducted by Swedish Radio (analogous to the BBC) to decide whether one of the low-bit-rate codecs under consideration by the European Broadcast Union was good enough to replace FM broadcasting in Europe.
Swedish Radio developed an elaborate listening methodology called “double-blind, triple-stimulus, hidden-reference.” A “subject” (listener) would hear three “objects” (musical presentations); presentation A was always the unprocessed signal, with the listener required to identify if presentation B or C had been processed through the codec.
The test involved 60 “expert” listeners spanning 20,000 evaluations over a period of two years. Swedish Radio announced in 1991 that it had narrowed the field to two codecs, and that “both codecs have now reached a level of performance where they fulfill the EBU requirements for a distribution codec.” In other words, Swedish Radio said the codec was good enough to replace analog FM broadcasts in Europe. This decision was based on data gathered during the 20,000 “double-blind, triple-stimulus, hidden-reference” listening trials. (The listening-test methodology and statistical analysis are documented in detail in “Subjective Assessments on Low Bit-Rate Audio Codecs,” by C. Grewin and T. Rydén, published in the proceedings of the 10th International Audio Engineering Society Conference, “Images of Audio.”)
After announcing its decision, Swedish Radio sent a tape of music processed by the selected codec to the late Bart Locanthi, an acknowledged expert in digital audio and chairman of an ad hoc committee formed to independently evaluate low-bit rate codecs. Using the same non-blind observational-listening techniques that audiophiles routinely use to evaluate sound quality, Locanthi instantly identified an artifact of the codec. After Locanthi informed Swedish Radio of the artifact (an idle tone at 1.5kHz), listeners at Swedish Radio also instantly heard the distortion. (Locanthi’s account of the episode is documented in an audio recording played at workshop on low-bit-rate codecs at the 91st AES convention.)
How is it possible that a single listener, using non-blind observational listening techniques, was able to discover—in less than ten minutes—a distortion that escaped the scrutiny of 60 expert listeners, 20,000 trials conducted over a two-year period, and elaborate “double-blind, triple-stimulus, hidden-reference” methodology, and sophisticated statistical analysis?"
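An aside from me, not from Harley's piece: the "double-blind, triple-stimulus, hidden-reference" procedure described above is simple to sketch in code, and the sketch makes the statistics concrete. A listener who cannot detect the codec scores at chance (about 50%) no matter how many trials are run; one who can hear the artifact scores perfectly. The names and model here are my own hypothetical simplification:

```python
import random

def triple_stimulus_trial(can_hear: bool, rng: random.Random) -> bool:
    """One hypothetical 'triple-stimulus, hidden-reference' trial:
    A is always the unprocessed reference, and the codec-processed
    signal is hidden at random in B or C. The listener must say which.
    Returns True if the listener picks the processed slot."""
    processed_slot = rng.choice(["B", "C"])
    if can_hear:
        answer = processed_slot          # the artifact is audible to them
    else:
        answer = rng.choice(["B", "C"])  # otherwise it's a coin flip
    return answer == processed_slot

rng = random.Random(0)
n = 20_000  # roughly the trial count Swedish Radio reported
deaf_score = sum(triple_stimulus_trial(False, rng) for _ in range(n)) / n
keen_score = sum(triple_stimulus_trial(True, rng) for _ in range(n)) / n
print(deaf_score, keen_score)  # deaf_score lands near 0.5; keen_score is 1.0
```

The point Harley's example makes is that the statistics only measure what the protocol lets listeners notice: an artifact nobody is primed to listen for, like the 1.5kHz idle tone, contributes nothing to either score.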
Another notable example is the blind listening test conducted by Stereo Review that concluded that a pair of Mark Levinson monoblocks, an output-transformerless tubed amplifier, and a $220 Pioneer receiver were all sonically identical. (“Do All Amplifiers Sound the Same?” published in the January, 1987 issue.)
I contend that such results are an indictment of blind listening tests in general because of the patently absurd conclusions to which they lead. Anyway, I need to go away from the computer and actually do something! ;0)