Power Cables. Are they overhyped? Part III

I've answered this many times, but happy to go again:

Before the results of a DBT can be taken as valid, you have to show that the test methodology is sufficiently sensitive to discern the small differences you are looking for. So some sort of control is needed: two items known to be subjectively (and measurably) different should first be compared under the chosen test methodology (which, by the way, is for the testers to determine), and only if the test results bear out the known outcome should the test proper go ahead. If the results of the control don't reliably identify the differences (i.e. to the required degree of statistical significance), then adjust the methodology and repeat until satisfied.
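The "required degree of statistical significance" for such a control can be pinned down with an exact binomial test. A minimal sketch (the p < 0.05 threshold and the 14-out-of-16 score are illustrative assumptions, not figures from the post):

```python
from math import comb

def binomial_p_value(correct, trials, chance=0.5):
    """Exact one-sided p-value: the probability of scoring at least
    `correct` out of `trials` purely by guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) * chance ** trials

# Example control run: 14 correct identifications in 16 trials.
p = binomial_p_value(14, 16)   # ~0.002, comfortably under 0.05
```

If the control run clears the chosen threshold, the methodology has been shown sensitive enough to resolve that known difference; if not, adjust and repeat, as described above.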

Not meaning to be captious, but how could you know they were subjectively different without testing them blind?

Another issue I have with this is that it allows a rather easy get out for critics of DBTs. Let's say our control involves a magnitude of audible difference between the two components of x. We find that we are able reliably to distinguish x in a DBT. All good so far: we have established that DBTs are able to distinguish differences of magnitude x.

Then say I propose another DBT of two different components. This time we fail to distinguish the components in a DBT. The critic of DBTs can now say: 'this test involves a difference of magnitude x-1, but we haven't established that DBTs can differentiate between differences of x-1, therefore I cannot accept the results of the test'.

You're left with the problem that any control of this type could be dismissed as being too easy.
 
The only thing to add to Jim's post above, and the point where crusaders like James Randi always fall down, is that the subject of the test should be components where the user claims to be able to hear a difference, i.e. not pre-chosen to have similar electrical properties or whatever. With that caveat, it is exactly what I took part in with friends in the distant past, and I found I could tell with 100% accuracy the difference between *certain* interconnects, speaker cables and, yes, even mains cables. Some do genuinely sound the same though!
 
I can never really follow that logic. You ensure that two components are level matched and switch between the two; if you hear a difference, you can then determine which you prefer. Not knowing which of the components you are listening to simply negates bias.
Keith

This is an interesting idea. Have you done it, and was it easy to set up?
I would think it would require a source (CD?) and two identical interconnects from a switching box to each amp. You would have to establish that the amps sounded the same with identical leads, of course.
Arranging two of the same amps could be a bit of an issue for our group, and then there are the leads to the speakers to sort.
It's certainly not in the too-hard basket.

Apart from one hilarious blind testing evening many years ago, we have never really bothered with anything this formal. We just plug a lead or item of equipment in, listen for a while, then give it a thumbs up or down, more or less. Someone might take a cable or two home and try them, then report back along the lines of "it sounded good/crap in my system, and I think it could go well/poorly in yours".
As our focus is centered mainly on whatever different music is on offer at any given time, most of the "testing" we do is as informal as that, but it enables us to get an idea of how different equipment influences the sound through the speakers we use.
The fact that some of us have been doing this since the mid 80s, and have a lot of historical knowledge of how things sound and have changed over the years, helps.

The majority of my friends and I see hi-fi as a means to an end, that is, the enjoyment of recorded music in our homes, and when something like an inexpensive cable upgrade comes along and it works, we are pretty pleased about it.

Cheers,

Mr ED.
 
Quoting Mr ED: "This is an interesting idea. Have you done it and was it easy to set up? ..."
Yes I have and continue to do so, here is the 'how-to' link again.
https://www.puriteaudio.co.uk/single-post/2017/02/08/Level-matching-for-fun
IME perceived 'sighted' differences tend to disappear when conducted 'unsighted'.
Keith
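For what level matching involves in practice: the difference between two measured voltages, in dB, is 20*log10(Va/Vb). A minimal sketch of the arithmetic (the voltages and the 0.1 dB target here are illustrative assumptions, not figures from Keith's link):

```python
from math import log10

def level_difference_db(v_a, v_b):
    """Level difference between two measured voltages, in dB."""
    return 20 * log10(v_a / v_b)

# Example: a 1 kHz tone measures 2.00 V at the speaker terminals with
# one item in circuit and 1.95 V with the other.
diff = level_difference_db(2.00, 1.95)   # ~0.22 dB
```

Even ~0.2 dB is enough to bias a comparison towards "louder sounds better", which is why levels are commonly trimmed to within about 0.1 dB before switching between components.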
 
Quoting Mr ED: "This is an interesting idea. Have you done it and was it easy to set up? ..."

Seeing as it's the mains lead, surely the easiest way is to have ONE system and switch the mains lead, rather than TWO systems which you turn on and off?!

On another note, I accompanied a serious (i.e. nutter*) hi-fi enthusiast friend to a demo of (very) expensive cartridges, and I thought the whole thing was riddled with issues that made comparison very difficult. It needs another thread though.

*this is not a joke :)
 
Quoting the post above: "Seeing as it's the mains lead, surely the easiest way is to have ONE system and switch the mains lead ... I accompanied a serious (i.e. nutter) hi-fi enthusiast friend to a demo of (very) expensive cartridges ..."

1. That's the way we have done things over the years, and as an aside, one episode of group blind testing is enough for one lifetime. :)

2. Nutters go with the territory and are part of hi-fi's great tapestry; they can, however, be a wonderful source of knowledge.

3. I agree, cartridge demos are not an easy thing to pull off.

Mr ED
 
I recall participating in an infamous mains cable test. It was carefully set up: three cables were supplied to each member of the panel, one a kettle lead, one a standard manufacturer's supply, and one something much fancier (in cost). All were disguised with copious amounts of black tape (ish... it didn't matter; none of us knew which was which), and we listened for differences. Let's forget the details for now. In the end, a statistically insignificant group could regularly distinguish between the cables over repeated trials, and the majority could not. Science therefore says that there was no difference at all. It's apparently a black and white thing: it fails the stats boundary and therefore it doesn't exist. Leaving three of us puzzled and frustrated.
Two things I learnt from that. You don't have an opinion in science unless you are BIG, and cables, all cables, do sound different, BUT those differences are so minute as to be near impossible to distinguish. My bottom line was that mains cable changes would make about 1/500th of the difference made by swapping transducers. So, if you have the ears to hear it, and your speakers cost £1000, then you MIGHT find a £5 cable changes things for the better and is reasonable value to boot.
Me? I use a kettle lead and am very happy with it because, in my test, it was always the sound I preferred :)

PS. My firm opinion is that very, very subtle differences in sound are impossible to distinguish on complex music passages, especially if the 'cable change' takes more than a few seconds (30 at most). It's vital to compare just a single instrument, something with timbre and complexity (a viola or cello is ideal), playing about one 30-second piece. THEN you stand a chance of hearing changes.
 
Quoting the earlier post: "Not meaning to be captious, but how could you know they were subjectively different without testing them blind? ... You're left with the problem that any control of this type could be dismissed as being too easy."
Given the evidence that rapid switching tests are more sensitive than long-term home listening, and the absence (AFAIK) of any evidence of long-term blah being more sensitive, one might conclude that this was a non-issue.

But having nothing to say is not the same as saying nothing.
 
Quoting the mains cable test post above: "... In the end, a statistically insignificant group could regularly distinguish between the cables, over repeated trials, and the majority could not. ... Leaving three of us puzzled and frustrated."
I'm not sure this is strange at all. The small group can repeat the test until their results become significant; it's not rocket science. Out of a large group you expect a few to "succeed". If you want to find out whether they are individually special you have to run further tests on them. It may be necessary to do this over time to avoid fatigue.

I hope that helps; if not, there's a good chapter on this in Bad Science.

Sorry if I have misunderstood.
 
I will add another important factor in our group, we are very, very familiar with each others systems, some of which, apart from housekeeping, have not changed for years.
Introduce something new and it is immediately noticeable, we don't need any blind testing to hear the difference, be it an improvement or backwards step.

We also enjoy giving each other "helpful" advice, be it on the music being played or a change in the system. ;)

Mr ED
 
I'm not sure they are. Neil's comments above, and those that rail against blind testing, contradict the research.

In Toole's books he talks about the fact that there was no consistency in speaker testing results until they started performing the tests blind.

I don't think speakers are any different from any other piece of hi-fi kit just because of their physical variation. Remember it's not just the physical attributes; it's also things like the price, the manufacturer's kudos and reputation, and of course the individual's expectation bias.

You know: I paid 500 quid for this cable full of magic beans that allegedly filters out all known interference and improves the dynamic current capability of my mains supply, so it must be making a positive difference.
I agree, some will be fooled by marketing bull, but I think you underestimate the hi-fi buyer; it is a minority in my long experience. I have met a few people from here, plus many from other forum sites over the years, and they are anything but fools who fall for such nonsense. As I pointed out earlier, if a cable costing £500 improves matters but can be made for a tenner, then it's the marketing and salesman that's the con, not the cable.
 
Quoting Mr ED: "I agree, cartridge demos are not an easy thing to pull off."

I did my own cartridge 'shootout' on my own system by digitising the same piece of music with three different cartridges that I happened to own. I then put the three files into Logic Pro, level-balanced them, and aligned them so I could switch between them instantly and seamlessly. It was interesting because the differences between them were extremely easy to tell using this method (although not a 100% true shootout due to digitising them, of course).
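The poster did the align-and-balance step in Logic Pro; the same idea can be sketched in a few lines of numpy (an editor's illustration of the technique, not the poster's actual workflow):

```python
import numpy as np

def align_and_match(ref, other):
    """Time-align `other` to `ref` via cross-correlation, then scale it
    so both clips have the same RMS level. (np.roll wraps around, which
    is acceptable for a sketch where the clip edges are silence.)"""
    lag = np.argmax(np.correlate(other, ref, mode="full")) - (len(ref) - 1)
    aligned = np.roll(other, -lag)
    gain = np.sqrt(np.mean(ref ** 2) / np.mean(aligned ** 2))
    return aligned * gain
```

With the clips aligned and level-matched like this, instant switching between captures compares only the cartridges' character, not loudness or timing offsets.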

I've never tried a cable shootout / blind test. I imagine it would be very tricky due to the time between setups.
 
Quoting the earlier post: "... the subject of the test should be components where the user claims to be able to hear a difference, i.e. not pre-chosen to have similar electrical properties or whatever."

The caveat I should add to that is with regard to cases where:

The claimer says they can hear a difference but ascribes it to a reason that is *not* a well-known source of audible differences.

e.g. where plain old cable resistance, etc, *will* produce a change that should easily be audible, but the claim is that the difference is due to some other fancy effect.

To test that, some way to remove or distinguish the 'known and uncontroversial' mechanism from the claimed 'amazing' one needs to be employed.

Typically that means requiring something like speaker cables to have standard measurable properties, like series resistance, that allow us to expect they'll have the same effect on frequency response *in the test system* to some agreed level, e.g. +/- 0.1 dB across the 20 Hz - 20 kHz range.

The point here is that, so far as I know, no scientist or engineer is claiming that cables *can't* 'sound different'. Just that some claimed mechanisms seem doubtful, whilst others are to be expected and are well understood. We need to sort the wheat from the chaff, and thus remove the uncontentious mechanisms where something other is being claimed.
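The "plain old cable resistance" effect above is easy to estimate: a series resistance R feeding a load Z drops the level by 20*log10(Z/(Z+R)). A minimal sketch (the resistance and load figures are illustrative; a real speaker's impedance also varies with frequency, which is what turns this loss into a frequency-response change):

```python
from math import log10

def insertion_loss_db(r_cable, z_load):
    """Level drop caused by cable series resistance feeding a
    (simplified, purely resistive) speaker load."""
    return 20 * log10(z_load / (z_load + r_cable))

# Roughly 0.04 ohm of loop resistance (a few metres of ordinary thick
# copper) into a nominal 8 ohm load:
print(round(insertion_loss_db(0.04, 8.0), 3))   # about -0.043 dB
# 0.5 ohm of thin cable into a 4 ohm load:
print(round(insertion_loss_db(0.5, 4.0), 2))    # about -1.02 dB
```

The first case sits comfortably inside a +/- 0.1 dB matching window; the second does not, which is exactly the kind of uncontroversial mechanism a test should control for before crediting anything fancier.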
 
Quoting the reply above: "Out of a large group you expect a few to 'succeed'. If you want to find out whether they are individually special you have to have further tests on them ... there's a good chapter on this in Bad Science."


Yes. The point is that purely by chance if you run enough tests on enough people *some* of them will give an 'all heads' outcome. So the fact that a few did this isn't significant in statistical terms.

Get enough people to toss a coin ten times in a row, and you'll get some who get 10 heads and some who get 10 tails. What then matters is if this occurs significantly (in a statistical sense) more often than you'd expect by random chance.
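The coin-toss point can be made concrete. Assuming independent guessers at 50% per trial, the probability that at least one of N people scores a perfect 10/10 by luck alone is 1 - (1 - 0.5^10)^N; a quick sketch:

```python
def p_someone_aces(people, trials=10, chance=0.5):
    """Probability that at least one of `people` pure guessers gets
    all `trials` forced-choice answers right by luck alone."""
    p_one = chance ** trials
    return 1 - (1 - p_one) ** people

print(round(p_someone_aces(1), 5))     # ~0.00098: rare for one person
print(round(p_someone_aces(200), 2))   # ~0.18: unremarkable in a big panel
```

So one perfect scorer in a panel of two hundred is not, on its own, evidence of anything; only repeat testing of that individual can separate luck from ability.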

This is why trials may need to be done a number of times, etc.

As you say, the 'Bad Science' and 'Bad Pharma' books go into this. In the case of drugs companies it shows how they can 'game' the system by selecting which results to publish and which to bury. (Sometimes literally!) For reasons that have nowt to do with audio, I really would urge people to read Ben's 'Bad Pharma' book. It contains some quite shocking details of how we end up paying for what goes on whilst being kept in the dark. It happens because most people don't know about it or don't understand how it's done.
 
Quoting the earlier reply: "Given the evidence that rapid switching tests are more sensitive than long-term home listening ... one might conclude that this was a non-issue. But having nothing to say is not the same as saying nothing."

Yes, it's almost certainly a non-issue, though I do occasionally wonder what sort of straw-clutching exactly we're dealing with.
 
Quoting the post above: "The point here is that, so far as I know, no scientist or engineer is claiming that cables *can't* 'sound different'. Just that some claimed mechanisms seem doubtful, whilst others are to be expected and are well understood. ... Thus we need to remove the uncontentious reasons where something other is being claimed."

If that is true, the nuance seems lost on most who are deeply entrenched on either side. I view myself as pretty central on the issue; I'm certainly a subjectivist, as I prioritise my own impression/taste/perspective way above any notion of "accuracy" (which I'd argue is impossible in audio anyway), but I am certain all the differences I hear could be measured and explained. I just feel many humans can hear quite subtle changes in capacitance, resistance, impedance, inductance or whatever, and probably better than those who stare at scopes and textbooks think they should be able to.
 
Quoting the post above: "Purely by chance, if you run enough tests on enough people *some* of them will give an 'all heads' outcome ... In the case of drugs companies it shows how they can 'game' the system by selecting which results to publish and which to bury."
One of the things I still have difficulty getting my mind round is that it is impossible to tell, just by looking at a set of data in isolation, that it has been cherry-picked. It reminds me of the thought experiment about whether you could tell how fast you were travelling (at constant velocity) if you were in a sealed box.
 
I have found that they can make a dramatic difference, although some power cables are probably overhyped, just like other products can be.
 
Of course it isn't difficult to conduct. It is, however, apparently more difficult to do so having done the controls I outline above. I say that because, of all the DBTs I hear about, almost none show that they have proven the sensitivity of the test to be adequate. Julf linked to a paper a while back which did address this, but I've forgotten the details. From what I recall it didn't actually give any results (in the sort of context we're talking about here), but it did set out what looked like a reasonable control methodology.

The links I provided were to the ITU standard test methodologies/recommendations. Not specific tests with results.

What I would like to hear is how you provide a good control - how do you know that a small but measurable difference is actually audible (in order to use it to test your methodology).

Can you provide a pointer to some small, but provably audible, difference that could be used to prove or disprove a test method to your satisfaction?
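One part of that question does have a clean answer: whatever physical difference is chosen as the control, the pass mark that would count as "provably audible" follows from an exact binomial test. A sketch of the arithmetic (the 5% significance level is an assumed convention, not something from the thread):

```python
from math import comb

def min_correct(trials, alpha=0.05, chance=0.5):
    """Smallest score out of `trials` that beats pure guessing at
    significance level `alpha` (exact one-sided binomial test)."""
    for k in range(trials + 1):
        tail = sum(comb(trials, m) for m in range(k, trials + 1)) * chance ** trials
        if tail <= alpha:
            return k

print(min_correct(10), min_correct(16), min_correct(25))   # 9 12 18
```

Note how few mistakes a short test permits: 9/10 is needed at ten trials. That is one argument for running more trials per listener, since the required hit rate falls as the trial count rises, making a genuinely small audible difference easier to demonstrate.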
 

