
A properly constructed needledrop test would do this trivially. If I had the equipment and route to publish it I would have done so. Pfm now cannot publish it, and before then no one with the appropriate reputation of neutrality did so ...
I believe AudioSmile can publish recorded excerpts.

Equipment is no problem, where are you?

Paul
 
Well, if you're devising a test to test perception of music, using music is probably more relevant than feeding them toast*.


*for Darryl's benefit, this is a random choice of word. Toast has not, AFAICS, been referred to previously and isn't really relevant.
 
I believe AudioSmile can publish recorded excerpts.

Equipment is no problem, where are you?

Paul

I am near Worcester UK. I can record onto CDR from the preamp. I have a friend who owns a Naim power lead so could swap power leads on my CDP and record the changes in a sequence of random change/no change pairs. If someone knows how to convert the CDR into needledrops then we are there. My only concern is that I might be accused of rigging the test because I am perceived as being a "subjectivist". Much better if someone like Tony L did it.

Nic P
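The random change/no-change sequence Nic describes can be scripted in advance, so the person swapping leads just follows the script and the answer key stays sealed until after the listening. A minimal sketch (the function name, trial count and seed are illustrative, not part of anyone's actual test):

```python
import random

def make_trial_sequence(n_trials, seed=None):
    """Build a randomised answer key of 'change' / 'no change' trials.

    Hypothetical helper: the recorder follows this script when swapping
    (or not swapping) the power lead between takes; the key is revealed
    only after the listeners have logged their answers.
    """
    rng = random.Random(seed)
    return [rng.choice(["change", "no change"]) for _ in range(n_trials)]

answer_key = make_trial_sequence(10, seed=42)
print(answer_key)
```

Fixing the seed lets a third party regenerate the same key later and verify the test wasn't rigged, which addresses Nic's worry about being seen as a "subjectivist".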
 
Just rip the CDR onto a PC in the usual way, you should only need to record each state once per track, and using a range of tracks gives you more options come the listening.

Paul
 
Just rip the CDR onto a PC in the usual way, you should only need to record each state once per track, and using a range of tracks gives you more options come the listening.

Paul

I have never ripped a track onto my PC (which is a Mac). I am hoping to meet a friend who understands such things and will ask his advice/help. You may guess that I am not into this Computer Audio world.

Nic P
 
It's expensive and frustrating to contrive a system that does all these with a sufficiently high degree of realism to deserve the name 'high fidelity'. For as long as we have different listening priorities, there will always be a diverse market for differently voiced products.
"High fidelity" in audio is as well-defined as "high performance" is in automobilia. There are no standards for either, hence some marketeers have no reservation in labelling a midi-system with puny loudspeakers "hifi".

It would be much better if "high fidelity" had performance parameters (a bit like THX) that applied to the overall system (including the room). Personally, I think that whilst a Naim 555/552/500/Kan might give the listener a great deal of pleasure, it would fail an FR spec of 20Hz - 20kHz +/-3dB and hence could not be considered "high fidelity" by such a definition.

However, we also know that meeting such a spec is no guarantee of engagement or enjoyment. I have heard truly impressive Mark Levinson / Wilson Audio systems that probably meet that FR spec - but I've found the experience rather clinical and, frankly, a bit ho-hum once the novelty wore off.

James
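The +/-3dB window James proposes is easy to state as a mechanical pass/fail check. A toy sketch (the sparse measurement points and the function name are invented for illustration; a real check would use a smoothed in-room measurement):

```python
def meets_fr_spec(freq_db, f_lo=20.0, f_hi=20_000.0, tol_db=3.0):
    """Check a measured response (Hz -> level in dB) against a
    +/- tol_db window over [f_lo, f_hi], referenced to 1 kHz."""
    band = {f: db for f, db in freq_db.items() if f_lo <= f <= f_hi}
    ref = band.get(1000.0, 0.0)  # treat 1 kHz as the reference level
    return all(abs(db - ref) <= tol_db for db in band.values())

# A response that is 12 dB down at 20 Hz fails the +/-3 dB window:
measured = {20.0: -12.0, 40.0: -8.0, 1000.0: 0.0,
            10_000.0: -1.0, 20_000.0: -2.5}
print(meets_fr_spec(measured))  # prints False
```

Which is exactly the Kan scenario: plenty of pleasure, but a hard fail on the spec.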
 
It irritates me when people don't read the posts they are commenting on ... the poster said it was a POSSIBILITY that the test was flawed.

Nic P

I know. All I was doing was remarking that there is another possibility besides the one posited. Do try to keep up.
 
That goes without saying: it's the 'other' possibility. I'm not ruling either out. I'm just pointing out that this kind of test is only conclusive when the outcome is positive.

But retention of the null hypothesis (no difference between the items) isn't a negative result; it shows only that no difference was discernible. It is in effect a test of neutrality. If, however, a difference is discernible, one can then test whether one is better than the other in a positive or negative way.
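The arithmetic behind "a difference is discernible" is just a one-sided binomial test against guessing. A minimal sketch (the 16-trial numbers are only an example):

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided binomial p-value: the probability of getting at least
    `correct` answers right out of `trials` by guessing alone (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 correct is unlikely by chance alone; 9 of 16 is not.
print(round(abx_p_value(12, 16), 3))  # prints 0.038
print(round(abx_p_value(9, 16), 3))   # prints 0.402
```

Note the asymmetry this illustrates: a low p-value is evidence of an audible difference, but a high one only means the test failed to show a difference, not that none exists.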
 
I have a problem with music sounding more or less coherent, as I don't know what that means. As there are no units for coherence, I wouldn't know how to measure it, and so couldn't relate anything I might hear to differences in coherence.
Coherence is not measurable in any way I can conceive, but I can tell whether an audio system is sufficiently coherent by listening. Some of the artefacts of coherence are timing, clarity, intelligibility and a seamless transition from one driver to the other. I don't have measures for those either.

Phase shift of whatever number of degrees at the crossover is the normal state of affairs and, as I said in regard to the square wave, isn't audible, so I can't say what effect it might have on coherence, as we don't seem to have a definition for coherence anyway.
Phase shifting is, as you say, normal with filters (mechanical and electrical in combination). But if the transfer function is not accurate (rarely so because of non-linear native FR), then the phase shift is not as expected either. The key to loudspeaker coherence is making sure the relative phase between driver pairs is constant (for odd-order) or zero (for even-order). When the relative phase varies within the crossover region, you lose coherence. This has nothing to do with the inability of loudspeakers to reproduce square waves, which can be attributed to their limited bandwidth.

James
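James's band-limit point can be illustrated with a truncated Fourier series: a square wave is the sum of its odd harmonics, and simply discarding those above some cutoff, as any real driver must, already reshapes the waveform even with zero phase error. A small sketch (the 1 kHz fundamental and 20 kHz cutoff are arbitrary, chosen only for illustration):

```python
import math

def square_partial_sum(t, f0, f_max):
    """Fourier series of a unit square wave truncated at f_max: only the
    odd harmonics up to the band limit are kept, mimicking a loudspeaker
    (or a recording chain) of limited bandwidth."""
    total, n = 0.0, 1
    while n * f0 <= f_max:
        total += math.sin(2 * math.pi * n * f0 * t) / n
        n += 2  # square waves contain odd harmonics only
    return 4 / math.pi * total

# A 1 kHz square wave through a 20 kHz band limit keeps harmonics
# 1, 3, ..., 19 only; sampled mid flat-top it no longer sits at exactly 1.
print(square_partial_sum(0.00025, 1000.0, 20_000.0))
```

Push the cutoff below the third harmonic and only the fundamental survives: the "square" wave degenerates into a pure sine of amplitude 4/pi, with no crossover phase behaviour involved at all.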
 
This has nothing to do with the inability of loudspeakers to reproduce square waves, which can be attributed to their limited bandwidth.

A fair point, I think. We know that a loudspeaker can't accurately reproduce a square wave, so we don't really know whether what emerges from the loudspeaker with the unfiltered signal is distorted in a way which makes it look more like the filtered signal.

One problem with A/B testing is that people listen for different things. A gross difference in the amount of bass is likely to be perceived by most people, but fewer will notice if the timing is a little off with one DUT compared to the other, and some listeners will be sensitive to phase changes where others admit they aren't. So A/B testing might give us gross differences, but is unlikely to show us the more subtle, but ultimately more musically satisfying, differences.

My personal test is whether, over a period of weeks, I find myself listening to more (and more varied) music, or less. If less, then the DUT is not as good as what it replaced.
 
Coherence is not measurable in any way I can conceive, but I can tell whether an audio system is sufficiently coherent by listening. Some of the artefacts of coherence are timing, clarity, intelligibility and a seamless transition from one driver to the other. I don't have measures for those either.


Phase shifting is, as you say, normal with filters (mechanical and electrical in combination). But if the transfer function is not accurate (rarely so because of non-linear native FR), then the phase shift is not as expected either. The key to loudspeaker coherence is making sure the relative phase between driver pairs is constant (for odd-order) or zero (for even-order). When the relative phase varies within the crossover region, you lose coherence. This has nothing to do with the inability of loudspeakers to reproduce square waves, which can be attributed to their limited bandwidth.

James

So, sadly, you're describing one meaningless word with several others. What are the units for timing, clarity, etc.? These don't have units of measure either.

You know what you mean, but I have no idea what you mean. That's the problem relating subjective impressions to others.

S.
 
I changed a Naim 250 in my main system for first a TEAD Linear A and then a Hovland Radia. Could I spot the differences 100% of the time in a blind test ... doubt it. Sue and I now listen to our system about five times as much as we did with the 250. I think this supports your view.

Nic P

You can do long-term blind tests in principle - I think this is an interesting area to investigate, though I believe some work has been done on the effectiveness of short vs long listening tests to identify differences.

Tim
 
So, sadly, you're describing one meaningless word with several others. What are the units for timing, clarity, etc.? These don't have units of measure either.
I've already conceded that there are no measures for these attributes, but it doesn't mean they don't exist. What's your measure for excellent vs average wine?
 
So, sadly, you're describing one meaningless word with several others. What are the units for timing, clarity, etc.? These don't have units of measure either.

You know what you mean, but I have no idea what you mean. That's the problem relating subjective impressions to others.

S.

This is the problem I'm trying to illustrate: it's not that these terms are not 'scientific' per se: we all know what they mean, and relate them to specific listening impressions. The issue is that, currently, we don't have a scale to measure them by, or a convenient method by which to examine them mechanically. It used to be the same with temperature . . .

If we're going to invoke the scientific method, we have to use it all the way - at least we should be conscious of what we can't do with it at present, and seek to enlarge the scope of its power in order to bring everything pertinent under objective scrutiny.
 
I've already conceded that there are no measures for these attributes, but it doesn't mean they don't exist. What's your measure for excellent vs average wine?

If wine = music then hi-fi = glass.

What's your measure for an excellent vs average glass and does wine taste better from an excellent glass?

It kind-of does, but the wine is the same.

Tim
 
This is the problem I'm trying to illustrate: it's not that these terms are not 'scientific' per se: we all know what they mean, and relate them to specific listening impressions. The issue is that, currently, we don't have a scale to measure them by, or a convenient method by which to examine them mechanically. It used to be the same with temperature . . .

If we're going to invoke the scientific method, we have to use it all the way - at least we should be conscious of what we can't do with it at present, and seek to enlarge the scope of its power in order to bring everything pertinent under objective scrutiny.

I don't see the problem. Assessing a DAC? Blind test it, and if DAC A is brimming over with better timing, coherence, inky blacks etc., so that you can easily distinguish it from DAC B, then we have learned that DAC A is "better" even if we are not sure what to measure.

If, on the other hand, DAC A claims better timing, coherence, air, and inky blacks, but in blind testing it is indistinguishable from cheapo DAC B, then we have learned that the claims were marketing guff.

Tim
 
In DAC tests of all kinds, there tends to be little agreement about which is 'best'. As I've stressed, the inherent characteristic of each DAC - inevitable, given their varying design - hits or misses individuals' subjective musical pleasure centres. That's partly why we have a diversity of models on the market.

You flat-out cannot equate the credibility of biologically-oriented blind medical trials with psychologically-oriented blind tests for audio perception.
 
In DAC tests of all kinds, there tends to be little agreement about which is 'best'.

I just used DACs as an example. Listen for all the mystical characteristics of music you like, and for as long as you like, but do so blind and compare with others blind and you will discover whether there are differences and be able to assess them.

Tim
 

