How do the 'non-subjectivists' choose their hi-fi systems?

If you exceed the bandwidth limit or maximum signal level of an amp, bad things can happen. Don't do that.
OK, understood. But sometimes those parameters seem to get exceeded in 'normal' operation. You could always revert to the idea that this means the amp isn't competently designed, I suppose, but I'm not sure I agree. Jim seems to be talking about a 'rate of change' of a parameter which exceeds anticipated limits, eg a particularly demanding transient, perhaps. If so, then what you propose would, presumably, preclude playing such music through the device?
 
Isn't it your turn to demonstrate that the said AES papers are following standard DOE protocol?

Nope. You're the one making the contentious argument that pretty much the bulk of pro tests are flawed. Simply referencing a webpage about the "DOE" doesn't show that. Only that you can google a webpage.

Finding at least a few pro journal papers you can reference that agree with you would be a start. But note that test reports in my experience show that the methods used vary in accord with the purpose, etc, of the test. So you'd need to avoid cherry picking if you want them to evidence that the same method should always be applied in all listening tests for the results to be reliable. So you'd need some diversity in the nature of what was being tested, etc. Ideally, you should be able to find some pro papers/reports we can read that agree with you that all listening tests have to be as you say.

I'm happy for people to read the existing reports and papers and make up their own minds. I'd just repeat the above point that the details will vary with the nature of the test, and that some tests will be better run than others. So examples vary. Thus you need to avoid cherry picking if you want to draw an overall conclusion.
 
Thanks Jim, if I understand correctly (not, by any means, a given...) this is the sort of potential problem I was trying to articulate in post #945 above.

If so, then this makes mansr's assertions about the predictability and knowability of system behaviour by being able to know the transfer function, somewhat more complicated in the real world?

If curious have a look at
http://jcgl.orpheusweb.co.uk/history/sa_1990-2/ChaosAndNoise.html
and read the bits on 'chaotic oscillations'. There is also a pdf of the old
article I wrote on it for EW here:

http://jcgl.orpheusweb.co.uk/history/sa_1990-2/AToolkitForChaos.pdf

You can see how the combination of nonlinearity, a 'state memory', and feedback can go Ooops! No real audio amp is likely to do this if the designer had a clue. But effects like this can catch out the unwary. e.g. in low-bit processing of IIR filters, but also in analog stuff.
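As a concrete illustration of the low-bit IIR point (my own toy example, not from the articles above): a recursive filter whose internal state is quantised can sustain a "zero-input limit cycle", oscillating indefinitely after the input has gone to zero, even though the same filter with exact arithmetic would decay to silence.

```python
# Toy illustration: a first-order IIR filter y[n] = a*y[n-1] + x[n]
# with the state rounded inside the feedback loop (a crude model of
# low-bit arithmetic). With |a| < 1 the exact filter decays to zero,
# but the quantised one settles into a sustained oscillation.

def iir_quantized(x, a=-0.9, y0=10):
    """First-order recursive filter with the state rounded to integers."""
    y, out = y0, []
    for xn in x:
        y = round(a * y) + xn   # quantisation inside the feedback loop
        out.append(y)
    return out

# Zero input, non-zero initial state: the output never dies away.
print(iir_quantized([0] * 12))
```

With exact arithmetic the same recursion shrinks by a factor of 0.9 each step; the rounding stops it ever getting smaller than the quantisation step, so the tail oscillates forever.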
 
OK, understood. But sometimes those parameters seem to get exceeded in 'normal' operation. You could always revert to the idea that this means the amp isn't competently designed, I suppose, but I'm not sure I agree. Jim seems to be talking about a 'rate of change' of a parameter which exceeds anticipated limits, eg a particularly demanding transient, perhaps. If so, then what you propose would, presumably, preclude playing such music through the device?
Audible sound is limited to ~20 kHz. Music can't be more demanding than that.
 
The ultrasonic noise should have been no surprise. It's impossible to make a modulator without it. 1-bit sigma-delta DACs had been around for quite some time too. The only new thing was using that signal in the distribution format. On the second part, you are almost right. It's the modulator (ADC/encoder) that goes crazy. The DAC is just a plain old linear low-pass filter.
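For anyone curious why the ultrasonic noise is unavoidable, here is a minimal sketch (my own, with made-up figures) of a first-order 1-bit sigma-delta modulator: the integrator-plus-comparator loop forces every output sample to be full scale, so the quantisation error is huge sample by sample, and the loop merely pushes that error up in frequency where a low-pass filter can remove it.

```python
# Sketch of a first-order 1-bit sigma-delta modulator: the loop
# integrates the error between input and the fed-back output bit,
# shaping the quantisation noise towards high frequencies.

def sigma_delta(samples):
    """Encode samples in [-1, 1] as a +/-1 bitstream."""
    integrator, bits = 0.0, []
    for s in samples:
        bit = 1 if integrator >= 0 else -1   # 1-bit quantiser
        integrator += s - bit                # integrate the error
        bits.append(bit)
    return bits

# A DC input of 0.25 gives a bitstream averaging ~0.25, even though
# every individual output sample is +/-1 (i.e. wildly "wrong").
bits = sigma_delta([0.25] * 4000)
print(sum(bits) / len(bits))
```

The average (the low-pass-filtered output) tracks the input; the difference between the coarse bitstream and that average is the shaped noise that ends up in the ultrasonic region.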

Yes. Haze of memory. :)
 
FWIW I learned early on as someone working on analogue electronics for measurements that "signal conditioning" was essential. In particular to define the bandwidth of what you would let into any active electronics. So, for example, I always put a passive filter at the front of a power amp to roll off ultrasonic garbage or near-dc. This makes it easier to dodge TID because, if the max input signal *level* before the filter is below the specified overload voltage, the slew rate presented to the active elements is kept down. (Perhaps worth adding that this also means you can reduce problems due to RF getting in and causing trouble.) So 'TID' wasn't a surprise for me as I'd had to think about this before I started working on audio for a living! I was surprised that some audio engineers were surprised! :)
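The arithmetic behind that reasoning is simple: for a sine of amplitude A and frequency f, the peak slew rate is 2*pi*f*A, so capping both the level and the bandwidth caps the slew demand on the active stages. A quick back-of-envelope check (my numbers, not Jim's):

```python
# Peak slew rate of a band-limited sine: dV/dt(max) = 2*pi*f*A.
# Limiting input level AND bandwidth bounds the slew demand.
import math

def peak_slew_rate(freq_hz, amplitude_v):
    """Peak dV/dt of a sine wave, in volts per second."""
    return 2 * math.pi * freq_hz * amplitude_v

# Assumed example figures: 2 V peak input, 50 kHz passive roll-off.
worst_case = peak_slew_rate(50e3, 2.0)
print(f"{worst_case / 1e6:.2f} V/us")
```

So under those (illustrative) limits the active circuitry never sees more than a fraction of a volt per microsecond, which even a modest amplifier stage can follow linearly.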
 
Audible sound is limited to ~20 kHz. Music can't be more demanding than that.

However you may need to avoid having a filter that affects what *is* audible. Hence I tended to go for a wider bandwidth just to keep the response flatter up to 20kHz. And TBH some people can hear over 20kHz. Not that I'd notice these days. :-/

Brunch....
 
I may be mistaken, but I got the distinct impression that a fast rise time of a transient could entail component frequencies which easily exceed 20kHz.

Yes. But it becomes a question of if you can hear their presence/absence. Tricky, if only because, say, a loudspeaker may distort the result and confuse the issue. IIRC it is accepted that some people can hear up to more like 24kHz or higher. So it tends to be better to err on the side of allowing a wider bandwidth even if it annoys amp and speaker designers. :)
 
However you may need to avoid having a filter that affects what *is* audible. Hence I tended to go for a wider bandwidth just to keep the response flatter up to 20kHz. And TBH some people can hear over 20kHz. Not that I'd notice these days. :-/
Put a filter at 50 kHz then and call it a day.
 
Yes. But it becomes a question of if you can hear their presence/absence. Tricky, if only because, say, a loudspeaker may distort the result and confuse the issue. IIRC it is accepted that some people can hear up to more like 24kHz or higher. So it tends to be better to err on the side of allowing a wider bandwidth even if it annoys amp and speaker designers. :)
Yes, some young people can hear frequencies above 20 kHz if they are quite loud. The spectral density of music tends to follow a 1/f-ish curve, so those high frequencies are still too low in amplitude for anyone to hear, and even if they could, they'd be masked by the much louder content down below 5 kHz.

And then, as you say, there's the speakers. The frequency response of many falls off a cliff around 25 kHz. Some extend another 10 kHz, and a few go past 50 kHz. Pretty much the same is true for microphones used in music recordings.
 
I may be mistaken, but I got the distinct impression that a fast rise time of a transient could entail component frequencies which easily exceed 20kHz.
In case you didn't know, rise time and bandwidth are related. Given a bandwidth, we can calculate the minimum rise time possible. Conversely, given a rise time, there is a minimum required bandwidth.
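For a single-pole (RC-like) system the relation is the familiar t_r ≈ 0.35 / BW, where t_r is the 10–90% rise time and BW the −3 dB bandwidth (the 0.35 is ln(9)/2π). A small sketch of both directions of the calculation:

```python
# Rise time <-> bandwidth for a first-order low-pass:
# t_r (10-90%) = ln(9) / (2*pi*BW) ~= 0.35 / BW.
import math

def rise_time(bandwidth_hz):
    """10-90% rise time of a first-order low-pass, in seconds."""
    return math.log(9) / (2 * math.pi * bandwidth_hz)

def min_bandwidth(rise_time_s):
    """Bandwidth needed to support a given 10-90% rise time."""
    return math.log(9) / (2 * math.pi * rise_time_s)

# A 20 kHz bandwidth cannot reproduce edges faster than ~17.5 us:
print(f"{rise_time(20e3) * 1e6:.1f} us")
```

Which is the point being made: a transient with a sub-microsecond edge implies spectral content far above 20 kHz, and conversely a 20 kHz channel smooths any such edge out.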
 
If curious have a look at
http://jcgl.orpheusweb.co.uk/history/sa_1990-2/ChaosAndNoise.html
and read the bits on 'chaotic oscillations'. There is also a pdf of the old
article I wrote on it for EW here:

http://jcgl.orpheusweb.co.uk/history/sa_1990-2/AToolkitForChaos.pdf

You can see how the combination of nonlinearity, a 'state memory', and feedback can go Ooops! No real audio amp is likely to do this if the designer had a clue. But effects like this can catch out the unwary. e.g. in low-bit processing of IIR filters, but also in analog stuff.

Why didn't some designers have a clue at the time of Otala's article? Or in other words why were they designing amps which measured good but sounded bad?

Was it because there wasn't a measurement thus the problem was either unknown or ignored?
 
Why didn't some designers have a clue at the time of Otala's article? Or in other words why were they designing amps which measured good but sounded bad?

Was it because there wasn't a measurement thus the problem was either unknown or ignored?

I'm not sure who in audio did/didn't know. Most of the fuss tended to be in the user/consumer mags. In practical terms I suspect most decent amps were fine for reasons given above. e.g. There is often very little in recorded music above about 20kHz. Back then the ability of things like mics (as has been pointed out above), disc cutters, and then cartridges and speakers tended to make mince of anything at 20kHz or higher that tried to be at a high level.

So in practice the snark was a boojum to a large extent. The difficulty was some setups and source material might do it, but others wouldn't. But in general, people would be OK if they'd bought decent well-regarded kit.

Many mics have had a ring-and-die well *below* 20kHz. People still use these because they like the sound and know how to set them up. Similarly if you look at what can be cut into a stereo groove, it dies at HF, particularly at the end of a side of an LP. Cf these as examples

http://www.audiomisc.co.uk/HFN/LP1/KeepInContact.html

http://www.audiomisc.co.uk/HFN/LP2/OnTheRecord.html

I've tried to do a similar survey of mics. But many have no reliable published measurements...
 
Why didn't some designers have a clue at the time of Otala's article? Or in other words why were they designing amps which measured good but sounded bad?

Was it because there wasn't a measurement thus the problem was either unknown or ignored?

Engineers designed amplifiers to have very low THD figures into 8 ohms at 1 kHz because they were instructed to do so by the sales team. This was because a large proportion of the public were heavily weighting this figure in their purchasing decisions. This still goes on today, albeit with different largely irrelevant figures getting the marketing emphasis.

Whatever the marketing term (SID/TID/...) what is the basis for assuming a competently trained engineer of the time (or decades earlier for that matter) would not be aware of this limit to linear behaviour? Don't know about the earlier authors but the last author John Curl posts on audiophile forums where his level of technical competence can be determined separately from his audiophile credentials.
 
Engineers designed amplifiers to have very low THD figures into 8 ohms at 1 kHz because they were instructed to do so by the sales team.

If I understand it correctly, according to Otala some engineers designed amplifiers to have very low THD figures by resorting to high levels of negative feedback, and this was the cause of the audibly bad sound. Otala's TIM measurement allowed engineers to balance the two, perhaps?

The BBC was also after measurements that better correlated with listening:

A new distortion measurement - Better subjective-objective correlation than given by t.h.d.
by R. A. Belcher, B. Sc., Ph.D., M.I.E.E., BBC Research Department

https://www.keith-snook.info/wirele...s-World-1978/A new distortion measurement.pdf
 
If I understand it correctly, according to Otala some engineers designed amplifiers to have very low THD figures by resorting to high levels of negative feedback, and this was the cause of the audibly bad sound. Otala's TIM measurement allowed engineers to balance the two, perhaps?
It's not the negative feedback per se that causes problems. As discussed in the paper I linked above (also by Otala), the distortion occurs if the amplifier's open-loop bandwidth is exceeded and this leads to internal voltage clipping.
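A crude numerical sketch of that mechanism (my own toy model, not from the paper): model the amplifier as a one-pole feedback loop whose internal stage can only change its output at a limited rate. A fast transient drives the error signal beyond what the internal stage can follow, so the output ramps at the slew limit instead of tracking the input; transient intermodulation in miniature.

```python
# Toy model of slew-induced distortion: a unity-feedback one-pole
# "amplifier" whose internal stage has a hard slew-rate limit. A fast
# step makes the output ramp (slew) instead of settling exponentially.

def slew_limited_amp(signal, dt, gain_bw=1e6, max_slew=0.1e6):
    """One-pole follower with internal rate limiting (volts/second)."""
    out, y = [], 0.0
    for x in signal:
        dydt = gain_bw * (x - y)                    # ideal linear response
        dydt = max(-max_slew, min(max_slew, dydt))  # internal limiting
        y += dydt * dt
        out.append(y)
    return out

dt = 1e-7                          # 100 ns time step
step = [1.0] * 200                 # 1 V step over a 20 us window
out = slew_limited_amp(step, dt)
print(f"output after 5 us: {out[49]:.2f} V")
```

While the stage is slewing, the feedback loop is effectively open and the error voltage is far larger than the design assumed; that is the internal overload Otala describes, and it happens well before any visible clipping at the output terminals.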
 

