

Has anyone bought a product because of a recommendation on ASR?

I've never bought anything because of an ASR recommendation, but I have bought things based on the measured performance in the reviews that wasn't available elsewhere: a Gustard A18 and a Topping Pre90. I still use the Gustard; the pre I sold after a couple of years because it sounded identical to my BPBP (i.e. it sounded like nothing) and I didn't use the remote as much as I thought I would, so it was just cash sat on a shelf.
Now it is your cash, sat on my shelf. "Sounds like nothing" is all I want from a preamp.
 
I’m sitting down for this one…….
No mention of the errors in his results: different levels of attenuation are needed on the front end of the AP analyser for testing different DACs (depending on their potential output level), and these all affect the internal noise of the AP, and hence the results. When you are down at -120dB and lower these things matter. I reckon if you added sensible error bars to his results, his great SINAD graph ranking everything would look a lot less cut and dried.
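To put a rough number on that point: for uncorrelated sources, the noise and distortion powers of the device under test and of the analyser itself simply add, so once a DAC's own SINAD approaches the analyser's residual, the reading gets pulled down. A quick sketch (the figures here are illustrative assumptions, not the AP's actual spec):

```python
import math

def measured_sinad_db(dut_sinad_db: float, analyser_residual_db: float) -> float:
    """SINAD you'd actually read: the DUT's noise+distortion power and the
    analyser's own residual (both relative to a unit signal) add as powers."""
    p_dut = 10 ** (-dut_sinad_db / 10)
    p_analyser = 10 ** (-analyser_residual_db / 10)
    return -10 * math.log10(p_dut + p_analyser)

# A DAC with a true SINAD of 120 dB, measured on an analyser whose own
# residual sits at 122 dB, reads roughly 2 dB worse than it really is:
print(round(measured_sinad_db(120, 122), 1))  # → 117.9
```

A couple of dB is exactly the margin separating whole tiers of the SINAD ranking chart, which is why error bars would matter down at those levels.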

And you can't really call testing a sample of one of any mass-produced product conclusive, IMHO.

I’ve not been on ASR for a while as I decided the ‘science’ bit was not presented with enough detail to be that useful.

I respect what he is trying to do (I teach a course that involves designing and building electronics, and I have access to an AP test set and occasionally use it to test electronics), but the forum especially extrapolates everything far beyond reasonable science.
 
If that's the case then the "screwy RIAA curves" would show up in the testing and be expressed in the measurements. As an ex-electronics engineer I'm baffled as to why you would not want rigorous testing of all equipment, to ensure it meets specification and is thus fit for purpose.
It does show up in the tests, but it doesn't kill the SINAD which could still be amazingly good.
My point is that ASR puts very heavy weight on this particular measurement to the extent it routinely uses this number to rank product, and that can be highly misleading as an indicator of real world performance. That's all.
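For anyone skimming: SINAD folds signal power over the sum of noise and distortion power into a single dB figure, which is precisely why it can hide where the noise or distortion actually sits. A minimal sketch with made-up numbers:

```python
import math

def sinad_db(p_signal: float, p_noise: float, p_distortion: float) -> float:
    """Signal-to-noise-and-distortion ratio in dB. (Strictly SINAD is
    (S+N+D)/(N+D), but S/(N+D) is what review sites quote, and the two
    converge for any decent kit.)"""
    return 10 * math.log10(p_signal / (p_noise + p_distortion))

# Two very different devices can land on the same number - one noise-
# dominated, one distortion-dominated - yet both rank as "107 dB SINAD":
print(round(sinad_db(1.0, 1.9e-11, 0.1e-11), 1))  # noise-dominated    → 107.0
print(round(sinad_db(1.0, 0.1e-11, 1.9e-11), 1))  # distortion-dominated → 107.0
```

Two devices with identical SINAD can therefore have quite different distortion spectra, which is the sense in which a single ranked number can mislead.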
 
Bruno Putzeys' Balanced Pre (I guess). A DIY design by the designer behind the Hypex UcD/NCore amps and one of the co-founders of Purifi and Mola Mola :)

That's like me buying a motorbike based on HP and torque, and not actually sitting on it to see if it fits me!
This. ASR reviews seem to be focused on producing a number which can then be ranked, without proper consideration of whether that number is in any way meaningful. Also, the whole thing seems faux-objectivist because the reviews insist on not accepting that there are other reasons for buying (expensive) audio gear. Exactly like any other high-cost/high-involvement purchase, most people take many other factors into account - and high-end anything is usually terrible value for money anyway.

I wonder what Amir would make of a Sugden A21 or a DIY Linsley-Hood amp which both sound fabulous but measure pretty terribly IIRC?
 
The A21 measures OK, before clipping.

BPBP, yes, as above: a balanced preamp circuit (well, a buffer with attenuation) given away as a DIY design. Correctly built it offers better than -100dB on that all-important SNR, plus very low noise and excellent IMD performance.
 
I know that @sq225917 and @tuga contribute on ASR perhaps they will comment?

As do I.

The majority of my system is comprised of some of the best measuring equipment on ASR.

I believe that if adding more distortion through the replay system is what musicians intended they would have done so during the recording stage.

The only way to tell if a 'replay' system sounds good is to stand next to a musician playing a piano, cello, double bass, saxophone or other acoustic instrument.

With no 'replay' system.

Because that is what authentic reproduction of sound should sound like.
 
Soundstage have some tests in their suite I've not seen elsewhere, e.g. phase response for filters. This brings to light some unusual behaviour one wouldn't normally expect, e.g. https://www.soundstagenetwork.com/i...g-converter-measurements&catid=434&Itemid=577

The point is you can always expand the tests to get a better idea of the device under test. And different manufacturers focus on different things.

In terms of ASR, I like the multi-tone test, but output impedance is MIA.

For testers in general, I often wonder about testing under adverse EMI/RFI conditions. It's probably not done because there isn't a standard; if anything, I think testers go out of their way to ensure a clean environment in that respect, to avoid confounding variables.

Since we tend "to get what we measure" such questions are important.
 
Has anyone bought a product because of a recommendation on ASR ?

What product?

Did it meet your expectations (from its review)?

Did it replace and outperform a more expensive item?

I’m interested in some real world experience, from people who put their money down.

.sjb
Partially, yes. I bought a WiiM Mini based both on the ASR review (or more accurately its measurements when used as a digital throughput only) and that of Stereophile (IIRC), who also reviewed it and published measurements (which interestingly didn't tally entirely with ASR's, but were still good results). When it comes to digital (not including analogue stages), I 100% believe measurements tell you absolutely everything you need to know about the audio performance of the product.

As for anything else, no, I wouldn't buy anything based solely on ASR's reviews, and as above the only measurements of theirs I would even take into account are of the purely digital performance of any product. Their use of SINAD as the single gold standard of performance is misguided, so I ignore their overall product rankings, and when it comes to speakers they only ever review small "pro" ones or small old ones. I have no interest in such products, no matter how "flat" they measure.
 
Peter Walker's design goal for the Quad electrostatic loudspeaker was the lowest distortion possible, that's the D in SINAD.

That was back in the 1950s.

It has always been meaningful.
Indeed, Peter Walker was a consummate engineer. It is perhaps fortunate that he chose a type of loudspeaker that, in combination with a room, produced sound that still does a very good job of recreating the original performance and not just an impressive set of figures. Quite what the room does to the measured performance is a moot point but happily the end result is reproduction at home that sounds remarkably like the original music, at least in the case of classical acoustic music.

Rumour (IIRC) has it that he used a Sony music centre for playing music at home, so one might guess his interest was in engineering a superlative product rather than using it himself.
 
I have heard one, in an impressive looking system, and it sounded very good. Nine years ago, mind.
Apogees? Or are they clones? (Not that I'm aware of any clones, but it's been years since I've looked into Apogee speakers.)

Apogees are a speaker that I'd love to hear some day. I never did get the chance at any of the hi-fi shows I went to, but ever since Alvin Gold used them as part of a review of a Krell amp (can't recall which), I've held Apogees on a weird kind of pedestal.
 
standardised test results
But not standardised methodology. One of my problems with Amir's testing is that when he tests a product, he makes a decision about what input voltage he'll test it at and then publishes the results for that voltage. The problem being, he doesn't stick to the same input voltage for all products. That's a fundamental testing methodology flaw. I have some sympathy for his rationale, but it leaves the test results between products incomparable on principle, because they're not like-for-like tests. If he wants to do what he does, he should add the non-standard voltage test for specific products in a supplementary section of the results, but that test should not form part of his scoring or his "dashboard".

And yes, Amir's total refusal to accept when he's made mistakes, in my mind at least, makes his results unreliable.
 
Publishing the errors in his testing from the changing input voltages and attenuation in the AP analyser would help clarify the impact of these changes (anyone remember putting error bars on their data points in physics?). Also, he tests the unbalanced outputs of his DACs into a 100kohm load - what is a typical input impedance of a pre-amp? I think my two here are about 20k, so if the output stage is sourcing 5x the current it is under a significantly higher load; will that affect its performance?
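The pure level loss from that loading is easy to sketch as a potential divider (the 100 ohm output impedance below is a guessed figure for a typical DAC output stage, not a measurement). Note this only captures the voltage division; any extra distortion from the output stage sourcing 5x the current is a separate question:

```python
import math

def loading_loss_db(r_out_ohms: float, r_in_ohms: float) -> float:
    """Level drop across the potential divider formed by the source's
    output impedance and the load's input impedance."""
    return 20 * math.log10(r_in_ohms / (r_out_ohms + r_in_ohms))

print(round(loading_loss_db(100, 100_000), 3))  # into the AP's 100k load → -0.009
print(round(loading_loss_db(100, 20_000), 3))   # into a 20k preamp       → -0.043
```

So the level change itself is tiny for any sanely low output impedance; the more interesting unknown is whether distortion rises at the higher current draw, which is exactly what a test into a realistic load would reveal.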
 
what is a typical input impedance of a pre-amp
It varies significantly. For example, of two I know: my Krell was circa 90kohms, my current Hegel is closer to 10kohms.

NB: sorry, that's for the balanced input. Unbalanced, IME the majority of manufacturers are around 47-50kohms.
 
Has anyone bought a product because of a recommendation on ASR ?

What product?

Did it meet your expectations (from its review)?

Did it replace and outperform a more expensive item?

I’m interested in some real world experience, from people who put their money down.

.sjb

Nope - measurements are overrated. 💪
 
A lot of us use DACs direct to actives or power amps, so 10kohms typical.
My first and only post on ASR was responding to a question about how good my type of DAC was direct into a power amp. I noted that I didn't think the pre section in it was all that great and that I thought it sounded better through a decent preamp.

Apparently I was just being an idiot for thinking the question could be answered by comparing the sound (or I just had expectation bias). There was lots of talking about me, but never to me, so I deleted the post and listened to music instead.
 
I have never bought a product based on a review from ASR review, I would totally discount them. A bunch of deaf charlatans in my view.
 
To my mind, measurements are an essential part of product development and QC, but a less essential part of comparative evaluation. The thing is, we don’t really have a firm grip on what factors matter most, in affecting our perception. We don’t have science (as far as I’m aware) that understands how our ear/brain responds to sensory input, at least not down to the sort of levels we measure with the Klippel stuff. So we don’t have correlation between measurements and subjective experience beyond a fairly gross generalisation that ‘X tends to be perceived as’ whatever.

So my take is that ASR is probably fine for discovering the badly designed or poorly executed products, but not reliable for ranking products in terms of how well they convey the musical experience.
 
Apogees? Or are they clones? (Not that I'm aware of any clones, but it's been years since I've looked into Apogee speakers.)

Apogees are a speaker that I'd love to hear some day. I never did get the chance at any of the hi-fi shows I went to, but ever since Alvin Gold used them as part of a review of a Krell amp (can't recall which), I've held Apogees on a weird kind of pedestal.
Apogees, but rebuilt. Sadly they wouldn't work in my room, as I discovered when I tried Quad 2805s; I couldn't get them far enough from the front wall.
 

