

Blind ABX test shows difference between 44.1 and 88.2

Well, unless you can make an argument, I'd say you're making stuff up.

See Werner's post.

Paul
Sorry, Paul, had to leave for work before finishing post.

What would you think is a valid way of demonstrating the difference is audible as the last part of the quote indicates? Blaming the replay chain as a possible flaw in the blind testing seems a weak get-out to me.
 
I've long struggled with the feeling that digital sound, and I speak as an early convert, was in some way compromised, more so than can be explained by mastering differences. I'm led to the conclusion that the problem fundamentally isn't with playback equipment - all that different DACs etc do is more or less mask the problem - but with the ADC process. In support of that I'd say that, at least in the field of classical music, I find modern digital, ie recorded and mastered recently, better than earlier digital sound. Werner's explanation also seems to validate this for me. Having said all that, it's not significant enough to stop me enjoying the music, which is the point of the exercise after all!
 
Those genuinely interested in the four-year-old paper which is the subject of this thread may want to look at this thread on Hydrogen Audio.

http://www.hydrogenaudio.org/forums/index.php?showtopic=82264&st=25

I recommend ignoring the bits by Arnie Krueger, and the discussion of the SRC (not because it's pointless but because it has been canvassed above), and looking at the comments made, eg in post 40, about the statistical analysis. One of the authors of the paper joined the thread and made one post. Crucially, she was specifically asked to provide the raw data and did not. Perhaps she just lost interest, but having joined the discussion it seems a bit odd to me that she dropped out just when some rather penetrating questions were asked. I'm not massively impressed.

I have not looked at the paper, but on the strength of the points raised in that thread, which do not appear to have been answered, it seems distinctly possible that the statistical analysis was flawed: if I understand it correctly, the significant results may have been cherry-picked, and no adjustment may have been made for the likelihood of a few results appearing significant by chance - the classic problem of x% of people being "proved" to be psychic when x people out of 100 correctly "predict" the result of y coin tosses.
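For the avoidance of doubt about the arithmetic behind that coin-toss analogy, here is a minimal sketch (pure Python, with my own illustrative numbers, not those of the paper) of how somebody is likely to look "significant" by luck alone when enough listeners are tested:

```python
from math import comb

def p_at_least_k(n_trials: int, k: int, p: float = 0.5) -> float:
    """Chance of a pure guesser getting at least k of n_trials ABX trials right."""
    return sum(comb(n_trials, i) * p**i * (1 - p)**(n_trials - i)
               for i in range(k, n_trials + 1))

def p_any_listener_passes(n_listeners: int, n_trials: int, k: int) -> float:
    """Chance that at least one of n_listeners 'passes' by guessing alone."""
    return 1 - (1 - p_at_least_k(n_trials, k)) ** n_listeners

# One guesser scoring 9/10 or better is unlikely (about 1.1%)...
print(p_at_least_k(10, 9))
# ...but among 20 guessers, the odds someone does are about 19%.
print(p_any_listener_passes(20, 10, 9))
```

Which is why, without the raw data, one cannot tell whether the reported significances survive a correction for multiple comparisons.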

This is a more fundamental objection than the use of a dodgy SRC, because people can reasonably point out that record companies may also use dodgy SRCs.

Statisticians and those who have read the paper feel free to correct me if I have misunderstood this.

btw on the HA thread there is a reference to a forthcoming paper purporting to show significant results for DSD vs PCM. I tremble with anticipation.
 
I'd be interested to know, entirely without prejudice, whether there's a robust method of creating downsampled hi-res files so that one could do this kind of trial at home. That's to say, is there a piece of software that can downsample e.g. a 24/96 file to 16/44 without introducing artefacts that would render the comparison suspect?

Apologies if this is the height of naivety.
 
is there a piece of software that can downsample e.g. a 24/96 file to 16/44 without introducing artefacts that would render the comparison suspect?
I think the freeware 'SoX' is considered adequate, but I'm quite prepared to be contradicted.

Paul
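For what it's worth, a minimal sketch of the sort of SoX invocation usually suggested for this (the filenames are placeholders, and I'm assuming the commonly recommended chain: SoX's 'rate' effect at very-high quality, then TPDF dither down to 16 bits):

```python
import shlex

def sox_downsample_cmd(infile: str, outfile: str,
                       rate: int = 44100, bits: int = 16) -> list[str]:
    """Build a SoX command line: very-high-quality rate conversion
    ('rate -v') to the target sample rate, then dither down to the
    target word length."""
    return shlex.split(f"sox {infile} -b {bits} {outfile} rate -v {rate} dither")

# With SoX installed, one would run something like:
#   subprocess.run(sox_downsample_cmd("in_2496.flac", "out_1644.flac"), check=True)
print(sox_downsample_cmd("in_2496.flac", "out_1644.flac"))
```

The order matters: resample first, dither last, so the dither sits at the final 16-bit level.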
 
Sorry, Paul, had to leave for work before finishing post.

What would you think is a valid way of demonstrating the difference is audible as the last part of the quote indicates? Blaming the replay chain as a possible flaw in the blind testing seems a weak get-out to me.
I've only read the abstract.

They found that downsampled 88k2 was distinguishable from 'native' 44k1. They found that on most material 'native' 44k1 was indistinguishable from 'native' 88k2. And on just some material the native rates were distinguishable, but the native rates are achieved by SRC in the ADC.

So unless they can show they've eliminated the varied sources of SRC as the issue then nothing much can be concluded about the sample rates as such.

Which is a shame; I'd really like to confirm that more bits and higher rates are necessary for ultimate fidelity, rather than just being a way of avoiding badly engineered software.

Paul
 
I speak as an early convert, ... I'm led to the conclusion that the problem fundamentally isn't with playback equipment ... but in the ADC process.... Werner's explanation also seems to validate this to me.

My above story holds from the early 90s on. By 'early digital' I would understand CD's day one, an era that differed fundamentally in production methods. I'll just list a few facts about the first decade.

-ADC chips were decidedly non-delta-sigma (eg successive approximation), running at the target rate: no oversampling, no digital anti-aliasing filters, and complex multi-pole analogue AA filters instead.

-target rates during recording were non-standardised, with 44.1k, 48k, and 50k globally in use depending on the make and type of (multi track) recorder. Word lengths ranged between 14 (Denon) and 18 (Decca) bits. Probably most machines were 16 bit. Actual ADC performance was probably one bit less than nominal.

-mixing was generally done in the analogue domain, with playback through non-oversampling analogue-filtered DACs.

-there were a couple of all-digital workflows, notably Soundstream's. While it is easy to imagine that editing(*), and even the mixing of a limited number of channels, must have been feasible back then, it is not clear exactly how much processing could be done on these systems, how specialist effects were handled, and what sort of numerical accuracy was maintained. There is an intriguing lack of documentation about this.

-the industry as a whole was totally scared of hitting 0dBFS (imagine that!). As a result ADCs were driven several dB below peak input, while at the bottom of the range they were plagued by non-linearities, noise, and perhaps even raw quantisation noise. So what was laid down on the tracks probably had ~12-13 bits of real resolution.

-when SRC was needed this was done with fairly crude digital hardware (Studer did some original research here, IIRC), or by cheating, with an additional DAC/ADC step.

Given this it does not surprise me that the first decade of consumer digital sounded less than optimal. Even so a number of fine recordings have been made in that era!


I'd be interested to know, entirely without prejudice, whether there's a robust method of creating downsampled hi-res files so that one could do this kind of trial at home.

http://www.pinkfishmedia.net/forum/showthread.php?t=141881



(* Tape editing was done with razor and splice tape. Moving this to a computer-like system was what made Karajan exclaim 'all else is gaslight' - not the actual sound, about which he probably cared not a jot.)
 
I'd be interested to know, entirely without prejudice, whether there's a robust method of creating downsampled hi-res files so that one could do this kind of trial at home. That's to say, is there a piece of software that can downsample e.g. a 24/96 file to 16/44 without introducing artefacts that would render the comparison suspect?

Apologies if this is the height of naivety.
No - it's a very smart question. The short answer is yes, there are loads of them, and SoX is great. It will do lots of other things besides.

The only qualification to that answer is that "introducing artefacts which would render the comparison suspect" is arguably the very thing one is seeking to test. So what sort of artefacts might be detectable? Well, that's the problem. Are we asking about the level which the general body of human knowledge might predict to be detectable, or the specialised body of audiophile knowledge which predicts that everything is detectable?

However, people have distributed files for comparison on websites several times. I think Plutox has done it here, as has Werner.

Frankly I would not get worked up about trying to explain this study by reference to the artefacts of SRC, because it has a very strong smell of what one might rather politely describe as naive statistics (cf Meyer & Moran, which IIRC has had its raw data available for re-examination). The non-availability of that data for the Pras & Guastavino paper is not trivial or nit-picking.

That said, it's great to try these things out for oneself. PS: on that subject, do have a go at the 24-bit vs 16-bit test at http://archimago.blogspot.co.uk/
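If anyone wants a feel for what that 24-bit vs 16-bit comparison actually involves, here is a minimal sketch (pure Python, illustrative only, not what any particular tool does internally) of requantising a signed 24-bit sample to 16 bits with TPDF dither - the step a proper conversion tool performs on every sample:

```python
import random

def to_16bit(sample_24: int, rng: random.Random) -> int:
    """Requantise one signed 24-bit sample to 16 bits with TPDF dither:
    add triangular noise of +/- one 16-bit LSB before rounding, which
    decorrelates the quantisation error from the signal."""
    lsb = 1 << 8                                  # one 16-bit LSB in 24-bit units
    tpdf = (rng.random() - rng.random()) * lsb    # triangular PDF, zero mean
    q = round((sample_24 + tpdf) / lsb)           # round to nearest 16-bit step
    return max(-32768, min(32767, q))             # clamp to the 16-bit range

rng = random.Random(0)
print([to_16bit(s, rng) for s in [0, 123456, -123456, 8388607]])
```

The audible question the test poses is simply whether that extra dithered quantisation noise, some 48 dB further down than 16-bit noise would be in a 24-bit file, can be heard at normal listening levels.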
 
Adam,

I've looked everywhere so sorry if you have mentioned this previously, but what exactly is the system that you use for comparison purposes and general listening?
 
Adam,

I've looked everywhere so sorry if you have mentioned this previously, but what exactly is the system that you use for comparison purposes and general listening?
Not exactly sure what you mean by that, but up until a couple of months ago I had been using an SBT into a PS Audio PerfectWave DAC II into Genelec 8040s and an Adam sub. I also used DRC in software, with the filter created by Audiolense. Sometimes I used a laptop into the PS Audio instead.

However, a couple of months ago I moved into my girlfriend's house. I have sold the PS Audio and the Genelecs and am listening via my trusty SBT directly into an inherited Naim system with B&W 805s. I am running LMS on a NAS which does not have the grunt for DRC. The room is completely different from my old one, and the location of the speakers and furniture is non-negotiable. So right now I'm not really doing much comparison listening, although plenty of general listening. I don't know whether I will bother getting an external DAC. If I do, I suspect it will be to do the DRC on, as I'm more concerned about that than the limits of the SBT's mere 17-bit resolution (perhaps a DSpeaker, an M-DAC 2, or the new communitysqueeze thing if it ever comes out).

Actually my SBT was modded years ago by Fidelity Audio with better clocks etc, so it will probably be somewhat better than stock.
 