DevillEars
Dedicated ignorer of fashion
Interesting, is there really a machine that can measure soundstage? If we play an identical recording through different DACs, some listeners would hear different soundstages. But the data would be identical, so I'd assume a machine would measure the same for both, yet the brain would disagree.
I do find the whole thing fascinating. A clarinet has a certain waveform, a piano has a different waveform, and so on. Yet when we mix a whole band together, the speaker can't move independently for each instrument. It moves along a single waveform that conveys every instrument - different tones, volumes, and chords - combined into one. The brain effortlessly puts this together and hears it as separate things. I'd be surprised if any machine could match this feat.
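That single combined waveform is just the sample-by-sample sum of the parts - superposition. A minimal sketch of the idea (my own illustration, assuming NumPy; the pure tones standing in for a real clarinet and piano are a big simplification, since real instruments carry a rich harmonic series):

```python
import numpy as np

# Two "instruments" as pure tones (stand-ins for real instrument waveforms).
sample_rate = 44100
t = np.arange(0, 0.01, 1 / sample_rate)  # 10 ms of audio

clarinet = 0.5 * np.sin(2 * np.pi * 466 * t)  # ~Bb4 fundamental
piano    = 0.3 * np.sin(2 * np.pi * 262 * t)  # ~C4 fundamental

# The speaker cone can only follow ONE waveform: the sum.
mix = clarinet + piano

# Superposition: the mix is exactly the sample-by-sample sum,
# yet the ear/brain "unmixes" it back into separate instruments.
assert np.allclose(mix, clarinet + piano)
```

The cone never "knows" there were two instruments; all it gets is `mix`.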
Careful! We're creeping into psychoacoustics territory...
The ears+brain work together in strange ways to recreate a "soundstage" - lateral positioning is decoded mainly from relative left-right emphasis, while depth is decoded mainly from phase-shift detection, much of it derived from the mix of direct and reflected sound. The key is the way in which the brain "decodes" the complex combination of waveforms received by the ears and passed to the brain as electrical impulses.
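One of these inter-channel cues is easy to caricature in code: the brain is thought to exploit the tiny arrival-time difference between the ears, and the same delay can be recovered from two channels by cross-correlation. A hedged sketch (my own, assuming NumPy; the signal and delay are invented test values, not a claim about how the brain actually computes this):

```python
import numpy as np

sample_rate = 44100
rng = np.random.default_rng(0)
source = rng.standard_normal(2205)  # 50 ms burst of noise as a test signal

# Right channel lags by 20 samples (~0.45 ms), as if the source sat to the left.
delay_samples = 20
left = source
right = np.concatenate([np.zeros(delay_samples), source[:-delay_samples]])

# Cross-correlate and find the lag where the two channels line up best.
corr = np.correlate(right, left, mode="full")
lag = np.argmax(corr) - (len(left) - 1)
print(f"Estimated inter-channel delay: {lag} samples "
      f"({1000 * lag / sample_rate:.2f} ms)")  # → 20 samples (0.45 ms)
```

A machine can recover *this* cue trivially; it's turning the full set of cues into a perceived soundstage that is the hard part.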
If we accept this explanation of how we perceive a soundstage, then any attempt to construct a "machine" that can both detect and measure "soundstaging" is likely to be a monumental exercise. To create such a machine, the creator (not capitalised) would need an accurate and detailed understanding of just how this perception works, and I'm not sure science yet understands ear:brain functioning well enough to enable such a machine to be built.
The other aspect lies in the relative intensity of some of the spatial cues the ear:brain uses to conjure up these perceptions of depth in particular, as the intensity of some of the depth cues is very low relative to the "musical information". The audio rig needs a very low noise floor so that the listener's ear:brain can detect enough of these cues to re-create the soundstage; if the noise floor is too high, the cues will be masked, rendering the soundstage erratic in its presentation (because music's intensity varies continually).
Obviously, if we detect depth from phase-shift cues, then the audio rig needs to maintain phase accuracy for this perception to work.
So, what may be a feasible solution - rather than building a machine that can re-create a soundstage and measure it - is to build a machine that can detect and measure the various spatial-cue "enabling" characteristics (e.g. noise floor, phase accuracy, left:right amplitude accuracy, etc.) and, based on whether or not these measurements fall within some yet-to-be-defined limits, give a mechanical equivalent of a thumbs-up or thumbs-down to the system's potential in this area.
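For what it's worth, such an "enabling characteristics" checker is easy to caricature in code. A hedged sketch (assuming NumPy; the function names, pass/fail limits, and simulated measurements are all my invention - the real limits are, as noted, yet to be defined):

```python
import numpy as np

def rms_db(x):
    """RMS level in dB relative to full scale (1.0)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

def check_rig(left, right, silent_left, silent_right):
    """Score two enabling characteristics against illustrative limits."""
    results = {}
    # Left:right amplitude accuracy: channel balance on identical test tones.
    results["balance_db"] = abs(rms_db(left) - rms_db(right))
    # Noise floor: level measured with no signal playing.
    results["noise_floor_db"] = max(rms_db(silent_left), rms_db(silent_right))
    # Arbitrary illustrative limits -- NOT established thresholds.
    passed = results["balance_db"] < 0.5 and results["noise_floor_db"] < -90
    return results, ("thumbs up" if passed else "thumbs down")

# Simulated measurement: a well-balanced 1 kHz tone, very quiet residual noise.
sr = 44100
t = np.arange(0, 0.1, 1 / sr)
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
rng = np.random.default_rng(1)
noise = 1e-5 * rng.standard_normal(t.size)  # roughly -100 dBFS noise floor

results, verdict = check_rig(tone, tone, noise, noise)
print(results, verdict)
```

Phase accuracy could be folded in the same way (e.g. by comparing inter-channel phase at several frequencies), but deciding where the limits sit is exactly the unsolved part.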
But I'm merely hypothesizing...
Dave