jimification
pfm Member
I notice that the concept of "accurate" is frequently used as a kind of "Holy Grail" term in the exotic world of HiFi. As someone who has done a bit of playing, recording and mixing, I find this notion of "accuracy" as a fixed target in music reproduction very odd! I'll try to explain why by following an instrument through the recording chain....
If we were standing in a "live room" (the space where the guitar and amp are recorded) listening to an electric guitar, we'd hear the full range of tone from the guitar amp / speaker. We could consider the sound to be "accurate" at this point, i.e. what it sounds like to a listener "in the room".
The recording engineer typically captures this sound by placing a microphone in front of one of the speaker cones on the amp / cab. Guitar speaker cones are quite large (usually 12") and there is a VERY wide tonal palette available depending on exactly where the mic(s) are placed - from thick and dull at the edge of the cone to thin and piercing at the cap (you can even hear this if you ever stand in front of a guitar speaker and move your head around a bit - they are VERY "beamy"). Sometimes multiple mics are placed to get a blend of sounds (trickier to do as it can induce phase issues), or a "room mic" is placed further away to capture some room ambience into the overall sound.
If we go to the control room and listen back to the recorded guitar performance, what we are now hearing is a heavily filtered interpretation of the guitar amp sound, based on the mic type(s) and on where the engineer decided to place them on the speaker cone. In fact, even at this stage, guitarists often complain that this doesn't sound "accurate" to them, because what they are used to hearing is the "amp in the room" sound, not the "recorded" sound, which can be quite different.
Once the tracking (recording of the different instruments) is done, the mixing engineer will adjust that recorded guitar track to sit better in the mix. He might apply a high-pass filter to allow the bass guitar some breathing space, he might apply some panning and more EQ to make room for vocals or keyboards, and he might apply acoustic effects (reverb, delay, phasing, flanging, chorus etc.). He might ask the guitar player to record the same guitar part multiple times with the mics in different positions (double / quad tracking) to create an overall sound that benefits the track. The aim is for the band as a whole to occupy the full range, but for each instrument to occupy only a smaller space in a "tidy" way within that. If you were capturing a solo instrument you might want it to cover a much broader range to present more of a full spectrum of sound, but in the context of a band, the sound of each instrument is extensively changed and tailored to fit better into the whole.
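To make the point concrete, here's a rough sketch (in Python, with made-up parameter values - no mixing engineer works from a script like this, and the cutoff frequency and pan position are purely illustrative) of two of the mix moves described above: a high-pass filter to clear space under the bass, and a constant-power pan to place the guitar in the stereo field.

```python
# Illustrative only: a high-pass filter and a constant-power pan,
# two of the many ways a mix engineer reshapes a recorded track.
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 44_100  # assumed session sample rate (Hz)

def high_pass(mono_track, cutoff_hz=100.0):
    """Attenuate content below cutoff_hz, e.g. to leave room for the bass."""
    sos = butter(4, cutoff_hz, btype="highpass", fs=SAMPLE_RATE, output="sos")
    return sosfilt(sos, mono_track)

def pan(mono_track, position=0.0):
    """Constant-power pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right."""
    angle = (position + 1.0) * np.pi / 4.0
    left = np.cos(angle) * mono_track
    right = np.sin(angle) * mono_track
    return np.stack([left, right])

# One second of a 50 Hz tone stands in for low-end rumble on the
# guitar track; the 100 Hz high-pass removes most of it, and the
# pan nudges the result slightly left of centre.
t = np.linspace(0.0, 1.0, SAMPLE_RATE, endpoint=False)
guitar = np.sin(2.0 * np.pi * 50.0 * t)
filtered = high_pass(guitar)
stereo = pan(filtered, position=-0.3)
```

Even these two trivial operations already make the "instrument" measurably different from what was captured, which is the whole point: the mix is sculpted to serve the track, not to preserve the source.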
After the tracks are mixed, the mastering engineer goes to work, changing the sound again to make the album as a whole sound more cohesive and as good as possible on its final formats. Perhaps we could consider this point in the chain to be "accurate"? Probably not for the instruments themselves, due to the extensive processing mentioned above. Even for the track as a whole, the mastering engineer is likely not treating what he hears on his console, through his speakers, with his ears, as the final arbiter. He's aiming for it to come to fruition at its best across a range of formats and a wide range of playback equipment, from hi-fis to TVs.
So I would suggest that, for many recordings, the audio pipeline is aimed at making an end product that sounds "good", and any "accuracy" is discarded very early on in the process. There are, of course, some recordings that aim at accuracy, but even with something as apparently straightforward as a solo acoustic piano, that's extremely difficult to achieve, and due to the recording process it is still, at best, an interpretation.