
Oh, My Paws and Whiskers, I’m Late!


One of the problems with digital audio is latency. Analog audio happens in real time, traveling down a wire from point A to point B at a healthy fraction of the speed of light (roughly 186,000 miles per second). That's pretty friggin' fast, even if you're in a Lamborghini Countach.
If you amplified a signal in New York and sent it down a really long wire to a loudspeaker in California you'd pretty much hear it in CA instantly (assuming that your source can overcome the resistance of 3,000 miles worth of copper). There's essentially no delay across wire with analog audio.


Analog to Digital Delays


In spite of what we appear to be hearing, digital audio is not happening in real time. It's close, but the fact remains that converting an analog signal to digital audio data requires time. How much? Typically, a couple of milliseconds. That's not much, but envision this: You are overdubbing a guitar solo into a digital audio program. All recorded audio must be converted to digital information and so must pass through an analog-to-digital converter (A/D). That takes a few milliseconds. Then the signal is routed to the computer's software (probably another few milliseconds). If you have a track in "record ready" and the track is passing input signal, that signal is also being converted from digital audio back into analog audio on its way out the monitor path to your ears. Total time elapsed: anywhere from 5 to 50 milliseconds. Fifty milliseconds is enough to throw off a player's timing: a note is struck on the guitar and, a short time later, the sound of that note is heard through the speakers. Net result: performing the overdub is very difficult.
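To put rough numbers on that chain, here's a back-of-the-napkin Python sketch. The buffer sizes and per-converter figure are illustrative assumptions, not specs for any particular interface:

```python
# Sketch of round-trip monitoring latency for a DAW overdub.
# Buffer sizes and the converter figure below are assumed values,
# not measurements of any particular audio interface.

def round_trip_ms(buffer_samples, sample_rate, converter_ms=1.5):
    """One input buffer + one output buffer, plus A/D and D/A conversion."""
    buffer_ms = buffer_samples / sample_rate * 1000
    return 2 * buffer_ms + 2 * converter_ms

for buf in (64, 256, 1024):
    print(f"{buf:>5} samples @ 48 kHz -> {round_trip_ms(buf, 48000):.1f} ms")
```

The point of the arithmetic: the buffer contributes on both the way in and the way out, which is why large buffers make overdub monitoring feel spongy even when each individual stage sounds "instant."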


DAWs have a multitude of ways to deal with this: Some provide "low latency monitoring," which helps alleviate the problem to a tolerable degree. Others offer a cue path that outputs an analog signal to the monitor path before it is recorded into the DAW. Of course, this means that the playback audio takes a slightly different path than the monitored audio. Sometimes the levels vary between input and output, making creative judgment difficult, and if you are monitoring the signal with an effect plug-in, you may hear it only on playback. At best, monitoring like this is clumsy and a royal pain in the arse. Part of the attraction of using an analog desk to monitor DAW outputs is that you can set up two channels for overdubbing: one for monitoring input, and the other for playback. Still very clumsy.


Now let's take that concept into the live arena. You are mixing on a digital console, perhaps with an interface box at the stage. Location of the interface doesn't really matter. What does matter is that the interface will have A/D converters for your inputs from the stage. Those inputs are mixed in your console to stereo. At some point, the stereo signal is converted D/A, whether on the way out of the mixer or after a drive processor downstream. Another conversion means more latency.


Time-aligned Latency


If all of your sources are on the stage, this latency is really not an issue, because all of the signals will have the same delay, and they will be time-aligned to each other. And in a live performance venue, you are far enough from the stage that the latency is way less significant than the delay caused by your distance from the stage. You know – drummer whacks snare drum, you hear drum a bit later. Take that system to an outdoor festival and you can have coffee between the time you see the snare hit and the time you hear it (because light travels so much faster than sound). In other words, when you're mixing front of house, even 15 milliseconds of latency is not going to change your life, as long as all of your channels are late by the same amount of time.
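For a sense of scale, a quick Python sketch comparing acoustic delay to console latency; the listening distances are made-up examples:

```python
# Rough comparison of acoustic delay vs. console latency at front of house.
# 343 m/s is the speed of sound at about 20 degrees C;
# the mix-position distances are invented examples.

SPEED_OF_SOUND = 343.0  # m/s

def acoustic_delay_ms(distance_m):
    """Time for sound to travel distance_m through air, in milliseconds."""
    return distance_m / SPEED_OF_SOUND * 1000

for spot, d in [("club FOH", 15), ("festival FOH", 80)]:
    print(f"{spot}: {d} m -> {acoustic_delay_ms(d):.0f} ms of acoustic delay")
# Next to ~230 ms of acoustic delay at 80 m, 15 ms of console
# latency barely registers.
```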


But what happens when the signals are not latent by the same time? Hold on tight and follow this. You have a drum kit on the stage with multiple microphones: kick, snare top, snare bottom, hat, etc. All of these mics are connected to a digital console and so are subject to A/D on the way into your mixing console. Now, you like this mixing console, but the compression on it doesn't blow up your skirt. You'd much rather use that Universal Audio 1176 you have in your rack for the snare top. So you use an analog output on the console's ‘house’ rack to route the snare to the UA 1176, then from the 1176 back to an analog input (a typical send/return scenario). This process adds one D/A (into the 1176) and one A/D (back to the console). These conversions take time, so now your snare top microphone channel is a few milliseconds late relative to the snare bottom mic. It's also probably phase-shifted relative to the overhead mics. This is clearly audible.
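A rough sketch of the damage, assuming a 2 ms converter round trip for the insert loop (a plausible figure, not a measurement of any particular converter):

```python
# How a small insert round trip skews one drum mic against the others.
# The 2 ms round-trip figure is an assumption for illustration.

def delay_samples(delay_ms, sample_rate=48000):
    """Convert a delay in milliseconds to samples at the given rate."""
    return delay_ms / 1000 * sample_rate

def phase_shift_deg(delay_ms, freq_hz):
    """Phase offset (degrees) that a fixed delay produces at one frequency."""
    return 360 * freq_hz * delay_ms / 1000

rt = 2.0  # ms: assumed D/A + A/D round trip for the outboard 1176
print(f"{rt} ms = {delay_samples(rt):.0f} samples at 48 kHz")
for f in (200, 1000, 5000):
    print(f"at {f} Hz: {phase_shift_deg(rt, f):.0f} degrees of phase shift")
```

Because the phase shift grows with frequency, the snare top combs differently against the snare bottom mic than against the overheads, which is why it reads as a tonal change rather than an echo.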


Channel Delay


Fortunately, there are ways to compensate for this problem. Many digital consoles provide "channel delay," which allows you to delay certain channels so that they ‘wait’ for others. In the above situation, one would need to apply channel delay to all of the channels except the snare top, allowing the snare top to ‘catch up’ to the other channels after its trip out of the system to an external processor.
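In code terms, the compensation looks something like this sketch; the 96-sample round trip (2 ms at 48 kHz) is an assumed figure, not a spec:

```python
# Sketch of "channel delay" compensation: every channel except the one
# with the insert loop is delayed so the late channel catches up.
# The 96-sample round trip is an assumed value for illustration.

def compensate(channel_delays, late_channel, round_trip_samples):
    """Add round_trip_samples of delay to every channel except late_channel."""
    return {ch: d + (0 if ch == late_channel else round_trip_samples)
            for ch, d in channel_delays.items()}

kit = {"kick": 0, "snare_top": 0, "snare_bottom": 0, "overheads": 0}
kit["snare_top"] += 96          # insert loop makes this channel late
aligned = compensate(kit, "snare_top", 96)
print(aligned)  # every channel now arrives 96 samples late, together
```

Note the trade-off baked into the math: nothing gets earlier, so the whole mix moves later by the length of the slowest path.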


In Avid's D-Show software, "delay compensation" is used to compensate for the lag created by certain plug-ins. As for dealing with this in the VENUE system, Robert Scovill, Avid's senior market specialist for live sound products, suggests the following:


"You can use the channel delay to get two (or more) inputs aligned by simply using the channel delay with the units set to ‘samples’ and with the ‘fine’ control engaged." (Double-click on the "Fine" button in the global modifier button set. It should stay lit green.)


"I generally do this by one of two methods," Scovill adds:


1. Route the internal noise generator to both channels. Set them to the same level and start adjusting delay on the early signal until you hear the comb filtering disappear. Or you can reverse polarity on one signal, adjust delay until the signal disappears, and then remove the polarity reverse.


2. Route the internal noise generator to both channels, pan one left and one right, and then compare and adjust for minimum phase shift in the phase display of Smaart. In my opinion, this is the most accurate way to do it. Make sure you have the channel strip EQs bypassed in order to accurately evaluate phase at the top end of the spectrum.


"It's important to note though that VENUE only automates the time alignment process for differences in output path lengths – not inputs – using the built-in Delay Compensation. But keep in mind, it lengthens your overall console throughput by the longest path created in the output stage of the console."
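Scovill's noise-alignment trick can be sketched offline in Python: feed both channels the same noise and find the lag that lines them up. On the console you'd do this by ear or with Smaart's phase display; here a brute-force correlation search stands in:

```python
# Offline sketch of the noise-alignment idea: the same noise on two
# channels, one delayed by an (assumed) 96-sample converter round trip.
# A brute-force search finds the lag at which the channels line up.
import random

random.seed(1)
noise = [random.uniform(-1, 1) for _ in range(2000)]
true_lag = 96  # samples: roughly 2 ms at 48 kHz (assumed figure)
late = [0.0] * true_lag + noise[:-true_lag]

def best_lag(ref, delayed, max_lag=200):
    """Return the lag (in samples) that maximizes correlation."""
    def corr(lag):
        return sum(ref[i] * delayed[i + lag] for i in range(len(ref) - max_lag))
    return max(range(max_lag + 1), key=corr)

print("delay the other channels by", best_lag(noise, late), "samples")
```

The polarity-reverse variant in method 1 is the same search done by ear: at the correct lag the inverted copy cancels the original, so you adjust for the deepest null instead of the strongest correlation.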


There you have it.


The Midas XL8 (which I had the pleasure of using last night) provides a similar feature, allowing one to delay all channels except those with inserts assigned. Each channel type, or "layer" – for example, input, aux, master or matrix – has its own parameter controlling delay compensation for that layer, enabling the aforementioned issue to be corrected.


As Robert mentioned, any delay compensation adds to overall system latency. For the front-of-house engineer, that change would be equivalent to moving the mix position back a few feet. Monitor engineers, however, may not be so lucky – channel delay could add enough latency to a sound that a musician would notice it in their mix (particularly if they are on ears). In that case, it might be better to stay "in the box" and avoid potential timing issues.
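To make "moving the mix position back a few feet" concrete: sound covers roughly 1.1 feet per millisecond, so added latency maps to distance like this:

```python
# Converting added system latency into an equivalent listening distance.
# Sound travels 343 m/s, which works out to about 1.13 ft per millisecond.

FT_PER_MS = 343 * 3.28084 / 1000  # ~1.125 ft of travel per ms

def latency_as_distance_ft(latency_ms):
    """Distance a listener would have to move back to hear the same delay."""
    return latency_ms * FT_PER_MS

for ms in (2, 5, 10):
    print(f"{ms} ms of added latency = {latency_as_distance_ft(ms):.1f} ft farther back")
```

At front of house that shift is invisible; on in-ear monitors, where the performer expects essentially zero distance, the same few milliseconds is exactly what a musician will notice.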