We automate lighting, ProPresenter, video cues, and other non-audio things in our services, so why not audio? The answer: why not, indeed? Mix automation in the recording studio has been standard practice since the mid-1970s, when studio track counts mushroomed from eight or 16 up to 24, 48, or more, with multitrack tape machines synchronized together. It’s not terribly difficult to ride gain on eight or even 16 channels, but when we get into multiple dozens, it starts to get beyond our capacities, what with having only two hands and all. With channel counts in our services steadily increasing, that’s becoming an issue. For most of us, the solution is to group all the drums on a single VCA (or two or three), gather all the vocals on another, and do the same with guitars, and so on. Just as in the days of eight-track studios, it’s not hard to handle eight faders. But imagine what wonderful mixes we could create if we magically had dozens of hands available to move faders, push mute buttons and turn pan knobs — among other things. As more and more systems come under automated control in worship, maybe it’s time to bring your audio into the fold.
Getting in Sync
Synchronization is a necessity if we want to make automation happen. We have to choose one master clock and synchronize everything else to it; there’s no way around it, since every change to the mix must land at an exact moment in time. A common choice for master clock is the digital audio workstation/sequencer used by the worship team — which, in this context, is almost universally Ableton Live. Other DAWs can be used, of course, and there are clever DMX-512 lighting control interfaces that include DAW plug-ins facilitating moderately sophisticated light programming. Time code and MIDI messages are transmitted to computers and other hardware that not only control lighting, but also trigger changes in ProPresenter or other apps that project song lyrics and other visuals. Pyrotechnics and other non-lighting cues can be fired as well. Lighting, video, and other visually oriented production elements are a bit outside my wheelhouse, though, so from here we’ll turn our focus to the really important stuff — sound.
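The article leaves the exact transport unspecified, but if MIDI beat clock is the sync signal, the arithmetic behind "exact moments in time" is simple: the MIDI spec fixes the clock at 24 pulses per quarter note, so tick timing falls straight out of the tempo. A minimal sketch (the function names are mine, not from any DAW or console API):

```python
# Timing math for MIDI beat clock sync (24 pulses per quarter note, per the
# MIDI 1.0 spec). Illustrative only; a real rig would send these ticks over a
# MIDI interface.

MIDI_CLOCK_PPQN = 24  # MIDI beat clock resolution

def clock_tick_interval(bpm: float) -> float:
    """Seconds between MIDI clock ticks at a given tempo."""
    return 60.0 / (bpm * MIDI_CLOCK_PPQN)

def ticks_until(beat: float) -> int:
    """Clock ticks from the start of the song to a given beat position."""
    return round(beat * MIDI_CLOCK_PPQN)

print(clock_tick_interval(120))  # at 120 BPM, roughly 0.0208 s per tick
print(ticks_until(16))           # beat 16 (start of bar 5 in 4/4) -> 384 ticks
```

Any cue list expressed in beats can be converted to tick counts this way, which is what lets every device agree on when "beat 16" actually arrives.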
There are many advantages to automating various technical aspects of church services. Probably the most obvious is the elimination of human error. I’ll be the first to concede that I’ve gotten distracted and failed to unmute the pastor’s headworn mic until the lighting guy tapped me on the shoulder. And how many times have the lyrics stayed on screen a few moments too long, leaving congregants bewildered about what to sing? Such cues can be automated so they’re never, ever missed. Technical failures still happen from time to time, but with enough repetition, the kinks can be worked out. And automation overcomes not only human error, but human limitation. As a keyboard player and synthesist, I fancy myself capable of rhythmically pushing buttons, turning knobs and moving faders in a precise, consistent way. But in truth, I could never pan an instrument as smoothly or as precisely as a computer, nor press buttons with the speed and rhythmic precision necessary to create stutter effects. A computer can. That opens the door to creative production that simply cannot be accomplished by humans alone.
Start with Mutes
Numerous aspects of the audio mix can be brought under automated control. Topping the list due to its straightforwardness and utility is the simple muting of channels. We already set up mute groups on our mixers in order to limit the output to only those signals that are currently necessary and pertinent to the service at any given moment, so having a computer ensure that every channel is muted and unmuted at the appropriate times is very welcome. More muted channels equals less ambient self-noise passing through to speakers, and eliminates the possibility of extraneous audio rubbish being heard by congregants (think guitar noodling, privileged speech caught by a lav mic, or police radio calls picked up by a poorly managed wireless receiver). I am also imagining muting reverb returns for a song segment during which the worship leader speaks — praying or reading Scripture, for instance. I like to think of the notion of automating mutes as a type of “live mix assist” for a human at the faders.
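How mutes get triggered varies by console; many accept MIDI Control Change messages, though the exact mapping is console-specific. As a hedged sketch of the "live mix assist" idea, here is how a cue list of beat-stamped mutes could be rendered into raw three-byte CC messages (the CC numbers below are assumptions for illustration; check your console's MIDI implementation chart for the real ones):

```python
# Hypothetical sketch: building raw MIDI Control Change messages to mute and
# unmute console channels. The CC-number-per-channel mapping is an assumption,
# not any particular console's documented behavior.

def mute_message(midi_channel: int, cc_number: int, mute: bool) -> bytes:
    """Three-byte MIDI CC message: status, controller, value (127 = mute, 0 = unmute)."""
    status = 0xB0 | (midi_channel & 0x0F)  # 0xB0 = Control Change, low nibble = channel
    return bytes([status, cc_number & 0x7F, 127 if mute else 0])

# Example cue list: (beat, cc_number, mute). Here CC 20 is assumed to mute the
# reverb return, so it drops out while the worship leader prays at beat 32 and
# comes back when the band does at beat 48.
cue_list = [
    (32.0, 20, True),
    (48.0, 20, False),
]

for beat, cc, mute in cue_list:
    print(f"beat {beat}: send {mute_message(0, cc, mute).hex()}")
```

The sequencer fires each message at its beat position; the human at the faders keeps mixing, with the routine mute housekeeping handled in the background.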
Another relatively simple type of automation that can bring substantial benefit is toggling effects on and off and/or making program changes. As a studio mixer, I like to introduce processing that helps a song’s choruses to pop and stand out as compared with the verses — I use widening plug-ins, or introduce parallel saturation, among other things, to create additional excitement in the choruses. This could easily be accomplished with automation — enabling and disabling effects or unmuting and muting parallel channels at precise, pre-programmed times. Similarly, the changing of effects presets can easily be accomplished by automation, as can the control of guitar pedal boards, and keyboard patches can be changed on the fly. Even higher degrees of sophistication are available in terms of effects — in some cases, effect parameters like delay feedback levels can also be changed in real time.
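Both of these map naturally onto standard MIDI messages: a Control Change for an on/off switch, a Program Change for a preset recall. A minimal byte-level sketch (which CC number toggles which effect, and which program number recalls which preset, are assumptions about your particular gear):

```python
# Hypothetical sketch: MIDI messages for effect toggles and preset changes.
# The specific CC and program assignments are assumptions; real gear documents
# its own mapping in a MIDI implementation chart.

def effect_toggle(midi_channel: int, cc_number: int, enabled: bool) -> bytes:
    """Control Change message: value 127 switches the effect in, 0 bypasses it."""
    return bytes([0xB0 | (midi_channel & 0x0F), cc_number & 0x7F, 127 if enabled else 0])

def program_change(midi_channel: int, program: int) -> bytes:
    """Two-byte Program Change message: recall a preset (a delay patch,
    a pedalboard scene, a keyboard sound, and so on)."""
    return bytes([0xC0 | (midi_channel & 0x0F), program & 0x7F])

# Chorus hits: engage a (hypothetical) widener on the BGVs and recall a bigger
# reverb preset on the effects unit.
print(effect_toggle(1, 80, True).hex())
print(program_change(2, 12).hex())
```

The same two message types cover most of the on/off-style automation described here, which is part of why it is such an approachable starting point.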
On the Level(s)
Thus far, the parameters we’ve talked about automating are fairly simple — largely turning things on and off. For those who are neither faint of heart nor afraid to venture where others fear to tread, there is the prospect of actually adjusting levels via automation. I portray this level of sophistication as a more difficult and challenging prospect, partly due to the amount of work necessary to make it happen, and partly due to the inherent perils of turning a computer loose with your mix. We’ve all heard (or lived) the horror stories of early live consoles with snapshot moving-fader automation going haywire and opening every channel wide to create the most horrific feedback loop in the history of mankind. Great caution must be exercised to avoid such disasters, and a large part of that caution is a healthy dose of humility — an acknowledgement that many (if not most) calamities blamed on computers controlling audio levels are really self-inflicted human error. But extensive testing and trial-and-error can ultimately build the confidence to give such automation a whirl. Panning an instrument or vocal to an odd spot in the stereo image is nowhere near the disaster of suddenly and inexplicably cranking its level by 20 dB, so putting panning under automated control is a sensible baby step en route to higher degrees of sophistication. And rhythmic panning can be a pretty cool effect!
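As an illustration of rhythmic panning, the smooth, repeatable motion a computer offers can be expressed as a beat-synced sine LFO mapped onto MIDI CC 10 (pan), where 0 is hard left, 64 is center, and 127 is hard right. This is a sketch of the math only, not any console's actual API:

```python
import math

def pan_cc_value(beat: float, rate_beats: float = 1.0, depth: float = 1.0) -> int:
    """Beat-synced sine LFO mapped to MIDI CC 10 pan values.
    0 = hard left, 64 = center, 127 = hard right. `rate_beats` is the LFO
    period in beats; `depth` (0..1) scales how far from center the pan swings."""
    lfo = math.sin(2.0 * math.pi * beat / rate_beats) * depth  # -1.0 .. +1.0
    return max(0, min(127, round(64 + lfo * 63)))

# One full left-right sweep per beat: center, hard right, center, hard left.
for b in (0.0, 0.25, 0.5, 0.75):
    print(b, pan_cc_value(b))
```

Because the motion is computed rather than performed, it stays locked to tempo for the whole song, which is exactly the kind of precision a human at a pan knob cannot sustain.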
If automated panning works out well, then perhaps it’s time to move on to automating channel levels themselves. An obvious candidate would be a momentary boost in the level of a lead guitar for a solo. Or a smoothly building crescendo over the course of a worship song’s bridge. Maybe we want the BGVs a little wetter during the biggest parts of the song, so we push up the return faders of the reverb assigned to them. If we eventually get really bold, we can even put the master fader(s) under automated control and turn up the entire mix a dB or two during the chorus, or pull down the VCA carrying all the non-vocal signals while the worship leader prays during the breakdown of a song (dry, since we’ve muted the reverb returns, remember?).
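A crescendo like the bridge build described above is just a ramp in dB plotted against beat position. A minimal sketch, assuming the console exposes fader level in dB (the function and the cue values are mine, purely for illustration):

```python
def crescendo_db(beat: float, start_beat: float, end_beat: float,
                 start_db: float, end_db: float) -> float:
    """Fader level in dB for a linear crescendo between two beat positions.
    Holds the start level before the ramp begins and the end level after it."""
    if beat <= start_beat:
        return start_db
    if beat >= end_beat:
        return end_db
    frac = (beat - start_beat) / (end_beat - start_beat)
    return start_db + frac * (end_db - start_db)

# Bridge build: BGV reverb return rises from -12 dB to -4 dB across beats 96-128.
for b in (90, 96, 112, 128, 130):
    print(b, crescendo_db(b, 96, 128, -12.0, -4.0))
```

Sampling this function on every clock tick and sending the result to the fader yields a build far smoother than any hand could ride, and it lands identically every week.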
There’s no question that automation will play a larger and larger role in worship audio. It can ease the struggles created by a shortage of competent volunteers by letting a computer handle the routine cues instead. This isn’t an option for every church, but for those in a position to consider it, I strongly recommend doing so.
John McJunkin is the chief engineer and staff producer in the studio at Grand Canyon University.