The recent crash of a German airliner — a disaster brought about, apparently, by the willful act of one of its pilots — triggered a predictable discussion about the role that automation might play in avoiding such catastrophes in the future. But it also underscores just how much robotic automation — i.e., machines that can adaptively reconfigure their actions to changing circumstances — we already have in our daily lives.
Staying with the aviation theme for a minute, most passengers don’t realize how many commercial airliner landings are fully or partially automated, with lateral localizer and vertical glide slope guidance broadcast by an airport’s instrument landing system (ILS) and fed via RF directly into the cockpit’s autopilot. Even fewer of us remember when cockpits had three pilots, the third being a flight engineer who looked after engines, hydraulics and other systems. That job description went the way of the fireman on locomotives after diesel-powered trains became the norm (though not without lots of yelling from those firemen as the caboose pulled away), as software took over a lot of the systems-monitoring tasks. That train that takes you between terminals at the airport? Driverless, totally automated. Cameras that track your movements at airports and elsewhere are programmed to look autonomously for certain behaviors, lock onto subjects that exhibit them, and simultaneously alert a human in security as to what’s going on.
Will the flight deck of a passenger airliner ever become fully automated, filling the space normally reserved for two highly trained (though potentially erratic) meat popsicles with robotic systems featuring 15 layers of redundancy? Probably not in our lifetimes, if only because getting passengers to step into a sealed aluminum tube and be lifted six miles above the ground, controlled only by what looks like a large laptop, is going to be a tough sell. But before that time is up we may see some software sitting in another seat we thought would always be reserved for carbon-based life forms: the FOH mixer’s chair.
Remote Control
We already have a substantial amount of automation available on a digital audio mix console, such as snapshot recall of critical parameters that lets the mixer set up shows and individual songs at the press of a button. But both researchers and venture capital types have been making noises about taking that kind of technology to the next step, which would involve letting an algorithm do the actual mixing of a live multichannel music performance.
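For the non-programmers in the room, snapshot automation is less exotic than it sounds. Here’s a minimal Python sketch of what storing and recalling a snapshot amounts to under the hood; every class, name and parameter here is invented for illustration, not taken from any actual console:

```python
from dataclasses import dataclass, field

@dataclass
class Channel:
    fader_db: float = 0.0   # fader position in dB
    mute: bool = False
    pan: float = 0.0        # -1.0 (hard left) to +1.0 (hard right)

@dataclass
class Console:
    channels: list = field(default_factory=lambda: [Channel() for _ in range(32)])
    snapshots: dict = field(default_factory=dict)

    def store_snapshot(self, name):
        # Capture every channel's current settings under a name.
        self.snapshots[name] = [(ch.fader_db, ch.mute, ch.pan) for ch in self.channels]

    def recall_snapshot(self, name):
        # Restore the stored settings; the "press of a button" part.
        for ch, (fader_db, mute, pan) in zip(self.channels, self.snapshots[name]):
            ch.fader_db, ch.mute, ch.pan = fader_db, mute, pan
```

The point is that snapshot automation just replays static states; it doesn’t listen. The research that follows is about software that does.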
The Center for Digital Music (C4DM), at Queen Mary University of London, is home to a few computer scientists who have been pursuing just that for the past several years. Three researchers there, Enrique Perez Gonzalez, Stuart Mansbridge and Joshua D. Reiss, developed a series of software-based automatic mixing tools that can handle tasks like panning, spectral analysis, feedback prevention and, ultimately, automatic level control (check out a few videos of their algorithms in action at http://bit.ly/1KDeY4W).
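For a flavor of what “automatic level control” means in practice, here’s a simplified sketch of a cross-adaptive gain algorithm: nudge every channel toward a common loudness so nothing gets buried. This is an illustration of the general idea, not the C4DM code; the function name and parameters are invented, and real systems use perceptual loudness models rather than plain RMS:

```python
import numpy as np

def auto_level_gains(channels, target_db=None, max_step_db=1.0):
    """Cross-adaptive level control over one analysis frame.
    `channels` is a list of 1-D sample arrays (one per input).
    Returns a per-channel gain adjustment in dB."""
    rms_db = np.array([20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)
                       for x in channels])
    if target_db is None:
        target_db = rms_db.mean()  # aim every channel at the frame's average level
    # Limit how far gains move per frame so the mix doesn't pump.
    return np.clip(target_db - rms_db, -max_step_db, max_step_db)
```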
One track of their research is “rule-based” mixing, in which certain actions are taken automatically in reaction to specific events: hit a certain SPL point, reduce the output X dB. (Some installed sound systems already use features like this in rack-mounted automixers found in corporate boardrooms and schools.) But they’re also pursuing subjective mixing, which is what the guy over there eating a sandwich does, using algorithms trained on existing mixes that serve as machine-learning tutorials. “The target mixing methods rely on output-feature similarity to the reference features of the target mix,” they write, in Sheldon-esque academese, on the Center’s web page. “It is the current belief… that the use of expert training data can be used to increase the convergence rate of the system.” In other words, the better the mixes used to “train” the algorithm, the better the mixes it will give you back, which in turn provide even better training data.
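To make the distinction concrete, here’s a toy Python sketch of both ideas. Every threshold, band count and feature choice in it is invented for illustration; this is not the researchers’ code:

```python
import numpy as np

# Rule-based reflex: hit a certain SPL point, reduce output X dB.
def spl_rule(measured_spl_db, limit_db=95.0, reduce_db=3.0):
    return -reduce_db if measured_spl_db > limit_db else 0.0

# Target mixing, per the quote above: score a candidate mix by how close its
# features sit to those of a reference ("target") mix. Band energies stand in
# here for whatever features a real system would actually measure.
def feature_distance(mix, reference, n_bands=8):
    def band_energies(x):
        spectrum = np.abs(np.fft.rfft(x)) ** 2
        return np.array([band.sum() for band in np.array_split(spectrum, n_bands)])
    return np.linalg.norm(band_energies(mix) - band_energies(reference))
```

A rule fires the same way every time; the target method is an optimization loop, adjusting parameters until that distance stops shrinking, which is where the training data comes in.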
While this work is intended in part to “ease the mixing task to the audio engineer,” its ultimate goal, they state, is somewhat more existential: “…pursuing the knowledge required to develop automatic mixtures comparable in quality to those performed by professional human mixing console operators.”
If this seems a bit esoteric, be advised that at least one commercial venture has already come out of it: MixGenius, which offers algorithm-guided online audio mastering services, lists C4DM alumnus Stuart Mansbridge as its CTO. Meanwhile, at the most recent SXSW conclave, Larry Marcus, whose Walden Venture Capital firm lists music ventures including Pandora and SoundHound on its website, reportedly proclaimed at one panel event, “Algorithms can do a better job at live mixing than most people.”
For now, they don’t, and it’s questionable whether they could ever do everything necessary to create an aesthetically pleasing mix under the same variety of unpredictable conditions and circumstances that FOH and monitor mixers have always handled. But what is reasonably certain is that:
A) The enabling technologies are already here and will incentivize people to take a shot at it, such as the auto-calibration algorithms that a growing number of speaker manufacturers are integrating into their monitors, which can sense different acoustic environments and adapt to them (a simplified sketch of the idea follows this list), and
B) The increasingly corporatized touring/event market has substantial economic incentives to look for ways to apply automated systems to its investments. If you’re 50 years old, this is likely not going to significantly affect your career. If you’re 25, you need to think about this.
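As for the auto-calibration mentioned in (A), here’s a simplified sketch of the core move: measure the speaker’s per-band response in the room, then compute capped corrective EQ gains toward a flat target. The band layout, limits and names are all invented for illustration, and real products are far more cautious, especially about boosting nulls:

```python
import numpy as np

def room_correction_eq(measured_band_db, target_band_db=0.0,
                       max_boost_db=6.0, max_cut_db=12.0):
    """Given a speaker's measured per-band in-room response (dB relative to
    its reference response), return corrective EQ gains that push each band
    toward the target, capped so the correction stays sane."""
    correction = target_band_db - np.asarray(measured_band_db, dtype=float)
    return np.clip(correction, -max_cut_db, max_boost_db)

# Example: a room with a bass buildup and a midrange dip (dB per band, low to high).
measured = [+5.0, +2.0, 0.0, -4.0, -1.0, 0.0]
print(room_correction_eq(measured))  # -> [-5. -2.  0.  4.  1.  0.]
```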
The best approach to creeping automation is sort of what you do anyway, which is to stay abreast of technology developments. Most tend to do that (automatically, you might say) within the live-sound silo. That’s what needs to change. The new forces of automation are coming not from the pro audio world but from the IT universe, which is where most of the AV business is ultimately headed. You don’t need a degree in robotics to follow some of the key trends developing around IT and automation, such as artificial intelligence and machine learning. Aside from being pretty interesting in and of themselves, some understanding of what they mean will help you figure out how you’re going to fit into a changing landscape.
Secondly, as always, follow the money. If it’s not Larry Marcus, it’ll be someone like him who will whisper in the ear of someone at Live Nation or AEG Live that if some 2025 version of Britney is doing the same show over and over again every night in the same venue, maybe we can get a Roomba to mix it and save a few bucks. As is always the case in the brave new world of man vs. machine, you’ll need to figure out your value proposition. Being able to program a Roomba may certainly help.