SkyNet: Your Friendly Touring Lighting Designer, Part II

We find ourselves, quite naturally, at a question – where does this leave humanity? The sorts of AI tech we discussed in our last column need not stop at lighting – the same sort of analysis / generation could fit into a theoretical framework for mixing music tracks, generating audio samples or even voiceovers in specific voices, or doing anything else within the realm of “content creation”. Splash in some advanced natural-language processing¹ and you’ve got yourself an entire show run by robots. Are we all to lose our jobs, destined to eke out an existence delivering pizza in the Metaverse and subsisting on chiseled SPAM? Where, in this techno-dystopian (optimism?) future, do we meatbags fit in?

Questions about what is to come are, of course, notoriously difficult to answer. Our current lack of personal jetpacks is a famous comedic example of an unfulfilled prediction, and Star Trek-style natural language processing exceeds our current capabilities by an order of magnitude. Any prediction about the future will be wrong, because it’s made from my armchair here in the present, and it’s impossible to know what’s next. We’re here to prognosticate though, so, extrapolating from current and historical trends, here is my best guess:

I believe that generative AI is, like all other technological help, a tool within our toolkit that we can (and should!) use to create beautiful things for our clients. Human creativity as a force of nature didn’t stop with waveform-based effects generators on lighting consoles², it was not stymied by Adobe Photoshop’s cool “AI erase objects” function, and it has not ceased to exist as thousands of artists throughout the eons have traded, stolen, swapped, been inspired by, and built upon the work of other human artists. Our brains have significant computational power, an ability to see a pattern, anticipate changes to it, and create new, unique patterns. Current AI systems lack the ability to think abstractly; in fact, they don’t have cognition as we understand it at all. Instead, we feed them absurd amounts of data, and they recognize patterns and synthesize additional ones in a similar, predictive vein. This point is crucial: they are good at remixing that which they have previous experience with, and they’re good at guessing what comes next, particularly where the dataset has a strong tendency to follow structure and rules, as languages do.

Yet it is the unexpected and unanticipated moments that compel us. Before the holidays I programmed a Christmas tour where one of the songs was slow and moody, with a subtle piano arpeggio in the background. The song was low-key, but I decided that a very fast dimmer chase was the right background look for it. It worked, with the arpeggio serving as the counterpoint of this otherwise slow song – and even I was a little surprised that I liked it. Would a generative system have come up with this? Perhaps.

But whether such a system would or would not have hit upon any specific idea is beside the point. Predictive / generative systems will produce, more or less, an interesting amalgamation of the sum total of their input data; humans will, for the foreseeable future, be required to separate the good ideas from the bad, to tweak and build upon them. The funniest description of what they do that I’ve heard was “fancy autocomplete” – and that is a bit more accurate than some of the shareholders would like to admit³. Tools like ChatGPT and MidJourney produce output that is a remix of what they’ve already seen, output that is often “not quite right” and requires tweaking or prompt engineering to be usable. ChatGPT required intensive training by humans to get to the state it is in now, and it continues to require training to improve the quality of its data. Similar training with other types of data – especially the unique sorts of datasets lighting works with – would require human sorting, grading, and intervention to get to a similarly advanced point. For the moment, such systems will create output that might be useful to programmers, but we’re not at the level of “double-click here to program your show”, and it’s not clear to me that the economic incentives in our industry will ever be strong enough to lead to the creation of such a system.
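The “fancy autocomplete” idea can be illustrated with a deliberately tiny sketch: a bigram model that counts which word follows which, then predicts the most common continuation. This is, of course, an enormous simplification of what the real systems do – the names and corpus here are invented purely for illustration – but the shape of the trick is the same: count patterns, then guess what comes next.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def autocomplete(following, word):
    """Predict the most frequently observed next word, or None if unknown."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

corpus = (
    "the light fades to blue the light fades to black "
    "the band plays and the light fades to blue"
)
model = train_bigrams(corpus)
print(autocomplete(model, "fades"))  # "to" – it always follows "fades"
print(autocomplete(model, "to"))     # "blue" – seen twice, beats "black"
```

Note what the model cannot do: it will never suggest a word it hasn’t seen, which is precisely the “remixing that which they have previous experience with” limitation described above.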

I could be wrong. Already, ChatGPT can create (nonfiction) copy that is nearly indistinguishable from a human writer’s, and this ability to mimic humans will – probably – only get better. Whether we will ever truly create a so-called “Strong AI”, a general intelligence capable of mimicking a human brain, is up for debate, and, well, when we create SkyNet, or Lt. Commander Data, or C-3PO, I doubt the first application we will put such an intelligence to will be programming automated lights. If ever we do create a Strong AI⁴ – and that is far from certain – the consequences will be significant, and losing our jobs as programmers might be the least of our worries. To be blunt: by the time AI is good enough to instantly, effortlessly program an entire good lighting show and run it, start to finish, with minimal to no human input, my friends, we might as well all retire, because by then the world will already have been taken over by The Machines⁵. Instead, let’s examine a slightly more likely scenario: a large touring act that wants to save money on an LD position.

Let’s imagine a hypothetical software suite. This suite has been trained on hundreds of programmed shows along with their music tracks – a huge undertaking to be sure, but not outside the realm of some not-too-distant future. The software requires a user to lay out the lights in 3D space in their correct positions and orientations, place elements on the stage in their proper 3D locations, size the rooms and scenic and show elements correctly, and so forth⁶. Once all this is accomplished, let us imagine that our hypothetical software listens to the band’s songs, and then there’s a big red button labeled “Auto-program”. In our example, let’s say the band’s drummer is tasked with feeding the system an audio line and being “in charge of” the lighting rig. The system generates cuelists based on the sound input – but, inevitably, some editing is required. There will be places where whatever the software guesses is not correct, and those will need to be fixed manually. The band might not like the colors the software chooses for every song, or it might have trouble with a tricky bit of rhythm or with picking the instrumentation out of a track. The band might play the track differently night to night, and now someone is tasked with inputting the changes. Timecode events might need tweaking manually, fade and delay times scooted forward or extended, etc. In short, such a system would still need a programmer, because the data-management responsibilities of that position are myriad, and generative systems like this will – again, for the foreseeable future – require a human to sign off on the programming decisions they come up with. Such prompting, editing, and approval themselves still require significant creativity.
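What the “Auto-program” button might do under the hood can be sketched in a few lines. Everything here is invented for illustration – the function names, the energy thresholds, the cue format – and a real system would be doing far more sophisticated audio analysis than reading a list of per-beat energy values. But the core loop, mapping features of the audio to programming decisions, might look something like this:

```python
# A deliberately simplified sketch of a hypothetical "Auto-program" feature:
# per-beat audio energy readings (0.0 to 1.0, which a real system would
# extract from the incoming audio line) get mapped to rough cues. All
# names and thresholds are invented, not any real console or API.

def auto_program(beat_energies, quiet=0.3, loud=0.7):
    """Turn per-beat energy readings into a rough, editable cuelist."""
    cues = []
    for beat, energy in enumerate(beat_energies):
        if energy < quiet:
            look = "slow color fade"
        elif energy < loud:
            look = "medium movement"
        else:
            look = "fast dimmer chase"
        cues.append({"beat": beat, "intensity": round(energy * 100), "look": look})
    return cues

# A quiet verse building into a loud chorus:
for cue in auto_program([0.2, 0.25, 0.5, 0.9]):
    print(cue)
```

Notice that every cue is just the machine’s guess from the numbers it was given – which is exactly why, as argued above, a human still has to review, tweak, and sign off on the result.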

In this case, what “generative programming tech” does is afford smaller-budget bands the opportunity to have a decent light show when they can’t afford a dedicated LD or programmer, or – a slightly less optimistic scenario – give a larger band the ability to hire a far less-skilled programmer, at a lower rate than might otherwise be needed, to “program” and data-manage the software. Are the results likely to be as good as a skilled lighting designer sitting behind the desk coming up with the looks? Difficult to see, the future is, but I lean toward “a human will probably do it better”.

Let’s also consider the worst-case scenario. Suppose there existed a software suite so advanced, so far beyond anything we’re considering here, that it could effortlessly and perfectly program an entire show with only minimal interaction from a person. What should we expect then?

Here, I think what we’d see is both a democratization and a simultaneous stratification of the lighting design world, with implications for everyone. Economically, any bloke⁷ who paid the likely exorbitant price for these tools could program awesome lights nearly effortlessly, and the open market would likely respond with an accompanying downward price pressure on lighting designers…if you’re not already established. Simultaneously, I believe we’d also see the really big-name designers still able to charge what they do and make a living, because their name and pedigree represent a premium product that comes with the imprimatur of an “established, human designer” that bands and artists of means will want. Not only is that a signifier of their own high status; they may also genuinely believe in a human je ne sais quoi and in ideals of artistic integrity which, for them, exclude AI and similar generative systems from artistic consideration on principle. In other words, a certain type of artist is always going to turn up their nose at a soulless machine doing an artist’s job, and refuse to work with one.

But this “strong AI” scenario is a tad far-fetched, in my opinion – at least for our lifetimes. Far more likely is that history repeats as it has with previous creative “aids”: AI and generative systems will continue to become more powerful, up to a point, until they plateau and become accepted parts of our toolkits as creative individuals. Examples of this include Adobe Photoshop’s Content-Aware Fill and Neural Filters, Auto-Tune and pitch correction in music production, and loops and remixes. Another instructive example is the sort of disruption – and lack thereof – happening now in the music industry.

Automation for generating music is nothing new; David Bowie was using a custom-built program called the Verbasizer in the 90s to give him inspiration for lyrics – it took literary material and remixed it into new combinations of words that could serve as source material or a starting point for songwriting. In 2016, Sony used software called Flow Machines to create a musical composition in the style of the Beatles, which was then handed to a professional composer and turned into a fully-realized song called “Daddy’s Car”. In the years since, a host of generative music programs have come into existence. Some of these are pretty good, and some can even churn out music that – for certain applications – doesn’t require much, if any, additional processing to be usable: corporate-esque music for YouTube videos or similar “background” tracks that nobody listens to anyway. That last bit is important – prestige projects like film and television have continued to use real human composers who continue to push the boundaries of their craft. Musical artists have continued to do their musical arting, and there do not appear to be signs of a slowdown. Stock music you download for $3.99 has its place, and so too will generative music, and so will live musicians, and DJs and artists. Will the proportions of who gets what slice of the pie change as the tech improves and the industry adapts? Of course they will; industries this complex and this reliant on ever-evolving technology come with above-average susceptibility to disruption, because that tech is not static. To attempt brevity: in the battle of musicians vs. robots, people are still going to opt to see musicians. The photograph did not obviate the painter, samplers and DAWs did not do away with human musicians, and 3D printing has not caused the death of sculpture.
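The cut-up technique behind Bowie’s lyric tool is simple enough to sketch in a few lines: pool the words from some source texts, shuffle them, and hand back a fragment for the human artist to react to. The function name and source lines below are invented for illustration; the point is that the machine supplies raw recombinations, and the human decides which ones spark something.

```python
import random

def cut_up(sources, n_words=8, seed=None):
    """Shuffle the words from several source texts into one new line -
    a toy version of the cut-up remixing idea described above."""
    rng = random.Random(seed)  # seed makes the shuffle repeatable
    pool = [word for text in sources for word in text.split()]
    rng.shuffle(pool)
    return " ".join(pool[:n_words])

lines = [
    "the stage is burning with electric light",
    "we drive through the night toward the sea",
]
print(cut_up(lines, seed=42))
```

Every word in the output came from the inputs – the tool invents nothing, it only recombines, which is why it served Bowie as a source of inspiration rather than a replacement lyricist.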

So what to do? I believe generative tech and AI represent an opportunity for designers and programmers, and that opportunity is to grab hold of it and make it a part of your toolbox as soon as you can. Embrace generative technology, embrace AI as an aid to creativity, and, in the future, embrace it as an aid to programming and designing. Automation frees us from the mundane and repetitive, and it should. Today we write macros to save keystrokes and encoder-twiddling; the promise of AI and generative tech is to streamline these processes even further. We as an industry should normalize the use of such systems in our workflows. Norms are powerful things; if it’s yet another part of the toolkit that a competent designer or programmer knows how to use to their advantage, knows how to use to save time for more creative things, perhaps this helps avoid a more destructive norm where the merch guy hooks up a PC to the lighting console every day and walks away, while we lighting designers have gone the way of the ship’s lamp trimmer.

In the meanwhile, I would welcome the creation of AI programs that can do some basic programming for me, freeing me up to be faster and more efficient and forcing me to be more creative and interesting in my looks. Is such a program coming? Who knows. But if and when it does, our creative selves will adapt, and thrive, just as we always have – because we must.

  1. Admittedly, at abilities that far exceed what we have currently.
  2. Brad Schiller correctly points out in The Automated Lighting Programmer’s Handbook that you should not be totally reliant on these.
  3. For a more in-depth discussion of how these systems work, see Transformer: A Novel Neural Network Architecture for Language Understanding by Jakob Uszkoreit, software engineer at Google, and Attention Is All You Need by Vaswani et al., published by Google Research; both are freely available online.
  4. For definitions of strong and weak AI, I recommend IBM’s explanation of the topic.
  5. There’s a fascinating argument about what art and artistry even mean in a world of push-button artworks, and while I think there’s a good answer that hinges on intent, that discussion is too long to fit here.
  6. “Garbage in, garbage out”, say all the smart computer science peeps.
  7. Or the troves of teenagers who will find ways to pirate it in like…a week.
