La Ruta

Building a Remote Production Desk with TouchDesigner

In Isaac Gómez's La Ruta, we are told the heartbreaking story of a border town, a bus route and the women of Ciudad Juárez. Inspired by real testimonies from women affected by the ongoing femicides along the border, La Ruta weaves together beautiful storytelling and music in a celebration of the resilience of Mexican women in the wake of tremendous loss. Read more about this stage play and its production here.

I was brought into the production of La Ruta after the pivot to a remote production was decided. The media designer’s role in this play, previously traditional media creation and stage projection, quickly grew into a massive, tangled job of systems engineering, media design, direction, and audio engineering. John Erickson, the media designer, brought me in as his assistant. In that role, I took on the technical side of the performance, and early on we decided that our remote cueing/compositing system would primarily live in TouchDesigner.

Our baseline goal for La Ruta was a live-streamed table read. You can imagine this as a Zoom stream of the actors rehearsing their parts from their individual homes, with less emphasis on a polished performance. As the production began to develop, the idea evolved into a Zoom/Skype call between all the actors with customizable window placements, dynamic media backgrounds, and programmable cues. From this point, we drew up a list of technical challenges: working with remote performers in multiple locations, compositing and arranging their streams in real time, giving John and our operator a familiar cueing system, and routing audio individually for all the actors. Through the planning process, John’s design choices influenced our technical needs, and my technical research influenced the end design.

Our technical objective, as it became more focused, was to ingest and manipulate nine different live feeds from actors in 2D space and to recall each unique layout based on cues sent from a separate computer running QLab. We also wanted the actor feeds to have soft, blurred edges rather than sharp rectangular windows. Finally, the media designer (John) wanted to be able to composite the actor feeds on top of content sent from QLab, and have that final video and audio be streamed to Vimeo.

Here’s a sneak peek of what we would end up designing with this in mind:

So, with the goal outlined, let’s take a look at the final system diagram:

This is a little dense, so here’s the software system flow. The actor feeds are individually sent to the main PC via OBS.Ninja.

The OBS.Ninja browser sources are arranged in a 3×3 grid like so.

OBS also receives the audio from the audio desk. A soundboard operator controls OBS with an Akai APC MKII.

Video and audio are routed to TouchDesigner on the same machine via NDI.

The entire TouchDesigner network looks like this:

Again, this is a bit dense, so I’ll break it down by section. On the left we have all of the controls that deal with the actor input feeds. The actor feeds come in through the NDI In TOP from OBS. We found it useful to have a test feed for running tech in case the actors were not on camera. The cache is there to sync audio and video. The video player pulled pre-recorded cuts of the actors out of a folder for certain parts of the performance. Switching between these sources was done programmatically based on the cue.
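That source switching can be as simple as driving a Switch TOP from a cue script. Here’s a minimal sketch; the operator name switch_src and the input order are illustrative, not the names from our actual network:

```python
# Hypothetical wiring: a Switch TOP with the live NDI feed on input 0,
# the test feed on input 1, and the pre-recorded video player on input 2.
SOURCES = {'live': 0, 'test': 1, 'playback': 2}

def set_actor_source(name):
    # Change which source feeds the rest of the actor chain
    op('switch_src').par.index = SOURCES[name]

# Called from a cue script, e.g. when a pre-recorded scene starts:
# set_actor_source('playback')
```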

The actor feed is then sent to the two cue decks (Deck A and Deck B) which are programmatically loaded and called based on cue. These bases are clones of base_fx_Storage.

Let’s step into base_fx_Storage. In here, the composite feed is cropped into a 3×3 grid, and each feed is sent to an effects base. The feeds are then composited back together after effects are applied.
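Each effects base only needs to know which cell of the 3×3 grid belongs to its actor. Here’s a sketch of that crop math, assuming fractional crop units on a Crop TOP (operator names and paths are illustrative):

```python
def set_crop_for_cell(crop_top, index):
    """Point a Crop TOP at cell `index` (0-8) of the 3x3 actor grid."""
    col = index % 3    # 0, 1, 2 from left to right
    row = index // 3   # 0, 1, 2 from top to bottom

    # Crop TOP fractions run 0-1, with 0 at the left/bottom of the image
    crop_top.par.cropleft = col / 3
    crop_top.par.cropright = (col + 1) / 3
    crop_top.par.croptop = 1 - row / 3
    crop_top.par.cropbottom = 1 - (row + 1) / 3

# e.g. the crop inside the effects base for the middle actor of the grid:
# set_crop_for_cell(op('base_fx5/crop1'), 4)
```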

In each effects base, we can control each actor feed’s crop, soft edges, monochrome, level effects, size, and placement on a 16:9 screen.
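These controls live as custom parameters on each effects base, which is what makes them easy to store and recall later. A rough sketch of how such a parameter page could be built, purely as a one-time setup script (the parameter names here are illustrative, not our exact set):

```python
def add_feed_pars(fx):
    # Build a custom parameter page on an effects base
    page = fx.appendCustomPage('Feed')
    page.appendXY('Position', label='Position')      # placement on the 16:9 canvas
    page.appendFloat('Size', label='Size')           # scale of the actor window
    page.appendFloat('Softedge', label='Soft Edge')  # blurred border amount
    page.appendToggle('Mono', label='Monochrome')
    page.appendFloat('Level', label='Level')
```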

So how are these settings stored and recalled? This system takes advantage of the internal storage of a TouchDesigner component. With a bit of Python, we are able to store and recall all of the custom parameters of each actor feed. This part of the project was directly sourced from two of Matthew Ragan’s wonderful tutorials: Case Study | Custom Parameters and Cues and Presets and cue building.
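In the spirit of those tutorials, storing a cue boils down to walking the effects bases, reading their custom parameters into a dictionary, and handing that to the component’s store() method. A minimal sketch, with hypothetical operator names:

```python
def store_preset(preset_name):
    """Capture every actor feed's custom parameters into component storage."""
    storage = op('base_fx_Storage')
    preset = {}

    # base_fx1 ... base_fx9: one effects base per actor (hypothetical names)
    for i in range(1, 10):
        fx = storage.op('base_fx{}'.format(i))
        preset[fx.name] = {p.name: p.eval() for p in fx.customPars}

    # store() writes into the component's internal storage dictionary,
    # which is what fetch() reads back later when a cue is recalled
    storage.store(preset_name, preset)
```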

Now for the actual cueing system. The QLab MacBook controls all our cueing. On each cue, QLab sends an OSC string over the network to the TouchDesigner computer. Here’s a sample OSC string. The media designer and I decided on a uniform cue format: "/Q/" + cue number + "/" + transition time.

In TouchDesigner, we would interpret this string with the code below. This code splits the message into its relevant parts. A message such as /Q/10/2 is interpreted like this: msg_source is the source (“Q”), preset_name is the cue number (“10”), and trans_time is the transition time that the media designer would designate, in seconds (“2”).
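A minimal reconstruction of that parsing, assuming the cue arrives on an OSC In DAT (the callback signature is TouchDesigner’s; switch_deck is a placeholder for the deck-changing step described next):

```python
# Callbacks DAT attached to the OSC In DAT that receives cues from QLab.
def onReceiveOSC(dat, rowIndex, message, bytes, timeStamp, address, args, peer):
    # An address such as /Q/10/2 splits into ['Q', '10', '2']
    parts = address.strip('/').split('/')

    msg_source = parts[0]           # "Q" marks a cue coming from QLab
    preset_name = parts[1]          # the cue number, e.g. "10"
    trans_time = float(parts[2])    # transition time in seconds, e.g. 2.0

    if msg_source == 'Q':
        # Hand off to the deck-switching logic (sketched below)
        switch_deck(preset_name, trans_time)
    return
```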

Next, we want to load the next cue and change decks based on which deck we’re currently in.
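The deck change itself amounts to loading the incoming cue into whichever deck is currently hidden and then crossfading to it. A sketch, assuming Deck A and Deck B feed a Cross TOP whose index is smoothed by a Constant CHOP / Filter CHOP pair (all names here are illustrative):

```python
def switch_deck(preset_name, trans_time):
    # Which deck is live right now? 0 = Deck A, 1 = Deck B (hypothetical CHOP)
    current = round(op('target_index').par.value0.eval())
    next_deck = 1 - current

    # Load the incoming cue into the deck that is currently hidden
    target = op('deckB') if next_deck else op('deckA')
    load_preset(preset_name, target)

    # Crossfade to the freshly loaded deck over the cue's transition time
    op('smooth_index').par.width = trans_time   # Filter CHOP easing time
    op('target_index').par.value0 = next_deck   # drives the Cross TOP index
```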

In load_preset, we fetch the stored preset from the storage base’s internal memory and load each actor’s saved parameters into the target deck.
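Continuing the sketch above, that could look something like this (again with illustrative names):

```python
def load_preset(preset_name, deck):
    """Fetch a stored cue from base_fx_Storage and apply it to one deck."""
    storage = op('base_fx_Storage')
    preset = storage.fetch(preset_name, None)
    if preset is None:
        debug('No stored preset named', preset_name)
        return

    # Each entry maps an effects base name to its saved custom parameter values
    for fx_name, par_values in preset.items():
        fx = deck.op(fx_name)
        for par_name, value in par_values.items():
            setattr(fx.par, par_name, value)
```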

Once the new deck has been loaded and crossfaded appropriately, we can composite the media video feed from the QLab computer onto this actor feed setup. We have two feeds below: a live feed (comp2) and a tech feed (comp3).

The media designer and I could adjust and compare cues without worrying about disrupting the final feed during tech rehearsals. Below, you can see the screen I had set up to show the final feed, the tech/storage feed, the audio signal, a list for loading cues into the storage feed, and a settings panel for adjusting the individual actor feeds.

A few custom cues had videos that played on top of the actor feeds, so another video player was added. The final products are sent to a director/stage manager monitor and a tech monitor, as well as the streaming computer via NDI.

The director’s feed was set up on a TV a couple of yards away from the TouchDesigner computer. Because of the latency added by each step of the process, our stage manager needed to call tech cues based on the final product, but also needed to call staging cues based on the raw actor feeds, a difference of about 5 to 15 seconds depending on the internet that day. Below, the stage manager Skyler is viewing the final feed on the left and the OBS screen on the right.

Finally, on the streaming computer, we had an OBS window that accepted the TouchDesigner feed via NDI and streamed that feed to Vimeo.

And, voila! We had a fully remote show.

Check out the TouchDesigner file here.