Visualising Voices: Connecting TouchDesigner With Ableton Live

Explore how to bridge the gap between sound and sight using the powerful connection of Ableton Live and TouchDesigner. From visual processing to MIDI mapping, we break down a custom performance setup where harmonised vocals drive immersive, responsive visuals.

Immersive and responsive visuals can add a real sense of intrigue and depth to a live performance. There are a couple of solutions you could explore, but few integrate with Ableton as well as TouchDesigner.

TouchDesigner is a visual, node-based development platform. Think of it a little like Max, where each node has a specific function, and data flows through connected nodes. Unlike Max, though, it handles a vast array of different data types, and you’ll most often find yourself using it to create something visual. You can download TouchDesigner right now, absolutely free, though you’ll be limited to a maximum output resolution of 1280x1280. Need more? You’ll need a license.

Let’s step through the entire process using one particular performance as our guide. We’ll start with the Ableton setup, then examine the TouchDesigner network, and finally show how the two connect and interact in a live context.

In this performance, the visuals are partly driven by the audio, while the audio is in turn shaped in real time by the visuals. Below, we’ll break down exactly how it was all put together.

Ableton

The piece in the video above features up to seven voices: the live vocal plus six harmonised voices, each managed by a separate instance of Ableton’s Auto Shift. Though Auto Shift supports polyphony, using individual instances allows precise control over each voice.

MIDI from the live controller enters via a master MIDI track, is slightly transposed, the velocity flattened, and then routed to a custom Max for Live device, Voice Allocate. Voice Allocate splits all the notes played, assigns each a unique ID, and sends each note to a Voice Receive (a further custom Max for Live device designed to receive just one of those IDs). 
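
Voice Allocate itself is a Max for Live patch, but the allocation logic is easy to picture in code. Here’s a minimal Python sketch of one way round-robin voice allocation can work – the class, method names and voice count are illustrative, not the actual device’s internals:

```python
class VoiceAllocator:
    """Hands each held note a unique voice ID, freed again on note-off."""

    def __init__(self, num_voices=6):
        self.num_voices = num_voices
        self.active = {}                  # note number -> voice ID

    def note_on(self, note):
        """Assign the lowest free voice ID to a new note, if any remain."""
        free = set(range(self.num_voices)) - set(self.active.values())
        if not free:
            return None                   # all voices busy: note is dropped
        voice_id = min(free)
        self.active[note] = voice_id
        return voice_id                   # route note to Voice Receive <voice_id>

    def note_off(self, note):
        """Release the voice ID held by a finished note."""
        return self.active.pop(note, None)
```

Each Voice Receive would then listen only for its own ID, passing the note on to its paired MIDI track.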

Since Auto Shift is an audio effect, each instance is grouped in a pair with a MIDI track, and each MIDI track receives its input from a single instance of Voice Receive. Each Auto Shift is then set to monophonic, in MIDI mode, receiving from its counterpart track. The upshot: each Auto Shift receives a single note split from the one controller’s input.

Audio from the live vocal is routed to a master audio track and then to all instances of Auto Shift.

The MIDI input channel is selected here. Pitch handles transposition, Velocity flattens the dynamics of all incoming notes, Voice Allocate splits the notes across voices, and the TDA MIDI device handles the TouchDesigner connection (explained shortly).

What we end up with is a live-played vocal harmoniser – up to six harmony voices plus the live vocal, each sitting in its own distinct channel.

TDAbleton

TouchDesigner communicates with Ableton primarily over OSC, but its bundled control surface script and Max for Live devices make the setup easy to get running.

We’re going to use TDAbleton – the bundled collection of Max for Live devices and accompanying MIDI controller script. Once it’s installed, we can both receive data from any element of Ableton and send data back to it, all from within TouchDesigner.
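
Under the hood, the traffic is plain OSC. TDAbleton manages its own addresses and ports internally, but purely as an illustration of the kind of messages involved, here’s how you’d send an OSC message from Python with the python-osc package (the address and port below are made up):

```python
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical host and port – TDAbleton negotiates its own connection.
client = SimpleUDPClient("127.0.0.1", 9000)

# Send a float value to a made-up OSC address.
client.send_message("/ableton/master/volume", 0.8)
```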

TouchDesigner

To keep things economical on the CPU, this piece utilises a relatively simple instancing network in TouchDesigner.

Instancing lets us create multiple versions of a shape from a single source, with each version rendered by the GPU rather than the CPU (a significant saving as the network grows in complexity).

It begins with a single Sphere SOP (Surface OPerator), transformed a little, then fed into a Geo Comp where the instances are created. A collection of Noise CHOPs (CHannel OPerator) generates a random, moving XYZ position for each instance, capped within the bounds of the screen. 
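
In the actual network this is done with Noise CHOPs, but to make the idea concrete, here’s a minimal Script CHOP sketch that builds per-instance XYZ channels capped to the screen bounds. The instance count, bounds and the sine-based stand-in for noise are all placeholders:

```python
# Script CHOP callback: one sample per instance, channels tx/ty/tz.
import math

NUM_INSTANCES = 7
BOUND = 1.0  # keep instances within +/-1 unit of the origin

def onCook(scriptOp):
    scriptOp.clear()
    scriptOp.numSamples = NUM_INSTANCES
    tx = scriptOp.appendChan('tx')
    ty = scriptOp.appendChan('ty')
    tz = scriptOp.appendChan('tz')
    t = absTime.seconds  # TouchDesigner's running clock
    for i in range(NUM_INSTANCES):
        # Cheap periodic motion standing in for the real Noise CHOPs.
        tx[i] = BOUND * math.sin(t * 0.7 + i * 1.3)
        ty[i] = BOUND * math.sin(t * 0.9 + i * 2.1)
        tz[i] = BOUND * math.sin(t * 0.5 + i * 3.7)
    return
```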

A Ramp TOP (Texture OPerator) gives each instance a unique colour, interpolating between two end colours. We give the Geo Comp a constant material, which renders the spheres without lighting or depth cues, resulting in flat, circular shapes.

From there, the output is picked up by a Camera Comp, illuminated by a Light Comp, and captured by a Render TOP. We then pass the signal through a couple of visual effects – adding bloom and blur – before it hits the output and the screen.

Blue nodes are SOPs that flow into the Geo Comp and out to the purple TOP nodes. The yellow node containing a constant material keeps the spheres rendering flat. The green CHOPs at the bottom begin the audio processing chain.

The result is a collection of up to seven glowing circles moving through 3D space.

Connections

Connecting an element means adding a node in TouchDesigner and a corresponding Max for Live device in Ableton. TDAbleton does this for you in one click, meaning that for every connection below, a device is also added to the appropriate Ableton track.

An abletonLevel CHOP picks up the audio level from the Ableton master channel, and we map it to two things: the opacity of the Ramp TOP, so more volume means a more solid colour; and the speed of the Noise CHOP driving instance position, so more volume means faster movement.
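
Those two mappings are just parameter expressions. A sketch, assuming the level CHOP is named abletonLevel1 and exposes a channel called level (both names depend on your setup):

```python
# On the Ramp TOP's alpha: louder audio = more solid colour.
op('abletonLevel1')['level']

# On whichever Noise CHOP parameter drives its motion (a transform
# offset, say): louder audio = faster movement. The 0.5 floor keeps
# the instances drifting in silence; the scale factor is to taste.
0.5 + op('abletonLevel1')['level'] * 2
```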

An abletonMIDI CHOP receives all notes from the master MIDI input track. We take the total number of active MIDI notes plus one – that extra one accounts for the live vocal – and send it to two places. Firstly, to the collection of position-making Noise CHOPs, determining how many sets of positions are needed. This, in turn, feeds the Geo Comp, informing the number of instances. Secondly, to the Ramp TOP setting the number of colour shades between the two endpoints.
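
In expression form, that count might look like the sketch below, assuming the abletonMIDI CHOP exposes one channel per note. In a real network you’d more likely use an Analyze CHOP summing the note channels, followed by a Math CHOP adding one, but the expression shows the logic:

```python
# Active MIDI notes plus one for the live vocal. Reused for both the
# Geo Comp's instance count and the Ramp TOP's number of colour steps.
len([c for c in op('abletonMIDI1').chans() if c.eval() > 0]) + 1
```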

The X values (left / right) from the Noise CHOP positioning each instance are mapped to the pan of seven abletonTrack COMPs – one for each voice.
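
Ableton’s pan runs from -1 (hard left) to 1 (hard right), so the mapping is a straightforward remap of each instance’s X value. A sketch using TouchDesigner’s built-in tdu.remap, where the bounds and operator names are placeholders:

```python
# Map one instance's X position to the pan of its abletonTrack COMP.
x = op('noise_positions')['tx']                 # placeholder CHOP/channel
pan = tdu.remap(float(x), -1.6, 1.6, -1.0, 1.0)
```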

Finally, a Count CHOP looks for an F3, incrementing each time it’s struck. Once it hits a threshold, it fires a trigger, sending the audio out of Ableton to a Hologram Electronics Chroma Console set to Inference, a glitchy, fragmented effect. A further abletonLevel CHOP then picks up the return level of that effect and maps it to the intensity of a colour shift, so the visual breaks apart in time with the audio.
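
A CHOP Execute DAT watching the Count CHOP is one way to fire that trigger. A minimal sketch – the operator names and the threshold of 8 are placeholders:

```python
# CHOP Execute DAT attached to the Count CHOP's output.
THRESHOLD = 8  # F3 hits needed before the effect engages (placeholder)

def onValueChange(channel, sampleIndex, val, prev):
    if val >= THRESHOLD:
        # Fire the send that routes audio out to the Chroma Console,
        # e.g. by pulsing a Trigger CHOP.
        op('trigger1').par.triggerpulse.pulse()
    return
```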

The TDAbleton network at work. Seven pan controls, Master level, MIDI in, MIDI triggers and the Chroma Console return level – each feeding through further nodes to process and route the data.

The Result

A live MIDI input is separated into individual notes, and a live vocal is harmonised by those notes. Each voice sits in its own channel, represented by a moving circle on screen. Those circles intensify in colour and speed as the harmonised vocal grows in volume. And each circle’s position in space pans its corresponding voice in Ableton.

The result is a genuine marriage of audio and visual – where volume and notes are seen, and where visual position is heard.

Don’t have Ableton Live? Buy it here. Heads up, it’s an affiliate link. If you buy through them, we may earn a small commission (it doesn’t affect the price).
