Macbeth Demo

This video demo combines visuals created in Unreal Engine with a seventh-order Ambisonic (7OA) mix of a performance of Act I, Scene I of Macbeth.

Recording Process

Recording Day 1: Motion Tracking

Three performers acted inside a motion capture volume; their audio was recorded using DPA microphones and their tracking information was recorded to a separate PC. In order to synchronise this data, the performers clapped in unison following a countdown.


Performers acting inside the motion capture volume.
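Because the audio and the tracking data were captured to separate machines, the shared clap provides a common reference point for alignment. Purely as an illustration of the idea (not the tool actually used), the audio-side offset between two recordings of the same clap could be estimated by cross-correlation:

```python
import numpy as np

def clap_offset(ref_audio, other_audio, sr, search_window_s=5.0):
    """Estimate the sample offset between two recordings of the same clap.

    Cross-correlates the opening few seconds of each recording; a positive
    lag means the clap occurs later in ref_audio than in other_audio.
    """
    n = int(search_window_s * sr)
    a, b = ref_audio[:n], other_audio[:n]
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(np.abs(corr))) - (len(b) - 1)
```

The tracking data can then be aligned in the same way, using the frame at which the clap occurs.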

Recording Day 2: Face Tracking & Audio

The performers’ face-tracking and voices were recorded through Automated Dialogue Replacement (ADR). To aid their performance, a basic form of the previous day’s visuals was shown on screen (see video). The performers clapped at the same time as they had on the previous day and blinked at that moment; this allowed the motion tracking, audio and face-tracking to be synchronised.


The ADR and face-tracking process (Note: The above video was for demonstration purposes only, so the performer did not clap).

Audio Workflow

Recording

ADR audio was recorded using an AKG C414 with a shock mount and pop filter in a recording studio. This replaced the DPA audio, as it was higher quality and matched the face animations best.

Mixing

The audio was mixed using a 7OA workflow in Reaper. The key processing applied is as follows:

Pre-processing: iZotope RX9

Spectral De-noise was used to remove background noise; Mouth De-click and Breath Control were used to clean up the dialogue.
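The de-noising itself was done with RX9's dedicated modules. Just to illustrate the general principle of spectral de-noising (this is not RX9's algorithm), a crude spectral gate could estimate a per-bin noise floor from a noise-only clip and attenuate bins that fall near it:

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_gate(audio, sr, noise_clip, reduction_db=12.0):
    """Crude spectral gate: attenuate time-frequency bins near a measured noise floor."""
    _, _, spec = stft(audio, fs=sr, nperseg=2048)
    _, _, noise_spec = stft(noise_clip, fs=sr, nperseg=2048)
    noise_floor = np.abs(noise_spec).mean(axis=1, keepdims=True)  # per-bin noise estimate
    keep = np.abs(spec) > 2.0 * noise_floor                       # simple magnitude threshold
    gain = np.where(keep, 1.0, 10.0 ** (-reduction_db / 20.0))
    _, cleaned = istft(spec * gain, fs=sr, nperseg=2048)
    return cleaned
```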

Encoding: 7th Order Encoding

Plug-ins used: IEM MultiEncoder and AmbiX o7 Encoder
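Ambisonic encoders of this kind pan each mono source by weighting it with real spherical-harmonic gains for the source direction. As a rough sketch of what an ACN/SN3D seventh-order encoder computes (the actual encoding was done by the plug-ins above; the AmbiX azimuth/elevation convention is assumed):

```python
import numpy as np
from math import factorial
from scipy.special import lpmv

def sn3d(l, m):
    """SN3D normalisation term for degree l and index m."""
    delta = 1.0 if m == 0 else 0.0
    return np.sqrt((2.0 - delta) * factorial(l - abs(m)) / factorial(l + abs(m)))

def encode_mono_to_hoa(signal, azimuth, elevation, order=7):
    """Encode a mono signal into (order + 1)**2 ACN/SN3D ambisonic channels.

    azimuth: radians, counter-clockwise from the front; elevation: radians above
    the horizontal plane (the AmbiX convention).
    """
    gains = np.zeros((order + 1) ** 2)
    for l in range(order + 1):
        for m in range(-l, l + 1):
            acn = l * (l + 1) + m
            # (-1)**|m| cancels the Condon-Shortley phase that scipy's lpmv includes
            legendre = (-1.0) ** abs(m) * lpmv(abs(m), l, np.sin(elevation))
            trig = np.cos(m * azimuth) if m >= 0 else np.sin(abs(m) * azimuth)
            gains[acn] = sn3d(l, m) * legendre * trig
    return np.outer(signal, gains)  # shape: (samples, 64) for 7OA
```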

Delays: Valhalla Supermassive

Used to enhance the weather sounds.

Reverbs: IEM FdnReverb

Three reverb sends used with varying reverb times: 1.8 s, 2.9 s and 5.5 s

Audio Scene

The whole audio scene was spatialised from a single audience point and orientation; this does NOT match the visuals, which switch between different camera positions and angles. The contents of the audio scene were informed by the visual elements in the video.

Deliverable Formats

The audio was provided in the following formats:

The BBC’s EAR Production Suite was used to create an EBU ADM file. The EBU ADM file contained two HOA streams: one for the reverb and the other for the weather; the rest of the sounds were mono objects. The automation data was copied from the Ambisonic encoders to the EAR object plug-ins. The stems were also provided separately.

The audio was converted to binaural using decoding filters developed by Tomasz Rudzki. The audio was converted to 7.1.4 using the SPARTA decoder (no audio was rendered to the LFE channel); this used the Dolby Atmos routing (SSL and SSR were sent to outputs 5 & 6).
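The binaural conversion follows the usual HOA-to-binaural pattern: each of the 64 ambisonic channels is convolved with a left-ear and a right-ear decoding filter and the results are summed. The sketch below only shows that structure; the filter arrays stand in for the actual decoding filters used:

```python
import numpy as np
from scipy.signal import fftconvolve

def hoa_to_binaural(hoa, filters_left, filters_right):
    """hoa: (samples, 64) ACN/SN3D signals; filters_*: (64, taps) decoding filters."""
    left = sum(fftconvolve(hoa[:, c], filters_left[c]) for c in range(hoa.shape[1]))
    right = sum(fftconvolve(hoa[:, c], filters_right[c]) for c in range(hoa.shape[1]))
    return np.stack([left, right], axis=-1)  # (samples + taps - 1, 2)
```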

Visual Workflow

The visuals were constructed using the following:

Motion-Tracking

Motion tracking was captured using a combination of Vicon and XSens systems due to a limited number of suits. Since the XSens suits do not track absolute position, the performer wearing the XSens suit had to be inserted into the scene in post-production.
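Because the XSens data only describes motion relative to its own origin, inserting that performer amounts to applying a world-space offset to the root trajectory. A toy illustration (the actual placement was done by hand in Unreal Engine):

```python
import numpy as np

def place_root_trajectory(root_positions, anchor_world_pos):
    """Shift a relative root trajectory so its first frame sits at a chosen world position."""
    root_positions = np.asarray(root_positions, dtype=float)  # (frames, 3)
    offset = np.asarray(anchor_world_pos, dtype=float) - root_positions[0]
    return root_positions + offset
```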

Face-Tracking

Face-tracking was recorded using the Live Link Face app for Unreal on Apple devices with a TrueDepth camera. Following the recording, the ‘MetaHuman Animator’ feature was added to the app; this allowed for much higher resolution face-tracking.

The traditional ‘ARKit’ method yielded poor results, so certain phrases were re-recorded at a later date. As the original performers were unavailable, a different performer re-recorded the selected phrases.

Project Assembly

A virtual scene and tracking data were combined in Unreal Engine 5; all tracking data was mapped to MetaHumans, which were then placed within the virtual scene.

Unreal’s ‘Sequencer’ was used to put the project together. Cameras were placed inside the virtual space to capture the video.

Download Materials

| Type | Title | Audio Format | Filetype |
| --- | --- | --- | --- |
| Audio Only | Macbeth_Binaural | Binaural (decoded from 7OA) | WAV |
| Audio Only | Macbeth_7OA | 7OA (ACN/SN3D) | WAV |
| Audio Only | Macbeth_7_1_4 | 7.1.4 (decoded from 7OA) | WAV |
| Audio Only | Macbeth_EBU_ADM | EBU ADM | WAV |
| Audio & Visuals | Macbeth_Visuals_4k_Binaural | Binaural (decoded from 7OA) | WAV |

Credits

The AudioLab would like to thank Google for their collaboration on, and funding of, this project.

Recording, Production & Mixing

Jacob Cooper

Motion-Capture Performance

Chesca Downes - Witch 1

Francesca Di Giornio & Stephanie Ornithari Roberts - Witch 2

Renaya Dhillion - Witch 3

Voice Performance

Chesca Downes - Witch 1

Francesca Di Giornio - Witch 2

Renaya Dhillion - Witch 3

Visuals

Joe Rees-Jones