Alfonso Cuarón’s Gravity has created more buzz in the digital cinema production world than any motion picture in recent memory. The (quite literally) breathtaking twelve-minute single-take opening shot, which begins in outer space with a satellite repair mission gone wrong and ends with Sandra Bullock's astronaut cast terrifyingly into the void, is just one of Gravity’s filmmaking achievements to capture wide attention. And although Cuarón and his longtime cinematographer Emmanuel "Chivo" Lubezki richly deserve the praise they are getting, as anyone involved in digital production knows very well, creating a feature film that many are calling a masterpiece required dozens of people and a massive, complex workflow. One of the people charged with managing that workflow was James Eggleton, digital lab supervisor of Codex’s sister company Digilab Services, London.
Lubezki hinted at some of Gravity’s workflow complexity in an excellent interview in the October 21st issue of New York magazine, written by Kyle Buchanan of New York’s entertainment website Vulture. The article describes many of the iconic shots that Lubezki has created with both Cuarón and Terrence Malick, directors who consider him their favorite DP. Describing the opening shot of Gravity he told Buchanan, "I just finished working on this shot a couple weeks ago! It took many, many years." During production, the New York interview continues, Cuarón and Lubezki shot Bullock suspended in a nine-foot cube surrounded by LED lights; they then worked to composite those images of the actress with the outer-space setting during post-production. "It's basically lighting the movie with computers, not unlike lighting a Pixar film," said Lubezki. "I did it from my house while most of the CG gaffers were in London."
I recently interviewed Eggleton via email about Digilab’s involvement on Gravity and how the workflow was handled. He began by telling me that his two main contacts on Gravity were Lubezki and VFX supervisor Tim Webber.
IMDbPro estimates the movie’s budget to be $100 million, which by today's standards is rather modest considering that Gravity is effects-intensive and was produced in 3D. Eggleton credits the film's success to the talent and creativity of the entire production team, and also to the fact that they had ample prep time to consider the challenges ahead.
Digital Cinema Report: How early in the process did Digilab get involved? Start to finish, how long was Digilab involved in the production?
James Eggleton: Digilab first became involved in October 2010 during the camera selection tests. We were fortunate that a lot of prep time had been factored into the schedule, the intention being that we could rehearse every setup before shooting with the actors. One of the most important phases of prep was deciding which variables to fix and which to vary. With almost infinite combinations of camera and lighting settings available to us, it was important not to get lost in what Chivo refers to as "digital soup".
Each time we encountered a new light source (Lightbox LEDs, Briese lamps, concert-style Vari-Lites) we created a camera setting that neutralized the color bias of the light source. This gave us a consistent base from which to apply the creative looks that had been designed with Chivo.
We captured the highest quality images from each camera (ideally uncompressed with minimal in-camera processing) and presented processed files to Framestore for comparison. The objective was to select a sensitive camera that behaved well under a variety of different lighting sources. Following the initial tests, the Arri Alexa was the favored option due to its sensitivity in low light; for much of the shoot the Alexa was required to capture clean images in an LED-illuminated lightbox, which had less power than traditional lighting. We turned over the last camera-original images in December 2012.
DCR: Describe the screening room you developed for Gravity.
JE: During the prep period the screening theatre was equipped with a Baselight system, which was used to review and develop the virtual film stock that was used throughout production. All camera original data was kept live for the duration of the shoot to allow for impromptu screening sessions on a 2K projector. The majority of footage was delivered to editorial the same day as Avid DNxHD MXF files. The fast turnaround was essential for keeping the cut in sync with the shoot, so that the key creatives could monitor the progress of the long sequences as they were shot piece-by-piece.
The T-Link output of the Alexa (containing the ArriRaw image data) was recorded to a combination of Codex Onboard M and Codex Recorder systems. Our technicians monitored and controlled the recording devices using the Codex remote user interface software. All of the technical departments were based on an elevated platform next to the lightbox: VFX, Robotics, Camera, Video, Lighting and Sound. Much of the electronic hardware was housed elsewhere on the stage, so the remote control interface was invaluable.
The monitoring output of the Alexa was fed into a Truelight OnSet box, which applied a pre-designed creative LUT for the current setup, and the calibration for the display device. We incorporated a balance/contrast stage into the color pipeline to allow Chivo to perfect the look on set. We used calibrated HP DreamColor monitors to support critical monitoring of 10-bit HD images. The color pipeline was exactly replicated for dailies material delivered to Editorial.
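To illustrate the general shape of such a monitoring pipeline, here is a minimal Python sketch, not Truelight's actual processing: an on-set balance/contrast trim followed by a creative look and a display-calibration stage, each modeled here as a simple 1D LUT. The LUT shapes are invented placeholders.

```python
import numpy as np

def apply_trim(rgb, lift=0.0, gain=1.0, contrast=1.0, pivot=0.435):
    """On-set balance/contrast stage applied before the creative LUT.
    `rgb` is a float array in [0, 1]."""
    out = rgb * gain + lift                  # balance
    out = (out - pivot) * contrast + pivot   # contrast around a pivot
    return np.clip(out, 0.0, 1.0)

def apply_1d_lut(rgb, lut):
    """Look up each channel in a 1D LUT with linear interpolation."""
    idx = rgb * (len(lut) - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, len(lut) - 1)
    frac = idx - lo
    return lut[lo] * (1 - frac) + lut[hi] * frac

# Monitoring path: camera output -> trim -> creative look -> display calibration
frame = np.random.rand(1080, 1920, 3).astype(np.float32)  # stand-in for a video frame
creative_lut = np.linspace(0, 1, 1024) ** 0.9             # hypothetical look
calib_lut = np.linspace(0, 1, 1024) ** 1.1                # hypothetical calibration
graded = apply_1d_lut(apply_1d_lut(apply_trim(frame, gain=1.05, contrast=1.1),
                                   creative_lut), calib_lut)
```

Replicating the same chain for dailies, as Eggleton describes, is what guarantees that editorial sees the images exactly as they looked on set.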
We laid semi-permanent cabling in the stages so that we could move between them with ease. The cabling paths were fairly intricate, running through robot arms and in and around the lightbox structure. We used almost 2km of 3G-rated coaxial cable over the course of the project, which is certainly above average.
The Lab office was set up in and around the old Preview Theatre at Shepperton Studios. The projectionist's offices were converted into a machine room and workspace. A Codex Lab system was used to collate footage and archive to LTO-4 tape via a Quantum tape robot. We used an 80TB RAID-6 protected disk array to store all production footage. During prep this served as storage for our Baselight system, and during production it served data for screenings.
DCR: How many different locations were there for the shoot?
JE: The bulk of production was spent on sound stages at Shepperton Studios. Typically there were two units at work, one shooting and the other prepping for the upcoming setups.
DCR: One of the key challenges in production and post today is managing the workflow throughout the entire process. This is amplified in any CGI-heavy film and especially in a 3D production. How big were some of the files you had to manage?
JE: Digilab was responsible for all camera original data. The primary shooting format was 2880x1620 (16:9) ArriRaw at 24fps, which is roughly 7MB per frame.
The MOVA facial capture sessions used five synchronized Panavision Genesis cameras, all recording to Codex Portable recorders. Select footage was delivered as DPX images to Framestore for motion analysis. The combined data rate of all five DPX sequences approached 1GB per second.
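For readers who want to sanity-check those figures, the arithmetic works out as follows. The Genesis frame size, DPX packing, and frame rate here are assumptions for illustration, since Eggleton did not specify them:

```python
# Back-of-the-envelope check of the data rates quoted in the interview.
# Assumptions: 10-bit RGB DPX packs 3 channels into 4 bytes per pixel,
# and the Genesis cameras were recording 1920x1080 at 24 fps.

MB = 1024 ** 2

# Primary format: 2880x1620 ArriRaw at 24 fps, ~7 MB per frame
arriraw_rate = 7 * 24                   # ~168 MB/s per Alexa

# MOVA capture: five synchronized Genesis cameras delivering DPX frames
dpx_frame = 1920 * 1080 * 4 / MB        # ~7.9 MB per 10-bit DPX frame
mova_rate = dpx_frame * 24 * 5          # ~950 MB/s combined

print(f"ArriRaw: {arriraw_rate} MB/s; MOVA DPX: {mova_rate:.0f} MB/s combined")
```

Under those assumptions the five combined DPX streams land just under 1GB per second, consistent with the figure Eggleton gives.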
The footage breakdown (all cameras) was:
- Prep: 10 hours
- Main shoot: 70 hours
- Additional: 11 hours
- Total: 91 hours
On average we captured and processed 65 minutes of material per shooting day. By happy coincidence the LTO-4 tapes that we used for data archive could store 75 minutes of ArriRaw material, so on a typical shoot day we would write and verify two (mirrored) LTO-4 archive tapes. One copy was kept by production, the other sent to deep storage at Warner Bros.
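The tape arithmetic is easy to verify against LTO-4's 800GB native capacity, using the approximate 7MB ArriRaw frame size quoted above:

```python
# Why ~75 minutes of ArriRaw fits on one LTO-4 tape (800 GB native capacity).
frames_per_min = 24 * 60
mb_per_frame = 7                  # approximate ArriRaw frame size at 2880x1620

minutes = 75
total_gb = minutes * frames_per_min * mb_per_frame / 1024
print(f"{minutes} min of ArriRaw ≈ {total_gb:.0f} GB")  # ≈ 738 GB, under 800 GB
```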
DCR: How many separate people or teams were handling the files?
JE: We had a Codex technician for each shooting unit, responsible for capturing the camera original data, naming all assets in line with Framestore's naming conventions, and monitoring the live output of the camera. We were also responsible for all critical monitoring on set, ensuring that Chivo, Alfonso, and Tim were able to see images as they would appear in the screening theatre. The screening theatre was staffed by a QC technician who viewed every captured frame, wrote detailed Lab reports, and conducted screening sessions.
During post-production, the Digilab Soho office served as the hub for all camera original data. The Editorial department would send data pull requests, which contained all the information required for us to extract, render, rename, renumber, and deliver full resolution DPX image files to Framestore. We also embedded the camera settings and LUT choice into the DPX image headers so that the VFX artists could view the information within Nuke.
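A minimal sketch of what such a pull-request-driven delivery step might look like follows. The request format, field names, and frame-numbering convention here are invented for illustration; the real system also rendered ArriRaw to DPX and wrote camera metadata into the headers:

```python
import csv
from pathlib import Path

def process_pull_request(request_csv, source_root, delivery_root):
    """Deliver requested frame ranges, renamed and renumbered per VFX shot.
    The CSV columns (clip_name, first_frame, last_frame, vfx_shot) are
    hypothetical stand-ins for the production's actual pull-request format."""
    with open(request_csv, newline="") as f:
        for row in csv.DictReader(f):
            clip = Path(source_root) / row["clip_name"]
            first, last = int(row["first_frame"]), int(row["last_frame"])
            shot_dir = Path(delivery_root) / row["vfx_shot"]
            shot_dir.mkdir(parents=True, exist_ok=True)
            # Renumber delivered frames from 1001, a common VFX convention
            for offset, frame in enumerate(range(first, last + 1)):
                src = clip / f"{row['clip_name']}.{frame:07d}.dpx"
                dst = shot_dir / f"{row['vfx_shot']}.{1001 + offset:04d}.dpx"
                dst.write_bytes(src.read_bytes())  # real system rendered/retagged here
```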
DCR: What, if any, new tools did you need to develop for the production?
JE: At the start of camera testing there was no official software development kit (SDK) for the ArriRaw format. We analyzed the available debayer algorithms and settled on Codex's HQ algorithm to convert ArriRaw files to RGB DPX images; it retained the most detail from the camera original and had the cleanest edge handling. The visual effects work ran concurrently with the shoot, so we had to set the processing parameters before we started shooting, and then support that same pipeline for the next 26 months.
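As a rough illustration of what a debayer stage does, here is a generic bilinear demosaic of a Bayer mosaic. This is emphatically not Codex's HQ algorithm, which uses far more sophisticated edge-aware interpolation:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic (float image in [0, 1]).
    Each output channel is interpolated from its sparse sensor samples."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    # Averaging kernels: green has 4-connected neighbors, red/blue have
    # horizontal, vertical, and diagonal neighbors at varying distances.
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    r = convolve(raw * r_mask, k_rb)
    g = convolve(raw * g_mask, k_g)
    b = convolve(raw * b_mask, k_rb)
    return np.dstack([r, g, b])
```

The point Eggleton makes stands regardless of algorithm: whichever debayer you commit to at the start of the shoot has to be supported, unchanged, for the life of the VFX pipeline.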
By the time of principal photography in March 2011 it was possible to capture ArriRaw at up to 30fps, and HD/ProRes at higher rates. Our working color space was Arri Wide Gamut/Arri LogC. We were able to be consistent in our processing of ArriRaw for the entire production and post period because no color decisions were baked in at capture, one of the clear advantages of ArriRaw capture over ProRes.
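For reference, Arri's published LogC (V3) encoding at EI 800 maps scene-linear values to the log signal as follows; this is a straightforward transcription of the published curve, not anything specific to Gravity's pipeline:

```python
import math

# Arri LogC (V3) encoding parameters at EI 800, from Arri's published formula.
CUT, A, B = 0.010591, 5.555556, 0.052272
C, D = 0.247190, 0.385537
E, F = 5.367655, 0.092809

def lin_to_logc(x):
    """Map scene-linear reflectance to a LogC-encoded signal in [0, 1]."""
    if x > CUT:
        return C * math.log10(A * x + B) + D
    return E * x + F

# 18% grey lands at roughly 0.39 on the LogC curve
print(f"{lin_to_logc(0.18):.3f}")
```

Because this encoding is fixed and reversible, no creative color decision is destroyed at capture, which is exactly the consistency advantage over ProRes that Eggleton describes.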
Digilab developed its own conforming system that could automatically extract requested ArriRaw, DPX and ProRes images from LTO or disk sources. This is a good example of the kind of background challenges that have to be met when agreeing to use a new file format that commercial software doesn't yet support.
The people at Digilab have wide and varied career backgrounds – camera department, VFX, post-production, software development – which proved invaluable on a production like Gravity, where production and post ran in parallel. Because of the unique ways in which the cameras and lighting rigs were being used, there were many challenges to overcome.
The LED lighting used in the lightbox environment is typically found in broadcast scenarios, where all cameras have genlock (synchronization) capability. The Arri Alexa did not have a standard genlock input, so we had to devise a way to synchronize the LEDs to the camera shutter rather than the other way round. We also worked with Arri and Framestore to ensure that the VFX witness cameras could sync to the camera shutter.
By default most US productions capture images at 23.976 fps, using the NTSC 1000/1001 timebase. We used the 1000/1000 timebase, giving Chivo the ability to shoot in the lightbox at a variety of rates: 24/25/30/48/50/60. All Avid MXFs were delivered at 23.976 fps so the studio did not have to modify its standard workflow. This involved close cooperation with the sound department to ensure sound and picture sync.
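The practical consequence of mixing the two timebases is small but cumulative, as a quick calculation shows; this is illustrative arithmetic only, not a description of the production's actual sync process:

```python
from fractions import Fraction

# Drift between true 24 fps (1000/1000 timebase) and NTSC-style 23.976 fps
# (24 * 1000/1001) over one hour of footage.
true_24 = Fraction(24)
ntsc_24 = Fraction(24000, 1001)

frames_per_hour = true_24 * 3600
playback_seconds = frames_per_hour / ntsc_24   # same frames played at 23.976
drift = playback_seconds - 3600
print(f"{float(drift):.2f} s longer per hour")  # 3.60 s per hour of material
```

A 0.1% speed difference like this is invisible shot to shot, but over long takes it is more than enough to throw sound out of sync, hence the close cooperation with the sound department that Eggleton mentions.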