A deep dive into Virtual Production workflows

What if you could translate your ideas into images, as fast as you have them?

For filmmakers interested in Virtual Production, this is the holy grail: the ability to work in real time, iterate in photorealistic environments, and do anything you want in the moment.

Tools that can match the speed of thought would allow almost limitless experimentation and unfettered creativity.

While we’ve not reached this nirvana yet, there are plenty of great minds and innovative companies working to ensure we’re headed in the right direction.

Directors like Andy Serkis, Wes Ball and, of course, James Cameron have famously adopted Virtual Production in their films with remarkable success. Companies including Imaginarium Studios in London and The Third Floor in LA have been set up to facilitate content creation using these new techniques, pushing the boundaries of what’s possible with them.

So what does a Virtual Production workflow actually look like? Let’s take a look.

Preliminary stage

Pitchvis

Pitchvis is used to visualize a sequence or idea from a script. This is then turned into a trailer, which is used to pitch to investors and production companies. Pitchvis is a useful tool for easily communicating your vision to help get your project funded or greenlit. 

Here’s Halon Entertainment’s reel, which includes pitchvis trailers for 300: Rise of an Empire, World War Z and Crash Site:

Previs

As we’ve touched on before, previs involves previsualizing part or all of a film. Often employed for particularly complex scenes, it uses storyboarding, animatics and asset building to create a three-dimensional visualization of the story’s world, enabling the director to explore different shots and angles.

Previs technology is becoming so good that the sequences created in it may eventually be usable in the final cut, as Wes Ball explained in a panel about Virtual Production at SXSW earlier this year: “Previs tools are now being used inside of a game engine, leveraging the power of what we can do. We’re so close to photoreal already, and soon we’ll be at a place where game engines can be presented as final image.”
 

Production stage

Virtual Production is a catch-all term for a broad range of technologies and processes that facilitate the creation of virtual characters and environments, and enable directors to physically shoot them on camera.

It’s a discipline that’s rapidly evolving, and new techniques are being developed all the time. Here’s a look at some of them.
 

Motion capture

Also known as performance capture, motion capture is instantly recognizable: it involves actors in bodysuits covered in tiny dots.

Motion capture digitally records human performances: the dots track the actor’s movement, sending data back to software that uses it to drive a digital 3D character. In this way, Andy Serkis is transformed into Gollum or King Kong.
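
To make that concrete, here’s a minimal sketch, in Python with hypothetical marker names, of the kind of calculation the solving software performs: turning three tracked markers into a joint angle that can be applied to the digital character’s rig.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b, formed by markers a-b-c."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# One frame of (hypothetical) marker positions in metres, as a mocap system
# might stream them: shoulder, elbow and wrist of the right arm.
frame = {
    "r_shoulder": (0.00, 1.40, 0.00),
    "r_elbow":    (0.30, 1.40, 0.00),
    "r_wrist":    (0.30, 1.10, 0.00),
}

# The solved elbow angle is what gets applied to the digital character's rig.
print(joint_angle(frame["r_shoulder"], frame["r_elbow"], frame["r_wrist"]))  # ~90.0
```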

Motion capture allows directors to work one-on-one with an actor, coaching them and capturing an authentic performance.

This short video shows how mo-cap was used to bring Gollum to life in 2012’s The Hobbit: An Unexpected Journey:

 

Virtual Cameras

Virtual cameras enable directors to experiment with different angles and shots in a virtual environment.

The director is given a physical camera fitted with sensors that map its position and orientation into the virtual world. When the director pans, dollies, tilts or cranes this camera, the movement is recorded and can be played back.

Virtual cameras are used to explore virtual locations, work out the best angles for shots and re-shoot previs sequences from different positions.
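
As a rough sketch of what that recorded data might look like, assuming a simple pose-per-frame format rather than any particular tracking system’s output, here’s how a take from a virtual camera could be captured and replayed:

```python
from dataclasses import dataclass, field

@dataclass
class CameraPose:
    """One sample from the tracking sensors: where the camera is and where it points."""
    time: float        # seconds since the take started
    position: tuple    # (x, y, z) in the virtual world, metres
    rotation: tuple    # (pan, tilt, roll) in degrees

@dataclass
class VirtualCameraTake:
    poses: list = field(default_factory=list)

    def record(self, pose: CameraPose):
        self.poses.append(pose)

    def playback(self):
        """Replay the recorded move, e.g. to re-render the shot along the same path."""
        for pose in self.poses:
            yield pose

# A director panning the tracked camera produces a stream of poses like this
# (values are illustrative, not from a real tracking system).
take = VirtualCameraTake()
take.record(CameraPose(0.00, (0.0, 1.6, 5.0), (0.0, 0.0, 0.0)))
take.record(CameraPose(0.04, (0.0, 1.6, 5.0), (1.5, 0.0, 0.0)))

for pose in take.playback():
    print(pose.time, pose.position, pose.rotation)
```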

Here’s an example of a virtual camera being used: 

 

Techvis 

Techvis follows previs. After the previs sequences have been created, a technical analysis can be performed to tell the team how the film can be physically shot in real life. 

All the technical information that has been captured in the previs stage - lens type, height of a camera, size of a green screen, speed of a vehicle - is displayed in the form of diagrams and visualizations, with real-world measurements.
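
To give a flavour of the kind of real-world numbers techvis produces, here’s a small worked example in Python, with illustrative figures, that derives a lens’s horizontal field of view and, from that, the minimum green-screen width needed at a given distance:

```python
import math

def horizontal_fov(focal_length_mm, sensor_width_mm):
    """Horizontal field of view (degrees) for a given lens and sensor."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def coverage_width(fov_degrees, distance_m):
    """How wide the frame is (metres) at a given distance from the camera."""
    return 2 * distance_m * math.tan(math.radians(fov_degrees) / 2)

# Illustrative numbers: a 35 mm lens on a Super 35 sensor (~24.9 mm wide),
# with the green screen 6 m from the camera.
fov = horizontal_fov(35, 24.9)    # ~39.2 degrees
width = coverage_width(fov, 6)    # ~4.3 m of green screen needed to fill the frame
print(round(fov, 1), round(width, 1))
```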

Motion Control

After the technical data has been harvested in the techvis process, it’s sent to a motion-control rig, where the pre-planned shots can be executed on a physical camera to the exact specifications required. In effect, this is the transfer of the digital camera’s movement to a real camera, which can then reproduce the shot in real life.

This allows the director to control the camera’s movements precisely, and to repeat them. The ability to repeat a camera move exactly is useful for creating visual effects: you can film several elements using the same camera motion, then composite the different pieces together.
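
Here’s a minimal sketch of that repeat-pass idea, assuming the camera move has already been exported as per-frame keyframes; the data format and the rig’s move_to call are hypothetical, not any specific motion-control system’s API:

```python
# A recorded camera move, exported as one keyframe per frame of the shot
# (hypothetical format, values are illustrative).
recorded_move = [
    {"frame": 1, "position": (0.0, 1.5, 5.0), "rotation": (0.0, 0.0, 0.0)},
    {"frame": 2, "position": (0.1, 1.5, 5.0), "rotation": (0.5, 0.0, 0.0)},
    # ... one entry per frame
]

class DummyRig:
    def move_to(self, position, rotation):
        pass  # a real rig would step its motors here

def run_pass(rig, move, element_name):
    """Drive the rig through the same move for one element of the composite."""
    print(f"Shooting pass: {element_name}")
    for key in move:
        rig.move_to(key["position"], key["rotation"])  # hypothetical rig call

rig = DummyRig()
# Identical camera motion for each pass means the elements line up when composited.
for element in ["actors on green screen", "miniature background", "clean plate"]:
    run_pass(rig, recorded_move, element)
```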

Simulcam

Originally invented for James Cameron’s Avatar, Simulcam is a fusion of physical and virtual cameras. It superimposes real actors wearing mo-cap bodysuits onto a virtual set in real time, allowing you to use motion tracking results as they’re captured.
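
The core of the trick is a real-time composite of the live camera feed with the rendered virtual world. Here’s a heavily simplified sketch of that compositing step using tiny made-up frames; a real Simulcam setup does this per frame at full resolution, with proper keying and camera tracking:

```python
import numpy as np

def composite(foreground, background, matte):
    """Classic over operation: foreground where the matte is 1, background elsewhere."""
    matte = matte[..., np.newaxis]  # broadcast the single-channel matte over RGB
    return foreground * matte + background * (1.0 - matte)

# Tiny illustrative frames (2x2 pixels, RGB in 0..1) standing in for the live
# camera feed and the game-engine render of the virtual set.
live_actor_plate = np.array([[[0.8, 0.6, 0.5], [0.8, 0.6, 0.5]],
                             [[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]]])
virtual_set = np.full((2, 2, 3), 0.2)

# Matte: 1 where the actor is, 0 where the green screen shows through.
matte = np.array([[1.0, 1.0],
                  [0.0, 0.0]])

print(composite(live_actor_plate, virtual_set, matte))
```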

This means the CGI scenes in a film can be directed like a normal live-action scene. Here’s a video of it in action:

 

Virtual Rapid Prototyping (VRP)

Virtual Rapid Prototyping in filmmaking is a step on from the previs process. Harnessing virtual production techniques, it enables a small crew to plan, shoot and edit sequences in real time using actors in mo-cap suits.

This allows sequences to be quickly sketched out: using VRP, an entire film can easily be prevised to create a feature-length pitchvis, which can then be used to test for marketability.

Virtual Art Department (VAD)

Many blockbuster films now take place in realistic virtual environments rather than real-life locations:  Jon Favreau's The Jungle Book didn’t have a single frame actually shot in a jungle, for example. 

This requires the work of a virtual art department, which, under the direction of the production designer, creates the CG assets and virtual world for directors to explore with the virtual camera.

If the virtual environment is based on a real location, the virtual production team can physically scout the location, measure up, and then model a digital version precisely to scale. 

Photogrammetry

If you want to capture something that can’t be easily computer-generated, you can use photogrammetry. In essence, this involves taking measurements from overlapping photographs of an object or location and using them to recreate it digitally as a 3D model.

Photogrammetry is incredibly useful if you have access to a complex set or prop that you want to generate a model from, and it can save a lot of time. It also creates a more accurate experience for those operating a virtual camera.
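
Under the hood, photogrammetry relies on triangulation: once the camera positions are known, the same feature matched in two photographs pins down a single 3D point. Here’s a minimal sketch of that idea with made-up camera matrices and image coordinates; real photogrammetry packages do this for millions of matched features at once:

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Recover a 3D point from its projections in two calibrated cameras (linear DLT)."""
    x1, y1 = pt1
    x2, y2 = pt2
    A = np.array([
        x1 * P1[2] - P1[0],
        y1 * P1[2] - P1[1],
        x2 * P2[2] - P2[0],
        y2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # back to ordinary (x, y, z) coordinates

# Illustrative setup: two identical pinhole cameras, the second shifted 1 m to the right.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Where one point on the prop appears in each photograph (normalised image coordinates).
pt1 = (0.25, 0.1)
pt2 = (-0.25, 0.1)

print(triangulate(P1, P2, pt1, pt2))  # ~[0.5, 0.2, 2.0]
```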

Post production stage

Postvis

Postvis is about bringing all the elements together after the shoot. The editors take the live-action, previs and virtual production footage, preview and splice them together, and the result shows how the sequences will play in the finished film.

Here's a side-by-side look at some of the previs and postvis for Rise of the Planet of the Apes:

(Video from Faris Hermiz on Vimeo.)