Peeling back the layers of the virtual production onion

Virtual production (VP) could reinvent the visual effects (VFX) business in the coming years. In short, virtual production allows shots to be rendered in real time on-set, meaning directors can change things like camera angles and lighting without re-rendering the entire scene.

The implications are significant, especially the potential cost savings from speeding up processes that until now have taken days to complete. But, right now, VP is still very much in the early stages of development.

The current state of play in virtual production is that each part of the pipeline remains a distinct, disparate process. Generally speaking, previs, techvis and postvis still happen independently, with no real continuity between them. The process is sequential, data hand-off is challenging, and each stage relies on different software.

Virtual production workflows have potential, but without something tying everything together, the process will struggle to gain any real traction.

What if you could create a continuous pipeline, such that the decisions made at each stage are inherited and iterated upon rather than started afresh?

Connecting the dots

In the near future, virtual production pipelines will have a single, unified backbone, running all the way from concept through to delivery and building on what has gone before.

VP will have a key role to play in specific parts of the workflow, but it's important to consider that it may not be right for every stage of every project. Production should be built in layers, with each component improving on what went before.

Think of it like an onion, with many layers working in harmony to deliver the finished product.

Some of these components should be real-time, with virtual production at their heart, particularly on-set. But where appropriate, for example when a scene is earmarked for more comprehensive post-production work, these real-time components should be built on and replaced by software at a higher layer of the onion, providing the level of quality needed for the final output (something real-time technology cannot currently match).
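
To make the layering concrete, here is a minimal Python sketch of the idea: a shot's scene data feeds either a real-time component or a higher-quality offline one, depending on how much post-production work is planned for it. The class and shot names here are hypothetical, not any real engine's or Foundry's API.

```python
from abc import ABC, abstractmethod


class Renderer(ABC):
    """One layer of the onion: anything that can turn scene data into frames."""

    @abstractmethod
    def render(self, scene: dict) -> str:
        ...


class RealtimeRenderer(Renderer):
    """Game-engine-style renderer used on-set: fast, approximate."""

    def render(self, scene: dict) -> str:
        return f"real-time preview of {scene['shot']} ({scene['lighting']} lighting)"


class OfflineRenderer(Renderer):
    """Final-quality renderer used in post: slow, physically accurate."""

    def render(self, scene: dict) -> str:
        return f"final-quality frames for {scene['shot']} ({scene['lighting']} lighting)"


def render_shot(scene: dict, needs_post_work: bool) -> str:
    """Pick the layer of the onion appropriate to the shot: shots earmarked
    for comprehensive post work swap the real-time component for the
    higher-quality offline one."""
    renderer: Renderer = OfflineRenderer() if needs_post_work else RealtimeRenderer()
    return renderer.render(scene)


shot = {"shot": "rooftop_chase_010", "lighting": "dusk"}
print(render_shot(shot, needs_post_work=False))  # on-set: instant feedback
print(render_shot(shot, needs_post_work=True))   # post: full-quality pass
```

The detail that matters is that both layers consume the same scene data, so swapping one for the other builds on the work already done rather than restarting it.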

Consistency from previs to post

An early example of this theory in action is Unity and Oats Studios' award-winning series of short films, Adam. While this was on a relatively small scale, it shows the role game engine technology could play in a unified pipeline.

Foundry’s own research and development project, DREAMSPACE, looked at the same idea with a number of prominent industry partners—including ncam and Intel’s Visual Computing Institute.

We knew that most post houses don't get a look-in until shooting has already finished and many of the critical decisions have been made. In fact, the way projects are handed off to post-production is a big point of contention in the industry.

With DREAMSPACE, our hypothesis was that if you could use the same software from previs all the way through to post, including on-set visualisation, the resulting consistency of data and decision-making would significantly improve the efficiency of the entire project.
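
As a minimal sketch of that hypothesis, imagine every stage amending one shared scene description instead of exporting to a new tool. The data model and stage names below are illustrative only; they are not DREAMSPACE's actual software.

```python
from dataclasses import dataclass, field


@dataclass
class SceneState:
    """A single scene description carried from previs through to post.

    Each stage amends the same record rather than rebuilding it, so
    earlier decisions are inherited and iterated upon.
    """
    shot: str
    camera: dict = field(default_factory=dict)
    lighting: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def amend(self, stage: str, **decisions) -> "SceneState":
        # Merge this stage's decisions into the shared state and keep
        # an audit trail of which stage made which call.
        for attribute, values in decisions.items():
            getattr(self, attribute).update(values)
        self.history.append(stage)
        return self


state = SceneState(shot="rooftop_chase_010")
state.amend("previs", camera={"lens": "35mm"})                            # rough blocking
state.amend("on-set", camera={"lens": "50mm"}, lighting={"key": "dusk"})  # refined live
state.amend("post", lighting={"fill": "bounce"})                          # final tweaks

print(state.camera)   # {'lens': '50mm'}: on-set refined the previs choice, not restarted it
print(state.history)  # ['previs', 'on-set', 'post']
```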

Virtual production will be vital in proving this hypothesis but, crucially, it must fit together with the other layers of the VFX onion.