Image courtesy of Imaginarium Studios

Making the impossible possible with virtual production at Imaginarium Studios

Imaginarium Studios is a pioneering, London-based company at the forefront of digital performance-capture.

Founded by actor-director Andy Serkis in 2011, Imaginarium Studios is dedicated to creating believable, emotionally engaging digital characters using performance capture technology across film, television and video games.

Foundry Trends caught up with CEO Matthew Brown about real-time and virtual production techniques, and where the technology is headed next.

FT:  What does your virtual production workflow look like?

MB: Our virtual production toolset lets performers, actors and directors see something as close to the final product as possible while they’re performing.

We go from Shogun - when we’re using our Vicon setup - into MotionBuilder, and straight into a custom version of Unreal.
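As a rough, hypothetical illustration of that kind of hand-off - not Imaginarium’s actual tooling - a live pipeline can be pictured as a loop that reads solved skeleton frames, retargets them, and streams them on to the engine. The capture source, joint names and UDP port below are invented stand-ins:

    # Illustrative sketch only, in the spirit of the Shogun -> MotionBuilder -> Unreal
    # hand-off described above. The capture source, joint names and port are made up.
    import json
    import socket
    import time

    ENGINE_ADDR = ("127.0.0.1", 9000)   # hypothetical UDP listener inside the engine

    def read_pose(frame):
        """Stand-in for a frame of solved skeleton data arriving from the capture system."""
        return {"frame": frame, "joints": {"hips": [0.0, 0.9, 0.0], "head": [0.0, 1.7, 0.0]}}

    def retarget(pose):
        """Stand-in for the retargeting step (MotionBuilder's role in the workflow above)."""
        return pose  # a real pipeline would remap the performer's skeleton onto the character rig

    def stream_poses(frames=10, fps=60):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for frame in range(frames):
            packet = json.dumps(retarget(read_pose(frame))).encode("utf-8")
            sock.sendto(packet, ENGINE_ADDR)   # fire-and-forget, like a live stream
            time.sleep(1.0 / fps)

    if __name__ == "__main__":
        stream_poses()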

With our Vcam, we can closely mimic the same look and feel you get with an optical camera. We’re able to give DOPs and directors a lot of the tools they would get in traditional filmmaking: they can do all the crazy camera moves you’d have to use a crane for, virtually, here on set.

As a production tool, or even as a pre-viz tool, this allows them to put whatever cameras they need in a scene, either with the actors there or after they’ve gone. Then we can go back, re-run the motion data, and give them a pass on it.
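In very simplified terms, that pass can be pictured as replaying the recorded frames while evaluating a new virtual camera move against them. The sketch below is purely illustrative; the take data and the crane-style path are invented:

    # Illustrative sketch only: re-running recorded motion data under a new virtual
    # camera move. The take data and the crane-style camera path are made up.
    def crane_move(t):
        """A hypothetical crane-like path: rise and pull back over the course of the take."""
        return {"pos": [0.0, 1.5 + 2.0 * t, 4.0 + 3.0 * t], "look_at": "hips"}

    def rerun_take(take):
        """Replay a recorded take and pair each frame with a camera transform."""
        frames = len(take)
        camera_pass = []
        for i, pose in enumerate(take):
            t = i / max(frames - 1, 1)          # normalised time through the take
            camera_pass.append({"pose": pose, "camera": crane_move(t)})
        return camera_pass

    if __name__ == "__main__":
        recorded_take = [{"frame": i, "hips": [0.0, 0.9, 0.1 * i]} for i in range(5)]
        for sample in rerun_take(recorded_take):
            print(sample["camera"]["pos"], sample["pose"]["hips"])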

Image: fight scene in motion capture

FT: Can you tell me more about the setup?

MB: We have two stages: our main stage and a smaller stage, which is primarily used as a Range of Motion (ROM) stage, though we’ve packed a lot of other toys into it and even livestream VR games on Twitch every week.

We can capture on our main stage, then have a director wearing a VR headset in the next room, seeing a real-time, virtual reality view of what’s happening on the mo-cap stage.

So you could have a real sword fight where one person is in a mo-cap suit and the other is in a VR headset. It’s fun! Whenever we do a demo of it, we get the person in the headset, and somebody comes in wearing the suit; they think they’re just seeing the person being streamed, but then they put their swords up and actually hit swords. So those are really cool tools that allow you to visualize what you’re trying to create in real time.

And our lead TA, George, has created a whole bunch of HUDs in Unreal that allow you to load everything into a scene - all the props, the environments, the different characters - then you just use drop-down menus to add characters and props, all live, with the click of a mouse.
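Stripped of the Unreal specifics, the idea behind those HUDs is a data-driven scene description that can be re-dressed live. The sketch below is hypothetical - the asset names and the Scene class are invented - but it shows the shape of the tool:

    # Illustrative sketch only: the drop-down-driven scene idea reduced to plain Python.
    # The asset names and the Scene class are invented; the real tools live inside Unreal.
    CHARACTERS = {"ariel": "chr_ariel_v03", "neanderthal": "chr_neanderthal_v01"}
    PROPS = {"sword": "prp_sword_v02", "torch": "prp_torch_v01"}
    ENVIRONMENTS = {"island": "env_island_v05", "cave": "env_cave_v02"}

    class Scene:
        """Tracks what is currently loaded so a performer can be re-dressed live."""

        def __init__(self):
            self.character = None
            self.props = []
            self.environment = None

        def set_character(self, name):
            self.character = CHARACTERS[name]     # swap the rig the actor is driving

        def add_prop(self, name):
            self.props.append(PROPS[name])

        def set_environment(self, name):
            self.environment = ENVIRONMENTS[name]

    if __name__ == "__main__":
        scene = Scene()
        scene.set_character("ariel")
        scene.add_prop("sword")
        scene.set_environment("island")
        print(scene.character, scene.props, scene.environment)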

You can have one actor toggle between lots of different characters, play with lots of different props, and shift environments really quickly. 

FT: Are you seeing more productions being made this way?

MB: It’s still fairly new to a lot of directors, but there are a few people who are really trying to push it. People like James Cameron have been pushing it for years, as has Spielberg with Ready Player One, and of course my business partner Andy Serkis on projects like Mowgli, which is out next year on Netflix. As for all the various systems and approaches, there are loads of different combinations and ways to assemble the tech.

For us, we can take a minimal mo-cap system onto a live action set and do some real-time comping using Engine. Either you come onto our mo-cap stage, where you can see the digital assets in real time in a virtual environment, or you go onto a live action set, where you can see the digital asset in the real environment.

Image: the Imaginarium team

FT: Tell me a bit about the work you did on The Tempest.

(A collaboration with the Royal Shakespeare Company, in which Imaginarium Studios worked with the actors and creative director to come up with a feasible way for actor Mark Quartley to turn into all the manifestations of Ariel, which involved him wearing an Xsens suit.)

MB: For The Tempest, the challenge was: how do we create a workflow that’s going to allow the RSC stage crew to run this show day in, day out? So what we did was hand over an executable with all the manifestations of the Ariel character, and the team on stage run that. We created a plugin so they could control a lot of the avatar attributes from the lighting desk.

So we went with a native tool that the team there were very comfortable with, and we made Engine work with that, instead of trying to get them to understand Engine.
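Theatre lighting desks commonly speak DMX, where each channel carries a value from 0 to 255; assuming something along those lines, such a plugin is essentially a mapping from desk channels to avatar attributes. The sketch below is purely illustrative, with an invented channel layout and attribute names:

    # Illustrative sketch only: mapping DMX-style lighting-desk channels (0-255) onto
    # avatar attributes. The channel layout and attribute names are invented.
    CHANNEL_MAP = {
        1: "glow_intensity",
        2: "dissolve_amount",
        3: "scale",
    }

    def desk_to_avatar(dmx_frame):
        """Convert one frame of desk channel values into normalised avatar attributes."""
        attributes = {}
        for channel, name in CHANNEL_MAP.items():
            raw = dmx_frame.get(channel, 0)
            attributes[name] = max(0, min(raw, 255)) / 255.0   # clamp and normalise
        return attributes

    if __name__ == "__main__":
        # e.g. the operator pushes channel 2 to full to dissolve Ariel into spirit form
        print(desk_to_avatar({1: 128, 2: 255, 3: 64}))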

FT: How difficult was that from a technical standpoint?

MB: From a mo-cap point of view, we had to integrate an inertial suit into a live performance, and that came with a whole set of challenges. Inertial suits are great for driving a character live, but there are problems with global drift and knowing the suit’s position, because it only tracks the sensors in relation to one another: it doesn’t know where in the world it is. Then there was interference from the sets and the space, because it’s all streamed over Wi-Fi.
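The drift problem can be seen in miniature by dead-reckoning position from per-frame deltas: any small bias accumulates until something absolute pulls it back. The sketch below is illustrative only - the bias value and the ‘taped mark’ correction are invented, and real solvers are far more sophisticated:

    # Illustrative sketch only: why dead-reckoned inertial position drifts, and the
    # simplest kind of correction (snapping to a known absolute position now and then).
    def track(deltas, bias=0.002, absolute_fixes=None):
        """Integrate per-frame deltas; optionally snap to an absolute position on some frames."""
        absolute_fixes = absolute_fixes or {}
        position = 0.0
        path = []
        for frame, delta in enumerate(deltas):
            position += delta + bias              # the bias is the uncorrected drift
            if frame in absolute_fixes:
                position = absolute_fixes[frame]  # e.g. the actor hits a taped mark on stage
            path.append(position)
        return path

    if __name__ == "__main__":
        steps = [0.01] * 200                      # the performer walks 2 m in a straight line
        drifting = track(steps)
        corrected = track(steps, absolute_fixes={100: 1.01})
        print(f"drift after 200 frames: {drifting[-1] - 2.0:.3f} m")
        print(f"with one fix at frame 100: {corrected[-1] - 2.0:.3f} m")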

It was about creating a reliable workflow, so you could stream the motion data in, apply it to an avatar, have it piped out through 27 different projectors, and have it reliable enough to fit into that live setup. And it worked! It did a run at Stratford and a run at the Barbican, and it was solid.

Image: the Imaginarium office

FT: Are there any projects you’re particularly proud of? 

MB: The Tempest was a great one; I’m really pleased with that. We’ve also had two BBC programmes run recently. There’s a programme about Neanderthals, and another called The Ruins of Empires… that was quite cool because we created a lot of mo-cap-driven assets that were projected onto a screen, and [British rap artist] Akala interacted with them.

So it was a combination of traditional VFX, animation, mo-cap driven animation and a live performance. And it was all shot on our stage - even the live action elements.

FT: How has the range of production techniques and tools available to you changed over the past few years? 

MB: The improvements Vicon have made to their optical systems are about the speed of ROMs and features within the software. Unreal has got a lot more interesting with things like Sequencer, which lets us work in what feels like a much more traditional editing way.

But we still always have to re-create plugins and other glue to make all these systems talk to each other. We’re constantly seeing what the new feature set is, then writing specific code to make it fit our workflow or be more efficient. Over the past few days, the guys have been working on new Vcam features, so you can get realistic, really nice and responsive controls into the Vcam, like focus tracking that records straight into the engine.
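As a purely hypothetical sketch of what recording a focus pull straight into the engine amounts to, you can think of it as per-frame focus keys captured during the take and sampled back on playback. The FocusTake structure, frame numbers and distances below are invented:

    # Illustrative sketch only: recording a Vcam focus pull as per-frame keys and
    # sampling it back on playback. The structure and values are invented.
    class FocusTake:
        def __init__(self, fps=24):
            self.fps = fps
            self.keys = []                       # (frame, focus distance in metres)

        def record(self, frame, focus_distance_m):
            self.keys.append((frame, focus_distance_m))

        def sample(self, frame):
            """Return the most recently recorded focus value at or before this frame."""
            value = self.keys[0][1]
            for f, d in self.keys:
                if f <= frame:
                    value = d
            return value

    if __name__ == "__main__":
        take = FocusTake()
        for frame, distance in [(0, 4.0), (12, 3.0), (24, 1.5)]:   # a slow rack focus
            take.record(frame, distance)
        print(take.sample(18))   # -> 3.0, the value held between keys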

It’s a two-way street when it comes to companies like Epic: they’re trying to address new markets - broader markets for their Engine - and that’s throwing off opportunities for studios like us, who aren’t core to what Epic’s trying to do but can use some of that functionality in different ways on our stage.

Image: fight scene in 3D

FT: What advances would you like to see in mo-cap?

MB: Something that would be interesting to explore is more integration of different technologies. We’ve done some informal testing blending inertial suits with optical, and I’d like to play around with how you might do that on a live action set - a minimal mo-cap system working with inertial - so you get the best of both.
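One simple way to picture that best-of-both blend - not how Imaginarium or any particular solver actually does it - is a per-frame mix that leans on the optical position when markers are visible and falls back to the inertial estimate when they are occluded. The weights and sample data below are invented:

    # Illustrative sketch only: blending an optical position (absolute, but prone to
    # occlusion) with an inertial estimate (smooth, but drifting). Values are made up.
    def fuse(optical, inertial, optical_weight=0.8):
        """Blend the two estimates; trust the inertial data alone when markers are occluded."""
        fused = []
        for opt, ine in zip(optical, inertial):
            if opt is None:                       # marker occluded this frame
                fused.append(ine)
            else:
                fused.append(optical_weight * opt + (1.0 - optical_weight) * ine)
        return fused

    if __name__ == "__main__":
        optical_x = [0.00, 0.10, None, None, 0.40, 0.50]   # two frames of occlusion
        inertial_x = [0.01, 0.11, 0.22, 0.33, 0.43, 0.54]  # smooth but slightly off
        print([round(x, 3) for x in fuse(optical_x, inertial_x)])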

And then with Engine, it’s just that continuation of a broader feature set.

FT: What excites you most about your work?

MB: Getting into new platforms and trying to come up with new combinations that have never been attempted before. I love it when people come to us and say ‘We’re not even really sure it’s possible…’ and then we try and come up with an answer to make it possible. 

That’s the stuff that makes it most interesting: when we get really good new partners who are trying to break new ground, and we can apply our knowledge of, say, game engines, or of how you do things like digital alignment in a VR setting.

A lot of people coming from a VR background, thinking about how they’re going to create content for that kind of experience, don’t necessarily think about how you translate the real world to the virtual world and back again. 

That’s something we do every day on our stage. So it’s thinking, ‘If there’s going to be a tree over there, how are we going to represent that on our stage, so that all the camera angles and the lighting are going to be right?’ And that’s a natural process for us. It’s always fun to take things we’ve learnt from mo-cap to new platforms.
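Underneath that ‘tree over there’ question sits a calibration step: a rigid transform that maps stage coordinates into the virtual world so real marks and virtual objects line up. The sketch below is illustrative only, simplified to 2D with invented numbers:

    # Illustrative sketch only: lining up the real stage with the virtual world using a
    # single rotation-plus-translation, simplified to 2D. The numbers are invented.
    import math

    def stage_to_world(point, rotation_deg, offset):
        """Rotate a stage-floor point into the virtual world's frame, then translate it."""
        theta = math.radians(rotation_deg)
        x, y = point
        world_x = x * math.cos(theta) - y * math.sin(theta) + offset[0]
        world_y = x * math.sin(theta) + y * math.cos(theta) + offset[1]
        return (world_x, world_y)

    if __name__ == "__main__":
        taped_mark = (2.0, 1.0)                  # where the "tree" is marked on the stage floor
        print(stage_to_world(taped_mark, rotation_deg=90.0, offset=(10.0, 5.0)))
        # -> roughly (9.0, 7.0): the mark lands where the virtual tree stands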