VoluMagic: the creation and manipulation of volumetric datasets in Nuke
Foundry are currently involved in a number of exciting R&D projects, harnessing cutting-edge research in computer science to solve challenges facing the VFX and creative industries.
Alongside the SmartROTO initiative we looked at recently, our research team is exploring the potential for the manipulation of volumetric datasets in Nuke, with a view to creating immersive content easily and cost-effectively.
In this article, we’ll take a look at some of the aims of the VoluMagic project, as well as the challenges it’s seeking to investigate.
The VoluMagic research project… what is it?
Artists, designers, and other creators of digital content are well-versed in using compositing applications to create 2D images.
As the demand for immersive experiences increases, so too does the demand for immersive content, which is complex and time-consuming to create with the tools currently available.
That’s because the datasets involved are far larger, and you need a multitude of complex tools to clean up the dataset and make it usable.
Often, the high cost of creating immersive content means these projects are not viable for creatives.
The VoluMagic project is investigating the potential for content creators to manipulate volumetric datasets in the well-known compositing application they already use: Foundry’s Nuke.
The project aims to harness workflows that already exist, and to come up with a solution that doesn’t require any specialist knowledge of additional volumetric data tools.
It’s due to run for approximately 18 months, and will see Foundry work in partnership with Happy Finish - a post-production company that specialises in CGI production and immersive content experiences.
Solving compositing challenges with multi-view camera setups
For content creators, working with live-action volumetric datasets currently involves negotiating a number of obstacles.
The cameras that capture the data generally produce over-smoothed, noisy or incomplete results.
Capture and reconstruction systems tend to be proprietary, with artists given the end results without control of the process.
And even when users do have access to the original data, they’re faced with a vast number of images that are difficult to handle on current computers.
As Daniele Bernabei, Lead Research Engineer at Foundry, explains: “The long-term goal for this area of exploration, even beyond the VoluMagic project, is solving post-production challenges with multi-view camera setups.
If you have more than one camera on set (and big productions tend to have a lot), we want to investigate what kind of data you can automatically extract that will make your life easier when you’re working in post later on down the line.”
To facilitate the extraction of this data, VoluMagic will explore ways of reconstructing a scene’s geometry from the information captured by on-set cameras.
And it’s not just photogrammetry-style reconstruction that will be explored, Daniele says: “This data might be based on extra information you’re trying to combine together: so maybe lidar scans on the set, maybe data from NCAM, depth sensors.
It’s about bringing all the information together, delivering the artist tools to clean the data up, and then sending it back to the original hero camera”.
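As a simplified illustration of what combining multi-camera information involves at the lowest level, the classic midpoint method triangulates a 3D point from two camera rays. This is a minimal sketch, not VoluMagic’s actual reconstruction code, and the cameras and rays used here are purely illustrative:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def triangulate_midpoint(o1, d1, o2, d2):
    """Closest point between two camera rays (origin o, direction d).

    Solves the 2x2 least-squares system for the ray parameters t1, t2
    that minimise the gap between p1 = o1 + t1*d1 and p2 = o2 + t2*d2,
    then returns the midpoint of that gap.
    """
    b = sub(o2, o1)
    a = dot(d1, d2)
    denom = dot(d1, d1) * dot(d2, d2) - a * a  # zero only for parallel rays
    t1 = (dot(d2, d2) * dot(d1, b) - a * dot(d2, b)) / denom
    t2 = (a * dot(d1, b) - dot(d1, d1) * dot(d2, b)) / denom
    p1 = [o + t1 * d for o, d in zip(o1, d1)]
    p2 = [o + t2 * d for o, d in zip(o2, d2)]
    return [(x + y) / 2 for x, y in zip(p1, p2)]

# Two cameras one unit either side of the origin, both sighting a
# point five units in front of them:
point = triangulate_midpoint([-1.0, 0.0, 0.0], [1.0, 0.0, 5.0],
                             [1.0, 0.0, 0.0], [-1.0, 0.0, 5.0])
# point is approximately [0.0, 0.0, 5.0]
```

Real systems triangulate millions of such points from many calibrated cameras, which is where the noise, over-smoothing, and gaps mentioned above come from.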
Key to all of this, of course, is making the process easy for the artist and time-efficient: “The question is, how much work is needed on the side of the artists to make this data usable?” says Daniele.
“Because of course, it doesn't make sense if you spend hours preparing all this data and it only saves you a tiny amount of time later.
Can we automate a lot of this process: kick off the rendering process during the night, so in the morning you have a nice deep image?”
Making deep compositing the norm, not the exception
The team plans to address these questions by providing a collection of simple modules for constructing custom videogrammetry (3D data) pipelines, and enabling artists to manipulate the results easily.
Working within the familiar environment of the Nuke compositing system, these modules will be independent nodes, capable of ingesting the wide variety of data that Nuke supports and producing volumes that can be used for compositing within Nuke - or for export to game engines for AR, MR, and VR applications.
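To make the idea of “independent, connectable modules” concrete, here is a toy sketch of a node graph in code. None of these names come from VoluMagic or Nuke’s actual API; it only illustrates how independent processing stages can be wired into a custom pipeline:

```python
class Node:
    """Minimal stand-in for an independent pipeline node: wraps a
    processing function and can be connected to upstream nodes."""

    def __init__(self, fn):
        self.fn = fn
        self.inputs = []

    def connect(self, *nodes):
        """Wire upstream nodes into this one; returns self for chaining."""
        self.inputs = list(nodes)
        return self

    def evaluate(self):
        """Pull results from upstream nodes and apply this node's function."""
        return self.fn(*[n.evaluate() for n in self.inputs])

# A hypothetical pipeline: two camera sources feeding a fusion stage.
left = Node(lambda: ["cam_left_frames"])
right = Node(lambda: ["cam_right_frames"])
fused = Node(lambda a, b: a + b).connect(left, right)
# fused.evaluate() pulls from both sources and combines them
```

The appeal of this shape is that each stage is swappable and recombinable, so artists can assemble a pipeline for a given shoot rather than being locked into one proprietary reconstruction chain.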
The practical uses of this are multiple, but Daniele points to a couple of areas of particular interest: “One of the aims of this project is to see if you can turn standard images into deep images, so that deep compositing becomes the norm, and not the exception.”
“We want to give content creators the ability to perform all the operations that you can do with deep data, but on the images that you capture on set.
If the depth is good enough, you can already do depth grading or refocusing, but what's interesting for us is that if you have geometry of sufficient quality, you could potentially recreate and re-render your scene - or parts of it - so you could change the compositing at a deeper level.
So imagine you’re watching the video that you've shot, and there’s an object, say on a table. With traditional compositing, maybe you can remove it and move it laterally.
But what if you could turn it, rotate it? That’s what we’re aiming for: 3D compositing.”
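Conceptually, a deep pixel stores a list of samples at different depths instead of a single flat value, and merging becomes a depth-sorted “over” operation. A toy Python sketch of that idea (not Nuke’s actual deep API, and assuming premultiplied colour) might look like:

```python
def to_deep(rgba, depth):
    """Promote a flat RGBA pixel plus a depth value to a one-sample deep pixel."""
    return [(depth, rgba)]

def deep_merge(*deep_pixels):
    """Merge deep pixels: pool every sample and sort front-to-back by depth."""
    return sorted(s for px in deep_pixels for s in px)

def flatten(deep_pixel):
    """Collapse a sorted deep pixel to flat RGBA with the front-to-back
    'over' operator. Colour is assumed premultiplied by alpha."""
    out_r = out_g = out_b = out_a = 0.0
    for _, (r, g, b, a) in deep_pixel:  # samples sorted near-to-far
        k = 1.0 - out_a  # how much the background still shows through
        out_r += k * r
        out_g += k * g
        out_b += k * b
        out_a += k * a
    return (out_r, out_g, out_b, out_a)

# A half-transparent red sample in front of an opaque green one:
merged = deep_merge(to_deep((0.5, 0.0, 0.0, 0.5), 2.0),
                    to_deep((0.0, 1.0, 0.0, 1.0), 5.0))
# flatten(merged) -> (0.5, 0.5, 0.0, 1.0)
```

This is why per-pixel depth unlocks so much: once every pixel carries depth samples, inserting, removing, or regrading objects at a particular distance becomes a sort-and-composite, not a roto job.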
Journeying into the past, under the streets of London
At its heart then, the key innovation of the VoluMagic project is to re-conceptualize the volume reconstruction and editing process as an image editing one.
Since volumetric datasets are mostly constructed from images, and can be used to produce other image types, compositors (2D artists) should be best positioned to handle this data within Nuke.
However, existing videogrammetry tools are designed with 3D artists in mind, which adds considerable time to VFX pipelines. The solution proposed by the team will allow studios to reduce turnaround times and therefore cut costs.
Getting there will be no simple task however, as Daniele explains: “The algorithms are still not mature enough for producing the kind of production-ready quality that you need for reconstructing these things automatically.
They have a lot of problems, especially if you have reflections off very dark surfaces, or water, fog… hair is still a big problem.”
The remainder of the project will see the team working around these challenges, aiming to create artist-driven compositing tools that are effective despite these issues. For now, it’s a waiting game until the technology is mature enough to allow more automatic solutions.
The journey is likely to throw up unexpected hurdles to overcome - and will even take them deep under the streets of London.
“For the creative brief, we’re going to shoot at an old, disused London Underground station, and transform it from the present day to WWII”, Daniele explains.
“Then we’re going to shoot three musicians, and place them in the station to recreate that moment in WWII when musicians were playing in the tube to people sheltering there from air raids.
First we’ll do the location capture, where we’ll grab all the data in every possible format from the environment, and then we’ll capture the musicians, each of them individually.
Finally, we’ll bring it all together into a 3D experience that you can watch with an Oculus or Vive, immersing you in the underground station as if you’re there.”
After the shoot, the rest of the project will involve the team working on the captured datasets.
Daniele notes that while this research is highly exploratory, it could pave the way for new and innovative ways of creating immersive content further down the line: “There’s the potential that in the future, say 5 years from now, this will become really interesting for things like AR experiences. If you want to capture a character and place them in a real environment, you need the things we’re working on in order to do that.
With VoluMagic, you’ll also be able to export the geometry to Nuke.
And we’re trying to tie in traditional technologies that already exist in the software - things like our vector optical flow generation, Smart Vectors, which is at the core of NukeX”.
The VoluMagic project is still in its infancy, with over a year and a half left until completion.
Daniele and his team are excited about the discoveries to be made and insights to be gained as they get deeper into the work. Watch this space!
Want to hear more about the trends set to shape the VFX, VR and Design industries? Sign up to our monthly Trends newsletter below.