Virtual worlds and adaptive light-fields: an interview with Disney Research

Virtual reality (VR) storytelling holds huge potential, especially for the media and entertainment industries. But it’s not without its hurdles, as we have explored across a number of articles in recent months.

Foundry Trends spoke to Kenny Mitchell, a senior research scientist for Disney Research, about how the industry can overcome some of the key challenges, and the exciting work his team is doing in this field.

Foundry Trends (FT): What are the biggest technical challenges when it comes to creating stories in VR?

Kenny Mitchell (KM): “VR story creation is like combining regular movies and video games in a way that tries to surround and immerse you totally. There’s an expectation to be free to explore, interact with, and meaningfully impact things in virtual worlds, so the first challenge is how to visually present everything moving seamlessly in front of your eyes—without being fixed to a single 360-degree viewpoint.

“Fully detailed 360-degree video alone needs about 10 times the resolution of regular displays. Now add the ability to roam around and view these scenes from anywhere, with materials that change appearance depending on where you stand, due to reflections and light-scattering effects.

“Then, further, if your story can take different paths with myriad outcomes, that’s a whole lot of processing for graphics algorithms to chew on and optimise in order to achieve a real-time, full-motion light-field media experience.”

Virtual Reality interaction with Disney Research


FT: How can we as an industry overcome those challenges?

KM: “Fortunately, there are a lot of similarities in the natural world we are used to seeing around us. We can exploit the fact that things tend to group into common materials and shapes, for example, by sampling the virtual world in a sensibly reduced way—such as through adaptive sparse light-field probes and clever reconstructions for depth and colour panoramas. 
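The idea of sampling sparsely where a scene is smooth and densely where it varies can be sketched in miniature. The recursive subdivision below is a hypothetical one-dimensional illustration of adaptive sampling, not Disney Research's actual light-field probe placement:

```python
# A minimal sketch of adaptive sparse sampling: place more probes where the
# sampled signal varies rapidly, fewer where it is smooth. The scene function
# and threshold are illustrative, not a real light-field algorithm.

def place_probes(lo, hi, sample_fn, threshold=0.1, max_depth=6, depth=0):
    """Recursively subdivide an interval, adding probes only where needed."""
    mid = (lo + hi) / 2.0
    # If interpolating the endpoints already predicts the midpoint well,
    # the endpoints alone are enough for this interval.
    predicted = (sample_fn(lo) + sample_fn(hi)) / 2.0
    error = abs(sample_fn(mid) - predicted)
    if error <= threshold or depth >= max_depth:
        return [lo, hi]
    left = place_probes(lo, mid, sample_fn, threshold, max_depth, depth + 1)
    right = place_probes(mid, hi, sample_fn, threshold, max_depth, depth + 1)
    return left[:-1] + right  # merge, dropping the duplicated midpoint

# Smooth regions get few probes; the sharp change near x=2 attracts many.
probes = place_probes(0.0, 4.0, lambda x: 0.0 if x < 2 else 1.0)
```

The same subdivide-where-prediction-fails principle extends to higher dimensions, which is what makes sparse probe sets viable for roaming viewpoints.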

“We have also evolved to see and comprehend our largely consistent world within human limits of visual perception. We can’t see unlimited detail, for example. It’s hard to tell with the naked eye whether reflections are exactly true, we often don’t notice every detail in front of us – even right under our noses – and our peripheral vision is quite limited.

“So, with care, we can omit or reduce details wherever we know they won’t be noticed. It is still very much a challenge, though, to organise and structure these techniques into algorithms that take just a few milliseconds on modern computing devices.”
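One simple form of the perceptual shortcut Mitchell describes is eccentricity-based level of detail: because peripheral vision resolves far less than the fovea, geometry away from the gaze direction can be coarsened. The thresholds below are illustrative guesses, not measured perceptual limits or Disney's method:

```python
import math

# A toy sketch of perceptual detail reduction: pick a coarser level of
# detail (LOD) for objects far from the gaze direction, where the viewer
# cannot resolve fine detail anyway. Thresholds are hypothetical.

def lod_for_eccentricity(ecc_degrees):
    """Pick a level of detail (0 = full) from angular distance to gaze."""
    if ecc_degrees < 5:      # foveal region: full detail
        return 0
    if ecc_degrees < 20:     # near periphery: reduced detail
        return 1
    if ecc_degrees < 40:
        return 2
    return 3                 # far periphery: coarsest representation

def eccentricity(gaze_dir, object_dir):
    """Angle in degrees between normalised gaze and object directions."""
    dot = sum(g * o for g, o in zip(gaze_dir, object_dir))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

# An object straight ahead keeps full detail; one 45 degrees off to the
# side can be simplified without the viewer noticing.
ahead = lod_for_eccentricity(eccentricity((0, 0, 1), (0, 0, 1)))
aside = lod_for_eccentricity(eccentricity((0, 0, 1), (0.7071, 0, 0.7071)))
```

In a real renderer this selection would run per object per frame, driven by eye tracking where available.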

Disney method

FT: Where do you see VR storytelling heading next?

KM: “Immersive media is still in the early stages for storytelling but, even while these methods develop, there are four further upcoming avenues of particular excitement. 

“First, the visual gap between what we see in high-quality film VFX and what we have full control to impact in real-time video games is closing to the point of diminishing returns. That means directors can start to think about how audiences might experience and impact scenes in a story as though they were really there, without compromise.

“Second, new approaches to haptics will allow physical effects in the virtual environment to be simulated accurately and without constraints – and equally to be reflected back to the participant.

“Third, meeting people believably represented in these story spaces is key to opening the medium socially to the widest audiences. This is aided by methods that follow consistencies of human motion and appearance, and can employ prediction to counter latency of network communications over large distances. 
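The latency-countering prediction mentioned here is, in its simplest form, dead reckoning: extrapolating a remote participant's motion from the last received state so movement looks continuous despite network delay. This is a generic sketch of that idea, not Disney Research's specific method:

```python
# A minimal sketch of latency compensation by motion prediction (dead
# reckoning). The constant-velocity model is the simplest possible choice;
# real systems blend in corrections when fresh network updates arrive.

def predict_position(last_pos, last_vel, latency_s):
    """Extrapolate position assuming constant velocity over the latency window."""
    return tuple(p + v * latency_s for p, v in zip(last_pos, last_vel))

# Last update: at (1, 0, 2), moving 2 m/s along x; 100 ms of network latency.
predicted = predict_position((1.0, 0.0, 2.0), (2.0, 0.0, 0.0), 0.1)
# predicted is approximately (1.2, 0.0, 2.0)
```

Consistencies of human motion make richer models possible, for example extrapolating skeletal poses rather than a single point.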

“And, finally, the ability to meet and interact with artificially intelligent characters designed to guide and enhance the story adaptively according to our own traits, changing our understanding of the role stories play in our lives.”

FT: One of your recent projects at Disney Research, IRIDiuM+, reuses movie content directly for VR and games. What are the benefits of that approach?

KM: “IRIDiuM+ is a small step towards the full promise of truly deep media. We develop methods to take imagery from the latest movie productions and render frames that would normally take hours to compute in just milliseconds, without the extensive video game optimisation and artist-performed asset reduction that would otherwise be required.

“We focus much of our development on new methods for hardware video decoding and reconstruction, with Babis Koniaris and others, and we expect new accelerated video compression standards to lead to similar methods being embedded directly into silicon.

“We have been very lucky to corral the efforts of many budding young artists, led by Maggie Kosek, to create one-off branching light-field story experiences—providing a glimpse of the future of interactive entertainment. My research team and other scientists at Disney Research are combining our efforts strategically under Markus Gross, putting us on the cusp of a new art form.”

Want to find out more about the latest ways scientists are making VR more immersive? Check out our next article, Exploring infinite walking in virtual reality.