What they don't tell you about 360° VR production
July 9, 2016
To help you get your head around the jargon du jour, we’ve pulled together this handy guide to what all these acronyms actually mean, and why they are not all the same thing.
The Wikipedia view: Virtual reality (VR), which can be referred to as immersive multimedia or computer-simulated reality, replicates an environment that simulates a physical presence in places in the real world or an imagined world, allowing the user to interact in that world.
Foundry’s interpretation: Virtual reality is the umbrella term for all immersive experiences, which could be created using purely real-world content, purely synthetic content or a hybrid of both.
This is where the industry is getting excited right now. Content-viewing hardware, a.k.a. head-mounted displays (HMDs), ranges from Google Cardboard right up to HTC Vive. The market here is hot, hot, hot and the media is full of news about launches. Second only to excitement about headsets is excitement about cameras. Nokia OZO launched in December, GoPro has its Odyssey—a collaboration with Google Jump, Ricoh has Theta, and there’s also Bublcam and Giroptic.
The Wikipedia view: Immersive videos, more recently known as 360° videos or 360 degree videos, are video recordings of a real-world scene, where the view in every direction is recorded at the same time. During playback the viewer has control of the viewing direction.
Foundry’s interpretation: 360° video is an immersive experience using pre-filmed real-world content as the central media. 360° video is a version of VR created with only real-world content.
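To illustrate how that "viewer controls the viewing direction" playback works: most 360° footage is stored as a flat equirectangular frame, and the player maps the viewer's current viewing direction to a position in that frame. Here's a minimal sketch in Python (the frame dimensions and function name are our own, purely for illustration, and real players do this per-pixel on the GPU):

```python
import math

def view_direction_to_pixel(yaw, pitch, width=4096, height=2048):
    """Map a viewing direction to a pixel in an equirectangular frame.

    yaw:   rotation around the vertical axis in radians;
           0 = centre of the frame, positive to the right.
    pitch: elevation in radians; 0 = horizon, +pi/2 = straight up.
    """
    u = (yaw / (2 * math.pi) + 0.5) * width   # horizontal position
    v = (0.5 - pitch / math.pi) * height      # vertical position (top = up)
    return u, v

# Looking straight ahead lands in the centre of the frame.
print(view_direction_to_pixel(0.0, 0.0))  # (2048.0, 1024.0)
```

Because every direction is already in the frame, "interaction" in 360° video is limited to choosing where to look, which is exactly why some people argue it isn't "real VR".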
Here lies a lot of confusion, as the industry is still deliberating over definitions. The upshot of this debate is that some say 360° video is not the same as “real VR”, and that the two terms are not interchangeable.
Our view is that 360° video, as an immersive experience, is one type of VR that sits happily alongside non-real-world content for VR, which we’ll get onto now.
That brings us nicely to CG VR, which, as the name suggests, refers to VR content that is computer-generated (i.e. not real-world). Wikipedia doesn’t have a direct definition for CG VR so we’ll jump straight into our own view.
Foundry interpretation: CG VR is an immersive experience created entirely from computer-generated content. CG VR can be either pre-rendered and therefore not reactive—in this way it is very similar to 360° video—or rendered in real time using a games engine.
There is also a third type of VR, a hybrid between 360° video and CG, where an immersive experience is created using a blend of both content types. As in the film industry today, there’s no real name for this ‘third way’ of creating content, but audiences are already used to visuals being built from a combination of real-world and CG elements. Some of the most exciting VR content being created today sits in this third category.
As if it wasn’t all murky enough, beyond the “what is VR?” debate there is a whole conversation going on about AR (augmented reality) vs. MR (mixed reality).
For the most part, in the realm of the consumer, the term “mixed reality” seems to be fading out in favour of “augmented reality”. The focus on VR has meant the distinction between MR and AR hasn’t yet been clearly drawn, so the two terms are currently being used interchangeably; whenever that happens, one term inevitably wins out over the other. Right now, AR is winning.
However, there is a difference between the two, and we feel it’s worth addressing now, so here goes.
The Wikipedia view: Augmented reality (AR) is a live, direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data.
Foundry interpretation: Augmented reality is an overlay of content on the real world, but that content is not anchored to or part of it. The real-world content and the CG content are not able to respond to each other.
IKEA has developed a table as part of its concept kitchen that suggests recipes based on the ingredients placed on it, which is potentially a great example of AR working in the real world. Google Glass was Google’s first attempt to bring augmented reality to consumers, and we’d expect to see more of this in the future.
The Wikipedia view: Mixed reality (MR)—sometimes referred to as hybrid reality—is the merging of real and virtual worlds to produce new environments and visualisations where physical and digital objects co-exist and interact in real time.
Foundry’s interpretation: Mixed reality is an overlay of synthetic content on the real world that is anchored to and interacts with the real world—picture surgeons overlaying virtual ultrasound images on their patient while performing an operation, for example. The key characteristic of MR is that the synthetic content and the real-world content are able to react to each other in real time.
Hardware associated with mixed reality includes Microsoft’s HoloLens, which is set to be big in MR—although Microsoft has dodged the AR/MR debate by introducing yet another term: “holographic computing”. Microsoft has just announced a HoloLens emulator for developers so you can make applications for the new tech. Read more about that over on TechCrunch.
Of all the realities we’ve talked about in this article, mixed reality seems like the furthest from fruition. However, it’s not impossible to imagine a future where synthetic content will be able to react to and even interact with the real world in some way.