Know your FOVAS from your FIVR? What about foveated rendering and volumetric capture?
Foundry Trends’ VR Jargon Buster
Fear not, Foundry Trends is here to dissect 10 of the most confusing words, phrases, acronyms and initialisms that we encounter in the worlds of virtual, augmented and mixed reality.
Adaptively sampled distance fields (ADF)
Let’s start with a particularly complex one. Adaptively sampled distance fields - or ADFs - are a way of representing a shape in 2D or 3D. Importantly, they significantly improve and speed up the ability to manipulate that shape.
Artists can zoom in to levels of data or scale not previously possible, because the operations carried out on ADFs are computationally much more efficient. They're able to sculpt using the algebra built over the ADF, rather than pushing millions of polygons around.
ADFs enable extremely responsive drawing of CGI models while maintaining small file sizes.
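To make the idea concrete, here is a minimal Python sketch of adaptive sampling over a distance field. The shape, tolerance and grid parameters are illustrative assumptions, not Foundry's implementation: a cell of the field is subdivided only where interpolating the corner distances fails to approximate the true field, so detail concentrates near the shape's surface.

```python
import math

def circle_sdf(x, y, cx=0.5, cy=0.5, r=0.3):
    """Signed distance to a circle: negative inside, positive outside."""
    return math.hypot(x - cx, y - cy) - r

def adaptive_cells(sdf, x0, y0, size, tol=0.002, depth=0, max_depth=7):
    """Recursively subdivide a square cell while bilinear interpolation of
    the corner distances fails to approximate the true field at the centre."""
    corners = [sdf(x0, y0), sdf(x0 + size, y0),
               sdf(x0, y0 + size), sdf(x0 + size, y0 + size)]
    centre_true = sdf(x0 + size / 2, y0 + size / 2)
    centre_interp = sum(corners) / 4.0  # bilinear interpolation at the midpoint
    if depth >= max_depth or abs(centre_true - centre_interp) <= tol:
        return [(x0, y0, size)]         # this cell is already accurate enough
    half = size / 2
    cells = []
    for dx in (0, half):
        for dy in (0, half):
            cells += adaptive_cells(sdf, x0 + dx, y0 + dy, half,
                                    tol, depth + 1, max_depth)
    return cells

cells = adaptive_cells(circle_sdf, 0.0, 0.0, 1.0)
print(len(cells))  # far fewer cells than a uniform 128x128 grid
```

Flat regions of the field collapse into a handful of large cells, which is where the small file sizes and fast queries come from.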
Degrees of freedom (DOF)
Positional tracking in virtual reality (VR) - a system’s ability to mimic real headset movement in the virtual world - is limited by the ‘degrees of freedom’ (DOF) offered by the technology in use.
The human body is said to enjoy six DOF: three rotations (pitch, yaw and roll - movement from the neck) and three translations (forward/back, up/down and left/right - movement of the whole body).
The former can be handled by any headset on the market, with the likes of Google Cardboard and Samsung Gear VR offering three DOF. But the latter - with the full six DOF allowing people to feel true agency within an experience - remains the domain of the more advanced devices.
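The split between the two is easy to see in code. This is a hypothetical sketch (the `Pose` class and field names are inventions for illustration): a 3DOF headset reports rotation only, so the translation half of the pose is simply discarded.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    # Three rotations, in radians - movement from the neck
    pitch: float = 0.0
    yaw: float = 0.0
    roll: float = 0.0
    # Three translations, in metres - movement of the whole body
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

def clamp_to_3dof(pose: Pose) -> Pose:
    """A 3DOF headset tracks head rotation only: translation is discarded."""
    return Pose(pitch=pose.pitch, yaw=pose.yaw, roll=pose.roll)

full = Pose(pitch=0.1, yaw=1.2, roll=0.0, x=0.5, y=0.0, z=-0.3)
print(clamp_to_3dof(full))  # leaning or stepping (x, y, z) has no effect in 3DOF
```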
Field of view (FOV)
Put simply, ‘field of view’ is how much of a scene you can see at any one time. For the average, healthy human, horizontal FOV is somewhere between 200° and 220° - including our blurred peripheral vision.
Unfortunately, most headsets on the market right now offer a FOV around half as wide. It’s easy to see why this is a limiting factor - as we’ve explored previously - but it is being addressed by the likes of Starbreeze, whose incredible StarVR headset (while not yet commercially available) has a massive 210° FOV.
Field of view adaptive streaming (FOVAS)
One of the biggest challenges in VR is improving image quality while simultaneously reducing the bandwidth needed to stream those experiences to headsets.
Pixvana, the video creation and delivery software company, has a solution. By using ‘viewports’ - smaller areas of the full VR sphere that are rendered in high definition - Pixvana is able to save content creators money by streaming less data to places the viewer rarely looks.
The video is sharpest when the viewer looks straight ahead, and softer as they turn around. As their point of view moves, that area is instantly brought up to high quality - making the experience feel more immersive and life-like. They call this ‘field of view adaptive streaming’, or ‘FOVAS’.
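A minimal sketch of the viewport-selection logic might look like the following. The tier names and angular thresholds are assumptions for illustration, not Pixvana's actual values: each viewport is streamed at a quality tier chosen by how far it sits from where the viewer is looking.

```python
# Hypothetical quality tiers: (maximum angular distance in degrees, tier name)
TIERS = [(15.0, "high"), (60.0, "medium"), (180.0, "low")]

def angular_distance(yaw_a, yaw_b):
    """Smallest angle between two yaw headings, in degrees."""
    d = abs(yaw_a - yaw_b) % 360.0
    return min(d, 360.0 - d)

def tier_for_viewport(viewport_yaw, gaze_yaw):
    """Pick a streaming quality tier for a viewport given the viewer's gaze."""
    d = angular_distance(viewport_yaw, gaze_yaw)
    for limit, name in TIERS:
        if d <= limit:
            return name
    return "low"

# The viewport straight ahead streams in high quality; the one behind you doesn't.
print(tier_for_viewport(0.0, 10.0))    # high
print(tier_for_viewport(170.0, 10.0))  # low
```

As the viewer turns, re-running the selection promotes the viewport they now face to the high tier, which is the “instantly brought up to high quality” behaviour described above.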
Focal Surface Displays
The human eye effortlessly adapts to changes in perspective - focus on an object in the foreground and the background will blur, and vice versa. But this ‘focal blur’ is very difficult to replicate in VR.
Focal surface displays attempt to do just that, adding a ‘spatially programmable’ element between the usual headset eyepiece and the display - bending the headset’s focus around 3D objects to better mimic the way we see the real world.
This means sharper images and a more natural viewing experience. This video from Oculus Research, the company that pioneers focal surface displays, shows how it all works.
Foveated rendering
The fovea centralis - from which foveated rendering takes its name - is the part of the human eye that makes sharp, central (or ‘foveal’) vision possible.
It’s a tiny part of our entire FOV, but it’s the most important. You wouldn’t be able to read this article without it, because our peripheral vision only really registers colour and movement, with very little fidelity. Try focusing on the white space to the left of this paragraph and seeing if you can still make sense of any of the words in it.
Foveated rendering is a shortcut of sorts. It mimics the way humans focus on and process the world around them, using gaze detection to tell the VR application where the user is looking and therefore which area of the view to construct in high definition.
Just as the human eye only focuses on a small window of the world around us at any one time, foveated rendering draws the rest of our FOV at lower resolutions. As well as saving an enormous amount of pixel data, the technology better replicates how we truly see the world, creating a deeper and more immersive experience.
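The falloff can be sketched in a few lines of Python. The pixel radii and the quarter-resolution floor here are illustrative assumptions, not values from any shipping renderer: resolution is full at the gaze point and degrades smoothly towards the periphery.

```python
def shading_rate(px, py, gaze_x, gaze_y, fovea_radius=100, falloff=400):
    """Fraction of full resolution to render at this pixel: 1.0 inside the
    foveal region, falling off linearly to a floor in the periphery."""
    d = ((px - gaze_x) ** 2 + (py - gaze_y) ** 2) ** 0.5
    if d <= fovea_radius:
        return 1.0
    return max(0.25, 1.0 - (d - fovea_radius) / falloff)

# Full detail where the user looks; quarter resolution at the screen edges.
print(shading_rate(960, 540, 960, 540))  # 1.0
print(shading_rate(0, 0, 960, 540))      # 0.25
```

Because pixel cost grows with the square of resolution, even a modest peripheral reduction like this saves a large share of the total shading work.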
Full immersion virtual reality (FIVR)
At some point in the future, we may be able to create virtual worlds so realistic and so immersive that they are indistinguishable from real life. This concept is known as ‘full immersion virtual reality’, or ‘FIVR’ for short.
Foundry Trends has looked previously at the barriers to true immersion, and how a greater sense of ‘presence’ can be achieved. If FIVR is to ever become a reality, there are still a few hurdles to overcome.
Haptics
Sight and sound are present in most experiences, but some companies are beginning to bring a third sense - touch - into the equation.
The word ‘haptics’ simply relates to any interaction that involves touch. In technology, it’s predominantly been used to provide feedback through vibrations - for everything from rumble packs for games consoles to touchscreens on smartphones.
Currently, haptics in VR involve external devices - gloves, vests and joysticks, for example, provide feedback in line with the experience. A vest might vibrate to simulate being hit by a bullet, or joysticks might vibrate as kick-back from a gunshot.
Disney has attempted to move away from wearables by using bursts of air, while the University of Bristol has invented a way to use ultrasonic waves to create 3D shapes you can see and feel.
Social VR
Adding a social element to VR can move it from a very private to a very shared experience. Facebook Spaces is possibly the most well-known example - a virtual environment for friends to explore together.
Foundry Trends also spoke to Ben Grossmann, co-founder and CEO of Magnopus, about Coco in VR - an experience based on the Disney Pixar film. “We also discovered that, when you put people into a story without a focus, they feel lonely,” he said.
“By adding a ‘social’ element to Coco, allowing friends to explore together, people started creating a story for themselves. The more people you add, the more fun you have, and this is something we’re now thinking about adding to every experience we do.”
Volumetric capture
Within any experience, the viewer tends to move with the scene - from a fixed, predetermined viewpoint. To give more freedom and agency in VR, however, volumetric capture has been vital.
Effectively, volumetric capture allows the viewer to move around inside a scene - looking under or behind objects, or walking right up to a character, creating a true sense of presence.
Real-time rendering makes this relatively straightforward in a fully-CGI experience but, for anything involving live action, it’s made possible thanks to multi-camera arrays capturing the entire scene or subject from every conceivable angle.
Love all things virtual? Check out our interview with Academy Award-winning visual effects supervisor and virtual reality (VR) director, Ben Grossmann.