How AI is Pushing Virtual Reality To The Next Level

Although some claim VR is just an overhyped trend on its way to dying out, the VR market is expected to grow to USD 62.1 billion by 2027, according to a report by Grand View Research, Inc. Its applications extend beyond the entertainment sector into healthcare and education, allowing people to practise difficult procedures in a safe, consequence-free environment with little compromise to realism.

Both virtual reality and artificial intelligence have grown from humble beginnings into exciting technologies with seemingly limitless applications. While AI may not immediately come to mind when you think of virtual reality, VR relies more and more on the power and capabilities afforded by artificial intelligence; the immersive worlds we create and experience would not be possible without it.

But how exactly is AI used to make virtual reality feel real?

Computer Graphics 

For VR environments to be fully immersive, we must strive towards graphics that mirror reality. Unfortunately, such high-end graphics are difficult to attain in real time without running into framerate problems and stuttering gameplay, which break immersion and can cause motion sickness in players. As a consequence, most VR experiences use simplified graphics to keep the experience as smooth as possible.

Fortunately, computer graphics titan Nvidia has been utilising deep learning techniques to make such dreams possible. "Deep Learning Super Sampling" (DLSS) is a technology developed by Nvidia that generates high-resolution images from a low-resolution input, making high-quality VR graphics less costly than before. By developing AI-based technologies like this and integrating them into our VR experiences, realistic, immersive graphics can be produced faster and at lower cost.
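The core idea can be sketched in a few lines: render cheaply at low resolution, then upscale to display resolution. This toy example uses a naive nearest-neighbour upscaler as a stand-in; the real DLSS replaces that step with a trained neural network that reconstructs fine detail the cheap render never produced.

```python
# Toy illustration of the idea behind super sampling: render at low
# resolution, then upscale before display. Nvidia's DLSS replaces the
# naive upscaler below with a trained neural network.

def render_low_res(width, height):
    """Stand-in for the game engine: produce a cheap low-res frame."""
    return [[(x + y) % 256 for x in range(width)] for y in range(height)]

def naive_upscale(frame, factor):
    """Nearest-neighbour upscale -- the baseline a learned model improves on."""
    return [
        [frame[y // factor][x // factor]
         for x in range(len(frame[0]) * factor)]
        for y in range(len(frame) * factor)
    ]

low = render_low_res(4, 4)        # cheap 4x4 render
high = naive_upscale(low, 2)      # displayed at 8x8
print(len(high), len(high[0]))    # 8 8
```

Because the engine only pays for the low-resolution render, the saved GPU time can go towards hitting the high, stable framerates VR demands.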

More Intelligent Training

VR has proven to be a great tool for training people to perform high-risk tasks without the risk, for example by simulating operations for trainee doctors and engine-failure scenarios for pilots. By integrating AI into VR simulations, training could be analysed more intelligently and respond better to the user's actions, giving a more customised experience with accurate feedback.
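As a hypothetical sketch of what that feedback loop might look like, the snippet below compares a trainee's recorded actions against an expected procedure and reports where they diverged. The step names and structure are invented for illustration, not taken from any real training product.

```python
# Hypothetical sketch of AI-assisted training feedback: compare the
# trainee's recorded actions against the expected procedure and report
# where they diverged. Step names are invented for illustration.

EXPECTED_STEPS = ["sterilise", "incision", "suture", "dress wound"]

def evaluate_attempt(performed_steps):
    """Return per-step feedback for one simulated procedure attempt."""
    feedback = []
    for i, expected in enumerate(EXPECTED_STEPS):
        if i < len(performed_steps) and performed_steps[i] == expected:
            feedback.append(f"step {i + 1}: ok ({expected})")
        else:
            actual = performed_steps[i] if i < len(performed_steps) else "missing"
            feedback.append(f"step {i + 1}: expected {expected}, got {actual}")
    return feedback

for line in evaluate_attempt(["sterilise", "suture"]):
    print(line)
```

A real system would track far richer signals (hand position, timing, tool pressure), but the principle is the same: the simulation knows what should happen and can tell the trainee exactly where their attempt deviated.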

Headset Tracking using Computer Vision

The Oculus Quest, a VR headset made by Facebook-owned company Oculus, employs AI-enabled computer vision to track the user's movements and ensure that they never wander out of a set playspace without being warned by the system. The exterior cameras on the headset are used to build a virtual map of the room, and an AI algorithm detects recognisable points in that map to pinpoint the headset's exact position in real time. This means that any movement you make is translated accurately into the game, and if you get too close to a real-life obstacle, the headset gives you a suitable warning.
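The final warning step is simple once the hard computer-vision problem of estimating the headset's position is solved. The sketch below assumes a rectangular playspace and a fixed warning margin (both invented values) and just checks the estimated position against the boundary.

```python
# Simplified sketch of the playspace check described above: given the
# headset position estimated from tracked feature points, warn when the
# user drifts too close to the boundary. The playspace rectangle and
# margin are assumed values for illustration.

PLAYSPACE = (0.0, 0.0, 3.0, 2.5)   # x_min, y_min, x_max, y_max (metres)
WARN_MARGIN = 0.3                   # warn within 30 cm of an edge

def boundary_warning(x, y):
    x_min, y_min, x_max, y_max = PLAYSPACE
    near_edge = (
        x - x_min < WARN_MARGIN or x_max - x < WARN_MARGIN or
        y - y_min < WARN_MARGIN or y_max - y < WARN_MARGIN
    )
    return "show boundary grid" if near_edge else "clear"

print(boundary_warning(1.5, 1.25))  # centre of the room -> "clear"
print(boundary_warning(2.9, 1.25))  # 10 cm from a wall  -> "show boundary grid"
```

The AI's real job is upstream of this check: fusing camera images and motion data into a position estimate accurate enough, in real time, for a check like this to be trustworthy.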


In-Game AI

While VR games place you directly in the body of the main character, it's often jarring to talk to NPCs (Non-Playable Characters) when the voice you hear is not your own. One solution to this problem would be to use speech recognition and generation AI to enable realistic interactions with characters, in which they intelligently respond to what we say and do.
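In the simplest form, the pipeline is: recognise the player's speech, map it to an intent, and pick a response. The sketch below fakes the first step with plain text and uses a hypothetical keyword table; a production system would use real speech recognition and a language model instead.

```python
# Hypothetical sketch of an AI-driven NPC exchange: recognised speech is
# mapped to an intent, and the NPC picks a matching response. The
# keyword table is invented; a real system would use proper speech
# recognition and a language model.

INTENTS = {
    "quest": "I have a task for you, traveller.",
    "trade": "Let me show you my wares.",
    "farewell": "Safe travels.",
}

def npc_reply(player_utterance):
    text = player_utterance.lower()
    for keyword, reply in INTENTS.items():
        if keyword in text:
            return reply
    return "I'm not sure I follow."

print(npc_reply("Do you have a quest for me?"))
```

Even this crude intent-matching changes the dynamic: the player speaks in their own voice and the world answers, rather than selecting lines from a dialogue menu.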

Another way we can overcome our physical limitations in VR is by using AI to intuitively generate game environments based on the dimensions of the user's playing space. A shining example of this is the VR game "Tea for God", created by the independent developer Void Room. The genius of this game's procedural generation allows the player to walk anywhere within its boundaries without ever walking into their walls, as the turns and sizes of the game's rooms and corridors are designed to suit your unique playing space.
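A loose sketch of the underlying idea (not Void Room's actual algorithm): treat the physical room as a grid and walk corridor segments through it, turning whenever the next step would leave the room, so the generated path never asks the player to step outside their real walls.

```python
import random

# Loose sketch of playspace-aware procedural layout (not Void Room's
# actual algorithm): walk corridor segments through a grid sized to the
# player's room, turning whenever the next step would leave it.

ROOM_W, ROOM_H = 4, 3   # playspace measured in one-step grid cells
DIRECTIONS = [(1, 0), (0, 1), (-1, 0), (0, -1)]

def generate_corridor(steps, seed=0):
    rng = random.Random(seed)
    x, y, d = 0, 0, 0
    path = [(x, y)]
    for _ in range(steps):
        if rng.random() < 0.3:                 # occasional turn for variety
            d = (d + rng.choice([1, 3])) % 4
        dx, dy = DIRECTIONS[d]
        # Keep turning until the next cell stays inside the physical room.
        while not (0 <= x + dx < ROOM_W and 0 <= y + dy < ROOM_H):
            d = (d + 1) % 4
            dx, dy = DIRECTIONS[d]
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

path = generate_corridor(10)
assert all(0 <= px < ROOM_W and 0 <= py < ROOM_H for px, py in path)
print(len(path))
```

The real game layers far more on top (doors, lifts, impossible-space overlaps), but the constraint is the same: the virtual layout is bent to fit the physical room, rather than the player being asked to fit the layout.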

Thumbnail credit: VR Vision