
AR PLANET PROTOTYPE

A Unity AR proof of concept that uses Google's open-source ARCore Depth API to generate depth data from your phone's camera and convert it into an interactable collider.

Download the APK here:

[QR code for the APK download: AREnvironmentQR.png]

This is achieved by firing an invisible projectile directly from the camera; the projectile destroys itself and spawns a setpiece when it hits a point in the surrounding environment.

Each time one of these projectiles is fired, the app updates the environment mesh from the camera feed to ensure each setpiece is placed accurately. Once a setpiece is placed, it is anchored to that position and won't move from that point.
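
Here's a minimal sketch of that placement step, assuming an AR Foundation setup where the depth data has already been baked into a physics collider. The class, field names and prefab references are placeholders rather than the project's actual scripts, and the "invisible projectile" is approximated with a simple raycast:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Hypothetical sketch, not the project's actual script: approximate the
// invisible projectile with a raycast from the camera against the
// depth-based collider, then spawn and anchor a setpiece at the hit point.
public class SetpiecePlacer : MonoBehaviour
{
    [SerializeField] Camera arCamera;            // the AR camera
    [SerializeField] GameObject setpiecePrefab;  // grass / flower / vine prefab
    [SerializeField] float maxDistance = 5f;

    public void Fire()
    {
        Ray ray = new Ray(arCamera.transform.position, arCamera.transform.forward);

        // Assumes the depth collider is the only physics geometry in the scene.
        if (Physics.Raycast(ray, out RaycastHit hit, maxDistance))
        {
            // Align the setpiece with the surface it hit.
            GameObject setpiece = Instantiate(
                setpiecePrefab,
                hit.point,
                Quaternion.FromToRotation(Vector3.up, hit.normal));

            // Adding an ARAnchor pins the object to this real-world position,
            // so it stays put even if tracking drifts or the camera relocalises.
            setpiece.AddComponent<ARAnchor>();
        }
    }
}
```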

As you can see here, the prototype is able to distinguish where on the depth collider a setpiece is being placed, and create grass on the ground, flowers on walls and vines hanging from the ceiling!
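
One possible way to make that distinction is to check the orientation of the surface normal at the hit point. This helper is a hypothetical sketch under that assumption, not the project's actual logic:

```csharp
using UnityEngine;

// Hypothetical sketch: pick a setpiece prefab based on how the hit surface
// is oriented, using the collider's surface normal at the hit point.
public static class SurfaceClassifier
{
    public static GameObject Choose(Vector3 surfaceNormal,
                                    GameObject grassPrefab,
                                    GameObject flowerPrefab,
                                    GameObject vinePrefab)
    {
        float up = Vector3.Dot(surfaceNormal.normalized, Vector3.up);

        if (up > 0.7f)  return grassPrefab;   // roughly horizontal, facing up: floor
        if (up < -0.7f) return vinePrefab;    // facing down: ceiling
        return flowerPrefab;                  // roughly vertical: wall
    }
}
```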

There are multiple techniques in play to simulate real-world effects on the virtual objects:

Collider Mesh:

Previous iterations of this concept used ARCore's built-in plane generator in Unity, which scanned the area and placed artificial planes on walls and floors; however, this proved far too inaccurate. I've since integrated Google's Depth technology to read a far more accurate source of environment data.
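
As a rough illustration of what a depth-based collider involves, the sketch below unprojects a coarse grid of depth samples into world space and stitches them into a mesh collider. SampleDepthMeters is an assumed helper standing in for reading the ARCore depth image; the real project relies on Google's Depth API scripts, so this only shows the mesh-building idea:

```csharp
using UnityEngine;

// Minimal sketch, assuming a sampleDepthMeters(u, v) callback that returns the
// depth at a viewport coordinate from the ARCore depth image.
public class DepthColliderBuilder : MonoBehaviour
{
    [SerializeField] Camera arCamera;
    [SerializeField] MeshCollider depthCollider;
    const int GridSize = 32;  // a coarse grid keeps each rebuild cheap

    public void Rebuild(System.Func<float, float, float> sampleDepthMeters)
    {
        var vertices = new Vector3[GridSize * GridSize];
        for (int y = 0; y < GridSize; y++)
        {
            for (int x = 0; x < GridSize; x++)
            {
                float u = x / (float)(GridSize - 1);
                float v = y / (float)(GridSize - 1);

                // Unproject each depth sample back into world space.
                float depth = sampleDepthMeters(u, v);
                vertices[y * GridSize + x] =
                    arCamera.ViewportToWorldPoint(new Vector3(u, v, depth));
            }
        }

        // Stitch the grid into two triangles per cell.
        var triangles = new int[(GridSize - 1) * (GridSize - 1) * 6];
        int t = 0;
        for (int y = 0; y < GridSize - 1; y++)
        {
            for (int x = 0; x < GridSize - 1; x++)
            {
                int i = y * GridSize + x;
                triangles[t++] = i;     triangles[t++] = i + GridSize; triangles[t++] = i + 1;
                triangles[t++] = i + 1; triangles[t++] = i + GridSize; triangles[t++] = i + GridSize + 1;
            }
        }

        var mesh = new Mesh { vertices = vertices, triangles = triangles };
        mesh.RecalculateNormals();
        depthCollider.sharedMesh = mesh;  // physics raycasts can now hit the depth surface
    }
}
```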

Anchor Points:

Setting environment setpieces as anchors allows their positions to stay constant, and lets them act as guiding points for other AR objects, increasing positional accuracy whenever the player moves, the objects go off-screen, or the camera loses tracking.
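
Using an anchored setpiece as a guide for other objects can be as simple as parenting them under it, since children of an anchored GameObject inherit its pose corrections. A hypothetical helper, assuming AR Foundation's ARAnchor component:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Hypothetical sketch: attach a secondary AR object to an already-anchored
// setpiece so it follows the anchor's pose corrections as tracking updates.
public static class AnchorAttachment
{
    public static GameObject AttachTo(ARAnchor anchor, GameObject prefab, Vector3 localOffset)
    {
        // Children of an anchored GameObject move with it whenever ARCore
        // refines the anchor's real-world pose, keeping them aligned even
        // after the camera looks away or briefly loses tracking.
        GameObject obj = Object.Instantiate(prefab, anchor.transform);
        obj.transform.localPosition = localOffset;
        return obj;
    }
}
```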

Light Estimation:

By taking the current brightness of the camera feed and estimating the direction of light from shadow positioning, the AR engine can simulate the real world's current light intensity and direction with Unity lighting, and apply it to any object loaded in the scene. This technique makes all simulated objects feel much more grounded in the real world.
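
A sketch of how that estimate can be fed into Unity lighting, assuming AR Foundation's light estimation data (field names follow ARLightEstimationData; availability depends on the light estimation mode enabled on the ARCameraManager):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Hedged sketch: drive a shared directional light from AR Foundation's
// per-frame light estimation so virtual setpieces match the real lighting.
public class EnvironmentLight : MonoBehaviour
{
    [SerializeField] ARCameraManager cameraManager;
    [SerializeField] Light mainLight;  // directional light shared by all setpieces

    void OnEnable()  => cameraManager.frameReceived += OnFrameReceived;
    void OnDisable() => cameraManager.frameReceived -= OnFrameReceived;

    void OnFrameReceived(ARCameraFrameEventArgs args)
    {
        var estimate = args.lightEstimation;

        if (estimate.averageMainLightBrightness.HasValue)
            mainLight.intensity = estimate.averageMainLightBrightness.Value;

        if (estimate.mainLightColor.HasValue)
            mainLight.color = estimate.mainLightColor.Value;

        if (estimate.mainLightDirection.HasValue)
            mainLight.transform.rotation =
                Quaternion.LookRotation(estimate.mainLightDirection.Value);
    }
}
```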

Project Reflection:

What went well:
Overall, this project has been a great success! The hardest part was integrating and working with Google's scripts for fairly abstract and complex concepts like reading the camera feed and interpreting depth data from it. However, I'm now using these features successfully and have learned a lot about working with open-source software and interpreting other people's code.

What could've gone better:
While this project is still under construction, too much time was previously spent on iterating on and integrating Google's AR depth detection. This took around a week's worth of work to accomplish, which could've been spent improving the player's interaction with the environment, such as animating grass sway as the player walks over it.
