An Introduction to ARCore
The first phase of Augmented Reality (AR) on Android smartphones began in 2014 with a project named Tango, an augmented reality computing platform developed by Google. Tango ran only on dedicated Tango devices sold by Google and was available to a limited number of developers, so adoption remained limited. In 2017, Google announced the second phase of AR for Android devices: ARCore, a software development kit for building augmented reality experiences.
The immediate advantage of ARCore is that it works without any additional hardware, so it can scale across the Android ecosystem. ARCore is already supported on a range of Android and iOS devices, and the list is expanding rapidly with new releases.
How does it work?
ARCore provides SDKs for Java/OpenGL (Android), Unity, Unreal, and the Web, each exposing native APIs for the essential AR features:
- Motion tracking: By combining feature points observed through the phone’s camera with IMU sensor data, ARCore determines both the position and orientation (pose) of the phone as it moves, so virtual objects remain accurately placed.
- Environmental understanding: ARCore detects horizontal surfaces, such as floors and tables, using the same feature points it uses for motion tracking. These are the surfaces AR objects are most commonly placed on.
- Light estimation: ARCore observes the ambient light in the environment, making it possible to light virtual objects in ways that match their surroundings and render them more realistically.
With these three key technologies you can build entirely new AR experiences or enhance existing apps with AR features.
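As a toy illustration of light estimation, the following plain-Java sketch (no ARCore dependency; `applyAmbient` is a hypothetical helper, not ARCore API) scales a virtual object's base color by the ambient intensity ARCore reports, which is roughly what a renderer's shader would do with the value from `LightEstimate.getPixelIntensity()`:

```java
// Toy sketch of light estimation: scale a virtual object's base RGB color by
// the ambient pixel intensity (roughly 1.0 in average lighting, lower in a
// dim room). "applyAmbient" is a hypothetical helper, not part of ARCore.
public class AmbientLight {
    static float[] applyAmbient(float[] rgb, float pixelIntensity) {
        float[] lit = new float[3];
        for (int i = 0; i < 3; i++) {
            // Clamp so a very bright scene never pushes a channel past 1.0.
            lit[i] = Math.min(rgb[i] * pixelIntensity, 1.0f);
        }
        return lit;
    }
}
```

In a dim room (intensity around 0.4), a white virtual object is rendered mid-grey, so it blends with its darker surroundings instead of glowing unnaturally.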
Basic concepts of ARCore
Understanding the following fundamental concepts of ARCore is crucial to getting a jump-start on building experiences that make virtual content appear to rest on real surfaces or stay attached to real-world locations:
Anchor: describes a fixed location and orientation in the real world. To keep it fixed in physical space, ARCore updates the numerical description of this position as its understanding of the space improves.
HitResult: defines an intersection between a ray and estimated real-world geometry.
Plane: describes the current best knowledge of a real-world planar surface.
PlaneHitResult: defines an intersection between a ray and plane geometry.
PointCloud: contains a set of observed 3D points and confidence values.
Pose: represents an immutable rigid transformation from one coordinate frame to another. It always describes the transformation from an object’s local coordinate frame to the world coordinate frame. That is, Poses from ARCore APIs can be thought of as equivalent to OpenGL model matrices. The transformation is defined using a quaternion rotation about the origin, followed by a translation.
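To make the Pose definition concrete, here is a plain-Java sketch (no ARCore dependency) of the transform a Pose represents: rotate a point by a unit quaternion about the origin, then translate. The quaternion component order (x, y, z, w) matches the order ARCore's `Pose` uses.

```java
// Sketch of Pose semantics: a quaternion rotation about the origin,
// followed by a translation. Plain Java; no ARCore classes required.
public class PoseMath {
    // Rotate point p by unit quaternion q = (x, y, z, w), then translate by t.
    static float[] transformPoint(float[] q, float[] t, float[] p) {
        float x = q[0], y = q[1], z = q[2], w = q[3];
        float px = p[0], py = p[1], pz = p[2];
        // Quaternion rotation written out as the equivalent rotation matrix.
        float rx = (1 - 2 * (y * y + z * z)) * px + 2 * (x * y - w * z) * py + 2 * (x * z + w * y) * pz;
        float ry = 2 * (x * y + w * z) * px + (1 - 2 * (x * x + z * z)) * py + 2 * (y * z - w * x) * pz;
        float rz = 2 * (x * z - w * y) * px + 2 * (y * z + w * x) * py + (1 - 2 * (x * x + y * y)) * pz;
        // Translation is applied after the rotation, as the Pose definition states.
        return new float[]{ rx + t[0], ry + t[1], rz + t[2] };
    }
}
```

For example, a 90-degree rotation about the z-axis maps a point one unit along x to one unit along y, and the translation then shifts that result in world space.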
Session: manages the AR system state and handles the session lifecycle. This is the main entry point to the ARCore API, allowing the user to create a session, configure it, start/stop it, and most importantly, receive frames that allow access to camera image and device pose.
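Putting these concepts together, a per-frame interaction with the API looks roughly like the sketch below. This is an illustration rather than a complete app: camera permissions, error handling, and OpenGL setup are omitted, and the class and method names follow the shipping ARCore Android SDK, which differs in places from the preview API (for example, hit tests return HitResult rather than PlaneHitResult).

```java
// Sketch of a per-frame ARCore interaction inside an Android render loop.
Session session = new Session(/* Android Context */ activity);
session.resume();

// Each render frame: update the session to get the latest camera image,
// device pose, and tracking state.
Frame frame = session.update();

// Motion tracking: the camera's pose in the world coordinate frame.
Pose cameraPose = frame.getCamera().getPose();

// Environmental understanding: hit-test a screen tap against detected geometry.
for (HitResult hit : frame.hitTest(tapX, tapY)) {
    // Anchor the virtual object so ARCore keeps it fixed in the real world.
    Anchor anchor = hit.createAnchor();
}

// Light estimation: read the ambient intensity for shading virtual objects.
float intensity = frame.getLightEstimate().getPixelIntensity();
```

The pattern to note is that everything flows through the Session: each call to `update()` yields a Frame, and all pose, hit-test, and lighting data is read from that Frame during the rendering pass in which it was obtained.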
World coordinate frame
As ARCore’s understanding of the environment changes, it adjusts its model of the world to keep things consistent. When this happens, the numerical location (coordinates) of the camera and anchors can change significantly to maintain appropriate relative positions of the physical locations they represent.
These changes mean that every frame should be considered to be in a completely unique world coordinate frame. The numerical coordinates of anchors and the camera should never be used outside the rendering frame during which they were retrieved. If a position needs to be considered beyond the scope of a single rendering frame, either an anchor should be created or a position relative to a nearby existing anchor should be used.
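The recommended pattern can be illustrated with a translation-only toy example in plain Java (hypothetical helper names, no ARCore dependency): store a position as an offset from a nearby anchor, and recompute world coordinates each frame from the anchor's current position, so the stored position survives ARCore's world-frame adjustments.

```java
// Toy sketch (translation-only; a real app would compose full Poses):
// keep positions relative to an anchor instead of caching world coordinates.
public class AnchorRelative {
    // Captured once, when the object is placed: its offset from the anchor.
    static float[] offsetFromAnchor(float[] objectWorld, float[] anchorWorld) {
        return new float[]{ objectWorld[0] - anchorWorld[0],
                            objectWorld[1] - anchorWorld[1],
                            objectWorld[2] - anchorWorld[2] };
    }

    // Recomputed every frame: the object's world position, derived from the
    // anchor's *current* (possibly adjusted) world position plus the offset.
    static float[] worldFromAnchor(float[] anchorWorld, float[] offset) {
        return new float[]{ anchorWorld[0] + offset[0],
                            anchorWorld[1] + offset[1],
                            anchorWorld[2] + offset[2] };
    }
}
```

If ARCore shifts the anchor's coordinates between frames, the recomputed world position shifts with it, so the virtual object stays attached to the same physical spot.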
ARCore is an emerging technology bound to open up an entirely new array of apps, games, and consumer experiences, as well as new lines of business. Let’s get busy!