At the moment, AR experiences indeed perform better in apps, mostly because they can use more smartphone features directly. Recently, however, we’ve seen more and more development efforts to make AR natively supported on the web.
What does it take to enjoy AR in a browser? First of all, a WebXR-compatible browser. On Android, your device should support ARCore and run a recent version of Chrome or Firefox. iOS users can rely on AR Quick Look, a Safari feature that brings ARKit-powered model viewing to the web. WebXR itself is a group of standards used together to render 3D scenes, whether for presenting fully virtual worlds or for adding digital imagery to the real world.
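Before offering an AR experience, a page can check whether the browser actually supports it. A minimal sketch, assuming a browser exposing the WebXR Device API; `canOfferAR` and `showArButton` are our own illustrative names, while `navigator.xr` and `isSessionSupported` are the real API entry points:

```javascript
// Sketch: detect WebXR AR support before offering a "View in AR" button.
// The XR system is passed in as a parameter so the logic is easy to test.
function canOfferAR(xrSystem) {
  if (!xrSystem) return Promise.resolve(false); // WebXR not exposed at all
  // Resolves to true only if an immersive AR session can be started.
  return xrSystem.isSessionSupported('immersive-ar');
}

// In a real page you would call it with the browser's XR system:
// canOfferAR(navigator.xr).then((ok) => { if (ok) showArButton(); });
```

On browsers without WebXR, `navigator.xr` is simply undefined, which is why the helper guards for it instead of assuming the API exists.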
That’s all for technicalities; let’s now take a look at three selected use cases.
E-commerce is one of the domains where the benefit of webAR is directly visible: trying out items in your own home or surroundings makes the shopping experience much more immersive. AR allows you to place the product in its real dimensions, as if it were already in your space, bridging an important gap between the online and offline buying experience.
Take a bigger piece of furniture, for example. It’s not always easy to imagine whether it would fit nicely or how well it would match the space. To test an AR solution for this particular problem, we created a proof of concept for our customer Extremis. They design beautiful, high-quality outdoor pieces like tables, chairs and eco-friendly wooden benches. We picked their iconic Gargantua table and benches combo and implemented it in our virtual try-out tool.
Curious to try it yourself? If you have an ARCore or ARKit enabled device, you can view the product in AR by clicking on the icon in the right bottom corner (Firefox or Chrome browser on Android, Safari on iPhone).
To make it work on iOS, you need the USDZ format created by Apple and Pixar Animation Studios. There are more options for Android; in this case we used .glb (binary glTF).
Google developed &lt;model-viewer&gt;, a web component with AR support. In the example below you can see how we provided one model for Android and one for iOS. Definitely check out https://modelviewer.dev/ for more interesting functionality.
<!-- The component ships as a module script (CDN path from the model-viewer docs). -->
<script type="module" src="https://unpkg.com/@google/model-viewer/dist/model-viewer.min.js"></script>

<!-- src: GLB model for Android/desktop; ios-src: USDZ fallback for AR Quick Look.
     The `ar` attribute adds the AR button on supported devices. -->
<model-viewer src="gargantua.glb" ios-src="gargantua.usdz" alt="Gargantua"
              autoplay shadow-intensity="1" auto-rotate camera-controls ar>
</model-viewer>
Another interesting use case is AR implemented directly in Google search results. At the moment, when you search for a product, you get to see a picture or a video of it. Now imagine the possibility to place it anywhere in your space at the touch of a button, being able to directly view it in a context relevant to you, play with it, rotate it or even try out different colours. Directly from the search engine, no third-party apps involved.
Face filters are probably one of the most widespread forms of AR today. They’re mostly available through social apps like Instagram, Facebook Messenger or Snapchat, but it seems like they’re going to go beyond these platforms. The MediaPipe and TensorFlow.js teams (part of Google Research) have recently released a TensorFlow.js model that allows you to get a full face mesh (as shown in the gif below) in the browser. Such a mesh can be further used to create face filters. All it takes is a single camera input, without the need for a depth sensor. This geometry locates features such as the eyes, nose and lips, including details such as lip contours and the facial silhouette.
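As a rough sketch of how such a filter could consume the mesh: the `@tensorflow-models/facemesh` package returns predictions whose `scaledMesh` holds the 3D landmarks, and a filter needs some anchor point derived from them. `detectFace` and `anchorForFilter` below are our own illustrative helpers, not part of the library:

```javascript
// Sketch: turn face-mesh predictions into anchor points for a filter.
// `model` is a loaded facemesh model (facemesh.load() in the browser);
// it is passed in as a parameter so the logic can be tested with a mock.
async function detectFace(model, videoElement) {
  // Each prediction holds the face landmarks as [x, y, z] points in `scaledMesh`.
  const predictions = await model.estimateFaces(videoElement);
  return predictions.map((p) => anchorForFilter(p.scaledMesh));
}

// Average a set of landmarks to get a stable anchor point for, say, glasses.
function anchorForFilter(mesh) {
  const sum = mesh.reduce(
    (acc, pt) => [acc[0] + pt[0], acc[1] + pt[1], acc[2] + pt[2]],
    [0, 0, 0]
  );
  return sum.map((v) => v / mesh.length);
}
```

In a real page the model would run per video frame, and the anchor would drive the position of a 2D overlay or a 3D object rendered on top of the camera feed.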
When it comes to native AR apps, we’ve clearly evolved from basic, marker-based AR experiences to more powerful AR features. Right now, developers are working hard to introduce these for browsers, too. So, what’s next? Let’s take a look at some of the challenges and new features to expect in WebAR.
A key challenge in implementing augmented reality is ray casting: a method for placing objects in a real-world view, which requires calculating the intersection between the pointer ray and a surface in the real world. That intersection is called a ‘hit’. Determining whether a ‘hit’ has occurred is called a ‘hit test’.
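The underlying geometry can be sketched in a few lines. This is only the idealized math for a single flat plane; the real WebXR hit-test module does the equivalent against surfaces the device has actually detected, and all names here are our own:

```javascript
// Sketch: intersect a pointer ray with a plane (the math behind a hit test).
// rayOrigin/rayDir and planePoint/planeNormal are [x, y, z] arrays.
function hitTest(rayOrigin, rayDir, planePoint, planeNormal) {
  const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
  const denom = dot(rayDir, planeNormal);
  if (Math.abs(denom) < 1e-6) return null; // ray parallel to plane: no hit
  const diff = planePoint.map((p, i) => p - rayOrigin[i]);
  const t = dot(diff, planeNormal) / denom; // distance along the ray
  if (t < 0) return null; // plane is behind the viewer
  return rayOrigin.map((o, i) => o + t * rayDir[i]); // the 'hit' point
}

// A ray pointing straight down from 1m up hits the floor plane at the origin:
// hitTest([0, 1, 0], [0, -1, 0], [0, 0, 0], [0, 1, 0]) → [0, 0, 0]
```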
As of Chrome 82, hit testing is available by default on Android, without the need to adjust a Chrome flag.
This example has been made with A-Frame and works in Chrome (v82+) and in the WebXR Viewer app on iPhone.
This feature allows web applications to retrieve data about planes (flat surfaces) present in the user’s environment and use this information to create an accurate, immersive virtual experience thanks to better mapping and basic occlusion.
For this demo to work, you need the "WebXR Incubations" flag enabled in chrome://flags.
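A minimal sketch of consuming plane data, assuming the experimental WebXR plane-detection module is enabled: the spec draft exposes detected planes on each frame as `frame.detectedPlanes`, each with an `orientation` and a `polygon` outlining the surface. `summarizePlanes` is our own helper name:

```javascript
// Sketch: read detected planes from a WebXR frame (experimental API).
// The frame is passed in as a parameter so the logic can be tested with a mock.
function summarizePlanes(frame) {
  const planes = [];
  if (!frame.detectedPlanes) return planes; // feature not enabled or supported
  frame.detectedPlanes.forEach((plane) => {
    planes.push({
      orientation: plane.orientation, // 'horizontal' or 'vertical'
      corners: plane.polygon.length,  // points outlining the detected surface
    });
  });
  return planes;
}

// In a session's requestAnimationFrame callback you would call
// summarizePlanes(xrFrame) and update your scene's floor/wall meshes.
```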
Plane detection is only a part of smart AR: the device has to actually understand the environment by recognizing its own position and perceiving depth. Based on this knowledge, it can use occlusion to hide parts of an object, so it doesn’t look ‘pasted’ on the screen. 6D.ai and Google are among the companies already working to solve this issue. Let's hope Google implements it in the browser, too!
Ideally, AR objects should blend into the real world as naturally as possible. Correct lighting can immensely increase the realism. Information about the lighting of the surroundings can be used for rendering virtual objects; this way, they can be lit under the same conditions as the scene they're placed in. The result? The placed objects feel more realistic and the entire experience becomes more immersive.
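A rough sketch of what this looks like with the experimental WebXR light-estimation module: the frame can return an estimate containing a primary light direction and intensity, which you then copy onto a light in your 3D scene. `applyLightEstimate` is our own helper, and `sceneLight` is a hypothetical stand-in for something like a Three.js directional light:

```javascript
// Sketch: apply a WebXR light estimate to a scene light (experimental API).
// frame and lightProbe come from the XR session; both are parameters here
// so the logic can be tested with mocks.
function applyLightEstimate(frame, lightProbe, sceneLight) {
  const estimate = frame.getLightEstimate(lightProbe);
  if (!estimate) return false; // no estimate available yet this frame
  const rgb = estimate.primaryLightIntensity; // intensity of the main light
  sceneLight.intensity = Math.max(rgb.x, rgb.y, rgb.z);
  sceneLight.direction = estimate.primaryLightDirection;
  return true;
}
```

Called once per frame, this keeps virtual objects lit consistently with the room, so a lamp turning on in the real world also brightens the placed model.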
Thanks to Cloud Anchors, your app can allow users to add virtual objects to an AR scene. Multiple users can then simultaneously view and interact with these objects from different positions in a shared physical space. Persistent anchors, in particular, have the ability to last over an extended period of time.
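Cloud Anchors themselves are currently part of ARCore's native SDKs, but the WebXR anchors module already covers the local half of the idea: pinning a placed object to a real-world position so it stays put across frames. A minimal sketch, with `placeAnchored` and `onAnchor` as our own illustrative names:

```javascript
// Sketch: create a WebXR anchor from a hit-test result so a placed object
// stays fixed to the real surface it was placed on (local anchors only;
// sharing anchors between users is an ARCore SDK feature today).
async function placeAnchored(hitTestResult, onAnchor) {
  const anchor = await hitTestResult.createAnchor(); // pin to the real world
  onAnchor(anchor); // e.g. attach your 3D model to this anchor's space
  return anchor;
}
```

Each frame, the anchor's pose is re-queried so the attached model tracks the real surface even as the device's understanding of the room improves.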
This elevates AR from a feature or gimmick that you really need to be interested in having, to something available at the touch of a button, embedded in the existing platforms you’re used to visiting. Another rich and immersive medium of visual expression is getting democratized.
A-Frame is an emerging technology from Mozilla, which allows you to create 3D scenes and virtual reality experiences with just a few HTML tags. It’s built on top of WebGL, Three.js and Custom Elements, a part of the Web Components standard.
AR.js is a lightweight library for augmented reality on the web, coming with features like image tracking, location-based AR and marker tracking.