WebAR: Augmented Reality for the people

Today, most AR experiences live in dedicated apps. But let’s be honest - people are not eager to keep downloading separate phone or tablet applications for single purposes. The key to popularizing AR technology is to make it available in the places everyone already knows and uses. This is where WebAR comes in - augmented reality natively supported on the web, accessible through your regular browser. In this article, we explain WebAR technology and present it through different use cases, including an e-commerce tool we built for our client Extremis.

At the moment, AR experiences do perform better in apps, mostly because apps can access more smartphone features directly. Recently, however, we’ve seen more and more development efforts to bring native AR support to the web.


What does it take to enjoy AR in a browser? First of all, a WebXR-compatible browser. On Android, your device should be ARCore-compatible and run a recent version of Chrome or Firefox. iOS users can rely on AR Quick Look, a Safari feature that lets web pages hand 3D models off to ARKit. WebXR itself is a group of standards used together to render 3D scenes, whether for presenting virtual worlds or for adding digital imagery to the real world.
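If you want to check support programmatically, the WebXR Device API exposes a simple capability check. A minimal sketch (error handling omitted):

```html
<script>
  // Minimal sketch: ask the WebXR Device API whether this browser/device
  // can run an immersive AR session.
  if (navigator.xr) {
    navigator.xr.isSessionSupported('immersive-ar').then((supported) => {
      console.log(supported ? 'WebXR AR is supported' : 'WebXR AR is not supported');
    });
  } else {
    console.log('WebXR is not available in this browser');
  }
</script>
```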

That’s all for technicalities! Let’s take a look at three selected use cases:

 

Use case 1: augmented e-commerce


E-commerce is one of the domains where the benefit of WebAR is directly visible: trying out items in your own home or surroundings makes the shopping experience much more immersive. AR lets you place the product at its real dimensions, as if it were already in your space - something that bridges an important gap between the online and offline buying experience.

Take a bigger piece of furniture as an example. It’s not always easy to imagine whether it would fit nicely or how well it would match the space. To test an AR solution for this particular problem, we created a proof of concept for our customer Extremis. They design beautiful, high-quality outdoor pieces like tables, chairs and eco-friendly wooden benches. We picked their iconic Gargantua table-and-benches combo and implemented it in our virtual try-out tool.

Curious to try it yourself? If you have an ARCore- or ARKit-enabled device, you can view the product in AR by tapping the icon in the bottom right corner (Firefox or Chrome on Android, Safari on iPhone).

How does it work?

To make it work on iOS, you need to use the USDZ format created by Apple and Pixar Animation Studios. There are more options for Android, but in this case we used .glb, the binary form of the glTF format.

Google developed model-viewer, a web component with AR support. In the example below you can see how we provide one model for Android and one for iOS. Definitely check out modelviewer.dev for more interesting functionality.
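A simplified version of the markup looks like this (the model file names are placeholders):

```html
<!-- Load the <model-viewer> web component. -->
<script type="module" src="https://unpkg.com/@google/model-viewer/dist/model-viewer.min.js"></script>

<!-- 'src' points to the glTF/GLB model used on Android and desktop,
     'ios-src' to the USDZ model handed off to AR Quick Look on iOS.
     The 'ar' attribute adds the AR button. -->
<model-viewer
  src="gargantua.glb"
  ios-src="gargantua.usdz"
  alt="Extremis Gargantua table and benches"
  ar
  camera-controls
  auto-rotate>
</model-viewer>
```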

Use case 2: AR in search results


Another interesting use case is AR implemented directly in Google search results. At the moment, when you search for a product, you get to see a picture or a video of it. Now imagine being able to place it anywhere in your space at the touch of a button: viewing it directly in a context relevant to you, playing with it, rotating it or even trying out different colours. Straight from the search engine, no third-party apps involved.


Use case 3: Face filters


Face filters are probably one of the most widespread forms of AR today. They’re mostly available through social apps like Instagram, Facebook Messenger or Snapchat, but it looks like they’re going to go beyond these platforms. The MediaPipe and TensorFlow.js teams (Google Research) have recently released a TensorFlow.js model that gives you a full face mesh (as shown in the gif below) right in the browser. Such a mesh can then be used to create face filters. All it takes is a single camera input - no depth sensor needed. The resulting geometry locates features such as the eyes, nose and lips, including details such as lip contours and the facial silhouette.
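To give a rough idea of what this looks like in code, here is a minimal sketch using the facemesh model for TensorFlow.js. The webcam element and package setup are assumptions, and rendering of an actual filter is left out:

```js
// Minimal sketch: detect face landmarks in the browser with the MediaPipe
// facemesh model for TensorFlow.js. Assumes @tensorflow/tfjs and
// @tensorflow-models/facemesh are installed, and that a <video id="webcam">
// element is already streaming the front camera.
import '@tensorflow/tfjs-backend-webgl';
import * as facemesh from '@tensorflow-models/facemesh';

async function run(video) {
  const model = await facemesh.load();

  async function frame() {
    const faces = await model.estimateFaces(video);
    if (faces.length > 0) {
      // Each prediction contains a dense 3D mesh of facial landmarks
      // (scaledMesh) that a face filter can be anchored onto.
      console.log(`${faces[0].scaledMesh.length} landmarks detected`);
    }
    requestAnimationFrame(frame);
  }
  frame();
}

run(document.getElementById('webcam'));
```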

WebAR: what to expect in the near future?


When it comes to native AR apps, we’ve clearly evolved from basic, marker-based AR experiences to more powerful AR features. Right now, developers are working hard to bring these to the browser, too. So, what’s next? Let’s take a look at some of the challenges and new features to expect in WebAR.

Hit testing


A key challenge in implementing augmented reality is ray casting - a method for placing objects in a real-world view that requires calculating the intersection between the pointer ray and a surface in the real world. That intersection is called a ‘hit’. Determining whether a ‘hit’ has occurred is called a ‘hit test’.

As of Chrome 82, hit testing is available by default on Android, without the need to adjust a Chrome flag.
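In code, the WebXR Hit Test API looks roughly like this - a sketch only, since the session has to be started from a user gesture and rendering is omitted:

```js
// Minimal sketch of the WebXR Hit Test API. Must be triggered by a user
// gesture (e.g. a button click); rendering and error handling are omitted.
async function startHitTesting() {
  const session = await navigator.xr.requestSession('immersive-ar', {
    requiredFeatures: ['hit-test'],
  });

  const viewerSpace = await session.requestReferenceSpace('viewer');
  const localSpace = await session.requestReferenceSpace('local');
  // Cast a ray from the centre of the viewer's screen on every frame.
  const hitTestSource = await session.requestHitTestSource({ space: viewerSpace });

  session.requestAnimationFrame(function onFrame(time, frame) {
    const hits = frame.getHitTestResults(hitTestSource);
    if (hits.length > 0) {
      // The pose describes where the ray hit a real-world surface -
      // for example, where to place a virtual piece of furniture.
      const pose = hits[0].getPose(localSpace);
      console.log(pose.transform.position);
    }
    session.requestAnimationFrame(onFrame);
  });
}
```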

Plane detection


This feature allows web applications to retrieve data about planes (flat surfaces) in the user’s environment and use that information to create an accurate, immersive virtual experience thanks to better mapping or basic occlusion.

https://storage.googleapis.com/chromium-webxr-test/r695783/proposals/phone-ar-plane-detection-anchors.html

For this demo to work, you need 'WebXR Incubations' enabled in chrome://flags.
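The proposed API surface looks roughly like the sketch below. This is based on the WebXR plane detection proposal, and the exact shape may still change while it is incubating:

```js
// Rough sketch of the proposed WebXR plane detection feature, as exposed
// behind the 'WebXR Incubations' flag. The exact API shape may still change.
async function startPlaneDetection() {
  const session = await navigator.xr.requestSession('immersive-ar', {
    requiredFeatures: ['plane-detection'],
  });

  session.requestAnimationFrame(function onFrame(time, frame) {
    // detectedPlanes is a set of XRPlane objects describing flat surfaces
    // (floors, tables, walls) found in the user's environment.
    for (const plane of frame.detectedPlanes || []) {
      console.log(plane.orientation, plane.polygon.length, 'polygon vertices');
    }
    session.requestAnimationFrame(onFrame);
  });
}
```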

Depth API


Plane detection is only part of smart AR - the device has to actually understand the environment by recognizing its own position and perceiving depth. Based on this knowledge, it can use occlusion to hide parts of an object, so it doesn’t look 'pasted' onto the screen. 6D.ai and Google are among the companies already working to solve this. Let's hope Google implements it in the browser, too!

Light estimation


Ideally, AR objects should blend into the real world as naturally as possible, and correct lighting can immensely increase the realism. Information about the lighting of the surroundings can be used when rendering virtual objects - this way, they are lit under the same conditions as the scene they're placed in. The result? The placed objects feel more realistic and the entire experience becomes more immersive.
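The corresponding WebXR lighting estimation proposal works roughly as in the sketch below; availability and the exact API shape depend on the browser:

```js
// Rough sketch of the WebXR lighting estimation proposal; availability and
// exact API shape depend on the browser.
async function startLightEstimation() {
  const session = await navigator.xr.requestSession('immersive-ar', {
    requiredFeatures: ['light-estimation'],
  });
  const lightProbe = await session.requestLightProbe();

  session.requestAnimationFrame(function onFrame(time, frame) {
    const estimate = frame.getLightEstimate(lightProbe);
    if (estimate) {
      // Feed the estimated light direction/intensity into the renderer so
      // virtual objects are lit like the real scene around them.
      console.log(estimate.primaryLightDirection, estimate.primaryLightIntensity);
    }
    session.requestAnimationFrame(onFrame);
  });
}
```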

Cloud (persistent) anchors


Thanks to Cloud Anchors, your app can let users add virtual objects to an AR scene. Multiple users can then simultaneously view and interact with these objects from different positions in a shared physical space. Persistent anchors, in particular, can last over an extended period of time.

The future is web? WebAR technology is undoubtedly the next big thing. It makes AR easily accessible in the browser and eliminates the need to download a dedicated application.

This elevates AR from a feature or gimmick you really need to be interested in to something available at the touch of a button, embedded in the existing platforms you’re used to visiting. Another rich and immersive medium of visual expression is getting democratized.

Getting started with web frameworks

If you want to start with web based augmented reality, these are some good frameworks:

  • A-Frame is an emerging technology from Mozilla which allows you to create 3D scenes and virtual reality experiences with just a few HTML tags (see the snippet after this list). It’s built on top of WebGL, Three.js and Custom Elements, part of the Web Components standard.
  • AR.js is a lightweight library for Augmented Reality on the Web, coming with features like Image Tracking, Location based AR and Marker tracking.
  • 8th Wall Web is built entirely with standards-compliant JavaScript and WebGL. It’s a complete implementation of 8th Wall’s Simultaneous Localization and Mapping (SLAM) engine, hyper-optimized for real-time AR in mobile browsers. Features include 6-degrees-of-freedom tracking, surface estimation, lighting, world points and hit tests.
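To give an idea of how little markup A-Frame needs, here is its classic ‘hello world’ scene (the pinned version number is just an example):

```html
<!-- Minimal A-Frame scene: a 3D box, sphere and plane declared with HTML tags. -->
<html>
  <head>
    <script src="https://aframe.io/releases/1.0.4/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```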

Want to take your WebAR to the next level? Don't forget we are here to help you out!