December 16, 2017

What we learned after playing around with ARKit for a week

Development

When Apple announced ARKit earlier this year, the writing was on the wall that AR might be on the verge of blowing up into the mainstream. We put the software to the test to see whether it really lives up to the hype.

When Apple announced ARKit at WWDC 2017, it was clear that AR might be on the verge of blowing up into the mainstream. Unlike the HoloLens, Apple’s solution was going to be a lot more affordable (weird), since it can function with just a single camera and the gyroscopes, accelerometers & other sensors already present in most modern iOS devices.

So when I saw the ARKit Plugin for Unity, I knew I could port our HoloLens portfolio app over to iOS with only minor changes. That’s exactly what I did, and here are some of my findings along the way:

[Image: the ARKit plugin on a desktop screen]

Comparing the two platforms

Apple wouldn’t be Apple if they didn’t add some nifty features, such as automatic light estimation: ARKit analyses real-world lighting conditions and replicates them inside your 3D engine, so things like shadows & colour temperature match.
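
For the curious: I did our port through the Unity plugin, but in native ARKit this feature surfaces as a per-frame light estimate. A minimal sketch, assuming an ARSCNView and a SceneKit ambient light that are set up elsewhere in the app:

```swift
import ARKit
import SceneKit

// Minimal sketch of ARKit light estimation (iOS 11 APIs).
// `sceneView` and `ambientLight` are assumed to exist elsewhere.
func startSession(on sceneView: ARSCNView) {
    let config = ARWorldTrackingConfiguration()
    config.isLightEstimationEnabled = true   // on by default; shown for clarity
    sceneView.session.run(config)
}

func updateLighting(from sceneView: ARSCNView, on ambientLight: SCNLight) {
    guard let estimate = sceneView.session.currentFrame?.lightEstimate else { return }
    // Roughly 1000 lumens in neutral lighting; lower in dim rooms.
    ambientLight.intensity = estimate.ambientIntensity
    // Colour temperature in Kelvin (6500 K is neutral white).
    ambientLight.temperature = estimate.ambientColorTemperature
}
```

Feeding those two values into your scene’s lights every frame is what keeps rendered shadows & colour temperature in step with the real room.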

And since ARKit was built for flat touchscreens instead of an immersive head-mounted display, the interaction paradigm had to change, but thankfully a lot of patterns overlap.

As with the HoloLens, there’s a scanning phase (although a much quicker one) and a spatial-understanding system for finding flat surfaces. One major limitation of ARKit at the moment, however, is that it only detects horizontal surfaces (like tables or floors) automatically.
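
In API terms, that limitation is explicit: plane detection has only a horizontal option, and detected surfaces arrive as ARPlaneAnchor objects. A minimal native sketch (our app goes through the Unity plugin, which wraps the same calls):

```swift
import ARKit

// Sketch: watching ARKit's scanning phase report flat surfaces.
class PlaneWatcher: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = .horizontal   // the only option ARKit offers right now
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            // `extent` is the estimated size in metres; it keeps growing
            // as ARKit refines its understanding of the surface.
            print("Found horizontal plane: \(plane.extent.x) x \(plane.extent.z) m")
        }
    }
}
```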

[Image: ARKit logo versus HoloLens]

Unlike on the HoloLens, users don’t have to precisely aim a reticle to click; they can simply tap objects anywhere on screen. Apple lacks a system for always-on voice recognition (although I do sometimes wonder how much Siri is listening), but the immediacy & speed of the existing touch interface makes that a minor loss.
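
Here’s roughly what that tap-to-place interaction looks like in native ARKit (again a sketch; names like TapToPlaceViewController are mine): hit-test the 2D touch point against the detected planes and drop an anchor at the result.

```swift
import ARKit
import UIKit

// Sketch of tap-to-place: no reticle, just a touch anywhere on screen.
class TapToPlaceViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!   // assumed wired up in a storyboard

    override func viewDidLoad() {
        super.viewDidLoad()
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        sceneView.addGestureRecognizer(tap)
    }

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)
        // Hit-test the touch against detected planes; the first hit is the nearest.
        guard let hit = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first
            else { return }
        // Adding an anchor at the hit's world transform "places" the hologram there.
        sceneView.session.add(anchor: ARAnchor(transform: hit.worldTransform))
    }
}
```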

There are also APIs for saving & loading world anchors (i.e. the positions of previously placed holograms can be recalled), but in practice, the mobile context of iOS devices and the speed of the spatial recognition make that feature less necessary than it is on the HoloLens. I found it quicker and more convenient to just let users place holograms wherever they want each time they launch the app, rather than building UI for recalling (and correcting or resetting) previously stored locations.

Now, where the HoloLens is best suited to roughly human-sized holograms placed at medium distances, in my experience ARKit is better at simulating smaller holograms at closer range (the 2D screen makes the stereo-convergence problem simply go away). It’s more like those marker-based AR experiences we could already build years ago (in Flash!), where 3D objects pop out of magazine ads, than a platform for truly immersive experiences.

But of course, this is all based on my experience with my crusty iPhone 6S (the oldest iOS device that can run ARKit), so who knows where Apple will take this tech once the iPhone X’s hardware advances trickle down to its more plebeian products.
