Unlike on the HoloLens, users don’t have to precisely aim a reticle to click; they can simply tap objects anywhere on screen. Apple lacks a system for always-on voice recognition (although I do sometimes wonder how much Siri is listening), but the immediacy & speed of the existing touch interface makes that a minor loss.
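For illustration, here’s a minimal sketch of what that tap-driven interaction looks like in ARKit (assuming an `ARSCNView` outlet named `sceneView`; actually rendering content for new anchors would happen in the usual `ARSCNViewDelegate` callbacks):

```swift
import ARKit
import SceneKit
import UIKit

class ViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // A tap anywhere on screen selects or places content; no head-aimed reticle needed.
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        sceneView.addGestureRecognizer(tap)
    }

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)

        // First, check whether the tap hit an existing virtual object...
        if let hit = sceneView.hitTest(point, options: nil).first {
            // ...and treat that as a "click" on the hologram (here: a little pulse).
            hit.node.runAction(.sequence([.scale(to: 1.2, duration: 0.1),
                                          .scale(to: 1.0, duration: 0.1)]))
            return
        }

        // Otherwise, cast the tap against detected planes and drop a new anchor there.
        if let result = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first {
            let anchor = ARAnchor(transform: result.worldTransform)
            sceneView.session.add(anchor: anchor)
        }
    }
}
```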
There are also APIs for saving & loading World Anchors (i.e. positions of previously placed holograms can be recalled), but in practice, the mobile context of iOS devices and the speed of the spatial recognition make that feature less necessary than it is on the HoloLens. I found it quicker and more convenient to just have the user place holograms wherever they want, every time they launch the app, rather than building UI for recalling (and correcting or resetting) previously stored locations.
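If you did want that kind of persistence on iOS, ARKit’s `ARWorldMap` API (added in iOS 12, so after the version I tested) is the rough equivalent. A minimal sketch, assuming an `ARSCNView` named `sceneView` and a file URL `mapURL` of my own invention:

```swift
import ARKit

// Saving: snapshot the current session's world map (including its anchors) to disk.
func saveWorldMap(from sceneView: ARSCNView, to mapURL: URL) {
    sceneView.session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else { return }
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                        requiringSecureCoding: true) {
            try? data.write(to: mapURL)
        }
    }
}

// Loading: relaunch the session with the stored map, so previously placed anchors
// come back once ARKit relocalizes against the saved feature points.
func restoreWorldMap(into sceneView: ARSCNView, from mapURL: URL) throws {
    let data = try Data(contentsOf: mapURL)
    guard let map = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                           from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```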
Now, whereas the HoloLens is best suited to roughly human-sized holograms placed at medium distances, in my experience ARKit is better for simulating smaller-scale holograms at closer ranges (the 2D screen makes the stereo-convergence problem just go away). It’s more akin to those marker-based AR experiences we could already make years ago (*in Flash!*), where 3D objects pop out of magazine ads, than a platform for truly immersive experiences.
But of course, this is all based on my experience with my crusty iPhone 6S (the oldest iOS device that can run ARKit), so who knows where Apple will take this tech once the iPhone X’s hardware advances trickle down to its more plebeian products.