Simple Steps to Virtual Object Interaction Using ARKit

Overview

ARKit is essentially a bridge between the real world and virtual objects: it lets real and virtual content interact with each other. This demo application shows how to place a virtual object and how to interact with it using gestures and hit testing.

Prerequisites

  • Xcode 9.3
  • iOS 11.3
  • Device with A9 processor

Project Setup

Open Xcode and create a new project. Choose “Augmented Reality App” and fill the required details.


Apple provides several options for Content Technology: SceneKit, SpriteKit, and Metal. Here we will choose SceneKit. If you want to place a 3D model, Xcode needs the model file in a SceneKit-supported format (.scn) or in .dae.

ARKit is a session-based framework. The session contains a scene that renders virtual objects in the real world. For that, ARKit needs to use the iOS device's camera, so you have to add this key to your Info.plist file:

Privacy – Camera Usage Description.
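In the raw Info.plist XML, the entry looks something like this (the description string below is just an example; write one that fits your app):

```xml
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to display augmented reality content.</string>
```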


Next, set up a couple of IBOutlets: an ARSCNView (sceneView) and a UILabel (infoLabel) that informs the user about AR session states and node updates.
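A minimal sketch of the view controller with those outlets (the `sceneView` outlet comes from the Augmented Reality App template; `infoLabel` is the label named in this article and must be connected in the storyboard):

```swift
import UIKit
import ARKit

class ViewController: UIViewController {

    // Scene view that renders the AR content (from the AR template)
    @IBOutlet var sceneView: ARSCNView!

    // Label used to tell the user about session state and node updates
    @IBOutlet weak var infoLabel: UILabel!
}
```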


For debugging purposes you can set sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints] to see how ARKit detects surfaces. When you run the app, you should see a lot of yellow dots in the scene. These are feature points, which help ARKit estimate properties such as the orientation and position of physical objects in the environment. The more feature points in an area, the better the chance that ARKit can determine and track the environment.

Now it's time to set up a world-tracking session with horizontal plane detection. As you can see, a session has already been created and set to run in your viewWillAppear method; after enabling plane detection, your method will look like this.
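A sketch of the updated method, assuming the template's default session setup with horizontal plane detection added:

```swift
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    // Track the device's position and orientation,
    // and look for horizontal planes (floors, tables, ...)
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = .horizontal

    // Run the view's session with this configuration
    sceneView.session.run(configuration)
}
```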

Detect plane and place object

When ARKit detects a surface, it provides an ARPlaneAnchor object, which contains information about the position and orientation of the detected real-world surface.

To know when a surface is detected, updated, or removed, use the ARSCNViewDelegate methods, which work like magic in ARKit. Implement the following ARSCNViewDelegate methods so you will be notified whenever an update is available in the scene view.
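A minimal sketch of those delegate callbacks (remember to set sceneView.delegate = self, which the AR template already does; the label text below is illustrative):

```swift
extension ViewController: ARSCNViewDelegate {

    // Called when a new anchor, such as a detected plane, is added
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else { return }
        DispatchQueue.main.async {
            self.infoLabel.text = "Surface detected."
        }
    }

    // Called when an existing anchor is refined with new information
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        // Plane estimates grow and merge as ARKit learns more about the scene
    }

    // Called when an anchor is removed from the session
    func renderer(_ renderer: SCNSceneRenderer, didRemove node: SCNNode, for anchor: ARAnchor) {
        // Clean up any nodes you attached for this anchor
    }
}
```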

The ARSessionDelegate protocol provides the current tracking state of the camera, so you can tell whether your app is ready to detect planes. When the state is normal, you are ready to detect a plane. For that, implement these delegate methods.
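One way to surface the tracking state to the user (set sceneView.session.delegate = self, e.g. in viewDidLoad; the messages are placeholders):

```swift
extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        switch camera.trackingState {
        case .normal:
            // Tracking is good: safe to detect planes and place objects
            infoLabel.text = "Tracking normal. Ready to detect a plane."
        case .notAvailable:
            infoLabel.text = "Tracking not available."
        case .limited(let reason):
            // e.g. .initializing, .excessiveMotion, .insufficientFeatures
            infoLabel.text = "Tracking limited: \(reason)"
        }
    }
}
```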

When a plane has been detected, add an object onto it. Here we are going to add a 3D model named “Shoes_V4.dae”.

You can find the child node's name in the model's scene graph in Xcode's scene editor. Here it is “group1“.
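A sketch of placing the model on the detected plane, called from the didAdd delegate method. The helper name and the "art.scnassets" path are assumptions; adjust the path to wherever you keep the .dae file:

```swift
func addObject(on planeAnchor: ARPlaneAnchor, to node: SCNNode) {
    // Load the model and pull out the named child node ("group1" here)
    guard let scene = SCNScene(named: "art.scnassets/Shoes_V4.dae"),
          let shoesNode = scene.rootNode.childNode(withName: "group1",
                                                   recursively: true)
    else { return }

    // Position the model at the center of the detected plane
    shoesNode.position = SCNVector3(planeAnchor.center.x,
                                    planeAnchor.center.y,
                                    planeAnchor.center.z)
    node.addChildNode(shoesNode)
}
```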


Now build and run your app. You will notice that some surfaces show more feature points than others. Shiny or single-colored surfaces make it difficult for ARKit to obtain a strong reference point for plane detection and to determine unique points in the environment. If you cannot see many feature points, move your device around the area and try different objects or surfaces. Once ARKit has detected a plane, your object will be added onto it.

Change position of object to tap location with UITapGestureRecognizer

To place an object where the user taps, first add a UITapGestureRecognizer to the scene view.

Then, in the tap gesture handler, move the node to the tap position. A node represents the position and coordinates of an object in 3D space; here we set the node's position to the tapped location.
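A sketch of both steps, assuming the "group1" node from earlier is already in the scene (the handler name is illustrative, and `translation` is the worldTransform extension described in this article):

```swift
// In viewDidLoad: attach the tap recognizer to the scene view
let tapGesture = UITapGestureRecognizer(target: self,
                                        action: #selector(handleTap(_:)))
sceneView.addGestureRecognizer(tapGesture)

@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let location = gesture.location(in: sceneView)

    // Hit-test against planes ARKit has already detected
    let results = sceneView.hitTest(location, types: .existingPlaneUsingExtent)
    guard let result = results.first,
          let shoesNode = sceneView.scene.rootNode.childNode(withName: "group1",
                                                             recursively: true)
    else { return }

    // Move the node to the tapped position in world coordinates
    let t = result.worldTransform.translation
    shoesNode.position = SCNVector3(t.x, t.y, t.z)
}
```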

To extract the translation from a worldTransform matrix, add this extension.
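A minimal version of such an extension: the translation of a 4x4 transform lives in its fourth column.

```swift
import simd

extension float4x4 {
    // Translation is stored in the fourth column of the transform matrix
    var translation: SIMD3<Float> {
        let t = columns.3
        return SIMD3<Float>(t.x, t.y, t.z)
    }
}
```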

Scaling object with UIPinchGestureRecognizer

To zoom the 3D object in and out, we change the object's scale while the user pinches. To recognize a pinch on the scene view, add a UIPinchGestureRecognizer.

Here we set a maximum scale of 2 (200% of the original size) and a minimum of 0.5 (50% of the original size). You can adjust these limits to your needs.
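A sketch of the pinch handling with those limits (handler name is illustrative; the node lookup assumes the "group1" node from earlier):

```swift
// In viewDidLoad: attach the pinch recognizer to the scene view
let pinchGesture = UIPinchGestureRecognizer(target: self,
                                            action: #selector(handlePinch(_:)))
sceneView.addGestureRecognizer(pinchGesture)

@objc func handlePinch(_ gesture: UIPinchGestureRecognizer) {
    guard gesture.state == .changed,
          let node = sceneView.scene.rootNode.childNode(withName: "group1",
                                                        recursively: true)
    else { return }

    // Apply the pinch factor to the current scale, clamped to [0.5, 2.0]
    let newScale = Float(gesture.scale) * node.scale.x
    let clamped = min(max(newScale, 0.5), 2.0)
    node.scale = SCNVector3(clamped, clamped, clamped)

    // Reset so the next callback reports scale relative to this one
    gesture.scale = 1
}
```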

Rotate object using UIPanGestureRecognizer

To rotate the object with a pan gesture, add a UIPanGestureRecognizer to the scene view.

You can also rotate an object using UIRotationGestureRecognizer, but that recognizes a two-finger rotation. Here we use only one finger to rotate the object in the scene view.
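One simple way to map horizontal finger movement to rotation around the object's y-axis (handler name and the degrees-per-point factor are illustrative choices, not from the original):

```swift
// In viewDidLoad: attach the pan recognizer to the scene view
let panGesture = UIPanGestureRecognizer(target: self,
                                        action: #selector(handlePan(_:)))
sceneView.addGestureRecognizer(panGesture)

@objc func handlePan(_ gesture: UIPanGestureRecognizer) {
    guard let node = sceneView.scene.rootNode.childNode(withName: "group1",
                                                        recursively: true)
    else { return }

    // Treat each point of horizontal movement as one degree of rotation
    let translation = gesture.translation(in: sceneView)
    node.eulerAngles.y += Float(translation.x) * .pi / 180

    // Reset so the next callback reports movement since this one
    gesture.setTranslation(.zero, in: sceneView)
}
```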

Thanks for reading.
If you enjoyed this tutorial and learned something valuable, please share it with your friends.


Shraddha Sojitra iOS Developer

I am an iOS Developer at Yudiz Solutions. I like working on cool iOS apps and learning new development ideas. I am passionate about continuous improvement. When I'm not developing, I'm reading and trying new things.

