Apple announced ARKit for iOS 11 at its developer conference, WWDC 2017, in June, and ARKit shipped with the release of iOS 11 on September 19, 2017. Developers can download Xcode 9.0.1, which includes the iOS 11 SDK, and start creating an Augmented Reality based project.
ARKit simplifies the task of building an AR experience by combining device motion tracking, scene processing, camera scene capture, and display conveniences. Augmented reality (AR) adds 2D or 3D objects to the camera's live view so that those objects seem to be part of the real world. You can use ARKit features to produce AR experiences in your app or game. AR games such as Pokémon GO, Zombies, Run!, and Ingress have been very popular.
ARKit uses world and camera coordinates that follow a right-handed convention: the x-axis points to the right, the y-axis points up, and the z-axis points toward the viewer. To track the world coordinate space, ARKit uses a technique called visual-inertial odometry, which combines information from the iOS device's motion-sensing hardware with computer-vision analysis of the scene visible to the phone's camera. World tracking also analyzes and understands the content of the scene: it can detect horizontal or vertical planes in the camera image and track their position and size, and hit testing lets you find the real-world position that corresponds to a point on the screen, as sketched below.
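As a minimal illustration of hit testing (assuming sceneView is the ARSCNView that is set up later in this post), you can hit-test the centre of the screen against a detected plane and read back the world position:
// A minimal sketch: hit-test the centre of the screen against detected planes.
// Assumes `sceneView` is the ARSCNView configured later in this post.
let screenCentre = CGPoint(x: sceneView.bounds.midX, y: sceneView.bounds.midY)
let hitResults = sceneView.hitTest(screenCentre, types: .existingPlaneUsingExtent)
if let planeHit = hitResults.first {
    // The last column of worldTransform holds the hit position in world coordinates.
    let position = SCNVector3(planeHit.worldTransform.columns.3.x,
                              planeHit.worldTransform.columns.3.y,
                              planeHit.worldTransform.columns.3.z)
    print("Plane hit at \(position)")
}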
World tracking will not always give you exact metrics because it relies on the device's physical environment, which is not always consistent or easy to measure. There will always be a certain degree of error when mapping the real world into the camera view for AR experiences, so to build high-quality AR experiences we need to take these limitations into consideration.
In this blog, I will explain how to quickly get started creating an Augmented Reality (AR) app and build an AR experience using facial recognition. The app recognizes your face and displays a mock 3D version of you, along with your professional information, in the camera view. The components used in the app are ARKit and SceneKit, the Vision framework, IBM Watson Visual Recognition, and IBM Cloudant.
Create an Augmented Reality app project in Xcode as shown in the diagram below.
Once the project is set up, we need to configure and run the AR session. The template already includes an ARSCNView, which contains an ARSession object. The ARSession object manages motion tracking and image processing. To run the session, we add an ARWorldTrackingConfiguration to it. The following code sets up the session with the configuration and runs it:
@IBOutlet var sceneView: ARSCNView!

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    // Create a session configuration
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = .horizontal

    // Run the view's session
    sceneView.session.run(configuration)
}
The above code sets plane detection to horizontal and runs the session.
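When ARKit detects a horizontal plane it adds an ARPlaneAnchor to the session, and because we are using an ARSCNView the detection is reported through the ARSCNViewDelegate. The following is a minimal sketch of observing those detections (it assumes the view controller is set as sceneView.delegate; the face-recognition app itself does not rely on it):
// ARSCNViewDelegate callback: ARKit has added a node for a newly detected anchor.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    // `extent` is the estimated width and length of the detected plane.
    print("Detected a horizontal plane of size \(planeAnchor.extent.x) x \(planeAnchor.extent.z)")
}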
Important: If your app requires ARKit for its core functionality, use the arkit key in the UIRequiredDeviceCapabilities section of your app’s Info.plist file to make your app available only on devices that support ARKit. If AR is a secondary feature of your app, use the isSupported property to determine whether to offer AR-based features.
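For example, if AR is only a secondary feature of your app, a runtime check along these lines (a minimal sketch) lets you fall back gracefully on unsupported devices:
if ARWorldTrackingConfiguration.isSupported {
    // The device supports world tracking: run the AR session.
    sceneView.session.run(ARWorldTrackingConfiguration())
} else {
    // Fall back to a non-AR experience, for example a plain 2D view.
    print("ARKit world tracking is not supported on this device")
}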
Once the ARSession is set up, you can use SceneKit to place virtual content in the view. The project template includes a sample file called ship.scn in the art.scnassets directory that you can place in the view. The following code adds the 3D object to the scene view:
// Create a new scene
let scene = SCNScene(named: "art.scnassets/ship.scn")!

// Set the scene to the view
sceneView.scene = scene
Running the above code shows a 3D ship object in the real-world camera view.
Once you have verified that the 3D object shows up in the camera view, let's set up face detection using the Vision API. The Vision API detects the face, crops it, and sends the cropped image to the IBM Visual Recognition API to classify the face.
// MARK: - Face detections
private func faceObservation() -> Observable<[(observation: VNFaceObservation, image: CIImage, frame: ARFrame)]> {
    return Observable<[(observation: VNFaceObservation, image: CIImage, frame: ARFrame)]>.create { observer in
        guard let frame = self.sceneView.session.currentFrame else {
            print("No frame available")
            observer.onCompleted()
            return Disposables.create()
        }

        // Create and rotate image
        let image = CIImage.init(cvPixelBuffer: frame.capturedImage).rotate

        let facesRequest = VNDetectFaceRectanglesRequest { request, error in
            guard error == nil else {
                print("Face request error: \(error!.localizedDescription)")
                observer.onCompleted()
                return
            }
            guard let observations = request.results as? [VNFaceObservation] else {
                print("No face observations")
                observer.onCompleted()
                return
            }
            // Map response
            let response = observations.map({ (face) -> (observation: VNFaceObservation, image: CIImage, frame: ARFrame) in
                return (observation: face, image: image, frame: frame)
            })
            observer.onNext(response)
            observer.onCompleted()
        }
        try? VNImageRequestHandler(ciImage: image).perform([facesRequest])
        return Disposables.create()
    }
}
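Because the detection and classification steps are exposed as RxSwift observables, they can be chained together. The following is a hypothetical sketch of how the pipeline might be wired up (it assumes a disposeBag property on the view controller and the faceClassification and updateNode methods shown below):
// Hypothetical wiring of the Rx pipeline: detect faces, classify each one, then update the scene.
faceObservation()
    .flatMap { Observable.from($0) }
    .flatMap { self.faceClassification(face: $0.observation, image: $0.image, frame: $0.frame) }
    .subscribe(onNext: { result in
        self.updateNode(classes: result.classes, position: result.position, frame: result.frame)
    })
    .disposed(by: disposeBag)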
Using the IBM Watson Visual Recognition API, you can upload the cropped face from above and the API will classify it and return a JSON response. To use the API, register on the IBM Bluemix console and create a Visual Recognition service. Then you can create credentials, which you use when calling the API. You can use the Watson SDK in your app to access the VisualRecognitionV3 API; to do that, follow the instructions here.
private func faceClassification(face: VNFaceObservation, image: CIImage, frame: ARFrame) -> Observable<(classes: [ClassifiedImage], position: SCNVector3, frame: ARFrame)> {
    return Observable<(classes: [ClassifiedImage], position: SCNVector3, frame: ARFrame)>.create { observer in
        // Determine position of the face
        let boundingBox = self.transformBoundingBox(face.boundingBox)
        guard let worldCoord = self.normalizeWorldCoord(boundingBox) else {
            print("No feature point found")
            observer.onCompleted()
            return Disposables.create()
        }

        // Create classification request
        let fileName = self.randomString(length: 20) + ".png"
        let pixel = image.cropImage(toFace: face)

        // Convert the cropped image to a UIImage and write it to a temporary file
        let imagePath = FileManager.default.temporaryDirectory.appendingPathComponent(fileName)
        let uiImage: UIImage = self.convert(cmage: pixel)
        if let data = UIImagePNGRepresentation(uiImage) {
            try? data.write(to: imagePath)
        }

        let visualRecognition = VisualRecognition.init(apiKey: Credentials.VR_API_KEY, version: Credentials.VERSION)
        let failure = { (error: Error) in print(error) }
        let owners = ["me"]
        visualRecognition.classify(imageFile: imagePath, owners: owners, threshold: 0, failure: failure) { classifiedImages in
            print(classifiedImages)
            observer.onNext((classes: classifiedImages.images, position: worldCoord, frame: frame))
            observer.onCompleted()
        }
        return Disposables.create()
    }
}
Once the face is classified by the Visual Recognition API, the response is JSON that contains a classification id. That id is then used to fetch more information about the person from an IBM Cloudant database. The document retrieved by classification id looks like this:
{
  "_id": "c2554847ec99e05ffa8122994f1f1cb4",
  "_rev": "3-d69a8b26c103a048b5e366c4a6dbeed7",
  "classificationId": "SanjeevGhimire_334732802",
  "fullname": "Sanjeev Ghimire",
  "linkedin": "https://www.linkedin.com/in/sanjeev-ghimire-8534854/",
  "twitter": "https://twitter.com/sanjeevghimire",
  "facebook": "https://www.facebook.com/sanjeev.ghimire",
  "phone": "1-859-684-7931",
  "location": "Austin, TX"
}
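In the project this lookup is wrapped in a small CloudantRESTCall helper. Conceptually, it queries Cloudant's _find endpoint with a selector on classificationId; the sketch below shows what such a request could look like (the account name, database name, and authentication are placeholders, not the project's actual values):
// Hypothetical sketch of querying Cloudant's _find endpoint by classification id.
// The account name, database name, and authentication are placeholders.
func fetchResume(classificationId: String, completion: @escaping (Data?) -> Void) {
    var request = URLRequest(url: URL(string: "https://ACCOUNT.cloudant.com/resumes/_find")!)
    request.httpMethod = "POST"
    request.addValue("application/json", forHTTPHeaderField: "Content-Type")
    // A real call also needs an Authorization header with the Cloudant credentials.
    let query = ["selector": ["classificationId": classificationId]]
    request.httpBody = try? JSONSerialization.data(withJSONObject: query)
    URLSession.shared.dataTask(with: request) { data, _, _ in
        completion(data) // JSON with a "docs" array, as shown above
    }.resume()
}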
Then we can update the SCNNode with these details as child nodes. An SCNNode is a structural element of a scene graph, representing a position and transform in a 3D coordinate space, to which you can attach geometry, lights, cameras, or other displayable content. For each child node, we need to define its font, alignment, and material. Material properties cover things like the diffuse color, the specular color, and whether the geometry is double-sided. For example, the full name from the above JSON can be added to an SCNNode as follows:
let fullName = profile["fullname"].stringValue
let fullNameBubble = SCNText(string: fullName, extrusionDepth: CGFloat(bubbleDepth))
fullNameBubble.font = UIFont(name: "Times New Roman", size: 0.10)?.withTraits(traits: .traitBold)
fullNameBubble.alignmentMode = kCAAlignmentCenter
fullNameBubble.firstMaterial?.diffuse.contents = UIColor.black
fullNameBubble.firstMaterial?.specular.contents = UIColor.white
fullNameBubble.firstMaterial?.isDoubleSided = true
fullNameBubble.chamferRadius = CGFloat(bubbleDepth)

// Full name bubble node
let (minBound, maxBound) = fullNameBubble.boundingBox
let fullNameNode = SCNNode(geometry: fullNameBubble)
// Centre the node at its centre-bottom point
fullNameNode.pivot = SCNMatrix4MakeTranslation((maxBound.x - minBound.x) / 2, minBound.y, bubbleDepth / 2)
// Reduce default text size
fullNameNode.scale = SCNVector3Make(0.1, 0.1, 0.1)
fullNameNode.simdPosition = simd_float3.init(x: 0.1, y: 0.06, z: 0)
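In the project, building these text nodes is wrapped in a custom SCNNode initializer (SCNNode.init(withJSON:position:), used in updateNode below). As a rough sketch of the idea, each text node can be attached to a parent node placed at the face's world coordinate; a billboard constraint is one common way to keep the text facing the camera (an illustrative choice here, not necessarily what the project does):
// Rough sketch: group the text nodes under a parent node placed at the detected face's
// world position. The billboard constraint is an illustrative choice to keep the text
// facing the camera; it is not necessarily what the project's SCNNode extension does.
let profileNode = SCNNode()
profileNode.addChildNode(fullNameNode)
// ...add the other detail nodes (location, phone, social links) the same way.

let billboard = SCNBillboardConstraint()
billboard.freeAxes = .Y // rotate only around the y-axis so the text stays upright
profileNode.constraints = [billboard]

profileNode.position = worldCoord // the SCNVector3 returned by normalizeWorldCoord(_:)
sceneView.scene.rootNode.addChildNode(profileNode)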
And to update the SCNNode:
private func updateNode(classes: [ClassifiedImage], position: SCNVector3, frame: ARFrame) {
    guard let person = classes.first else {
        print("No classification found")
        return
    }
    let classifier = person.classifiers.first
    let name = classifier?.name
    let classifierId = classifier?.classifierID

    // Filter for existent face
    let results = self.faces.filter { $0.name == name && $0.timestamp != frame.timestamp }
        .sorted { $0.node.position.distance(toVector: position) < $1.node.position.distance(toVector: position) }

    // Create new face
    guard let existentFace = results.first else {
        CloudantRESTCall().getResumeInfo(classificationId: classifierId!) { (resultJSON) in
            let node = SCNNode.init(withJSON: resultJSON["docs"][0], position: position)
            DispatchQueue.main.async {
                self.sceneView.scene.rootNode.addChildNode(node)
                node.show()
            }
            let face = Face.init(name: name!, node: node, timestamp: frame.timestamp)
            self.faces.append(face)
        }
        return
    }

    // Update existent face
    DispatchQueue.main.async {
        // Filter for the face that's already displayed
        if let displayFace = results.filter({ !$0.hidden }).first {
            let distance = displayFace.node.position.distance(toVector: position)
            if distance >= 0.03 {
                displayFace.node.move(position)
            }
            displayFace.timestamp = frame.timestamp
        } else {
            existentFace.node.position = position
            existentFace.node.show()
            existentFace.timestamp = frame.timestamp
        }
    }
}
You can find the GitHub link here.
The output displays a mock 3D version of the face and the professional details about the person.
With the release of ARKit on iOS 11, there are endless opportunities to build solutions that map virtual data onto real-world scenes. Personally, I think Augmented Reality is an emerging technology, and developers from various industries are experimenting with it in applications such as games, construction, and aviation. Augmented Reality will mature over time, and I expect it to become a major part of the tech industry in the foreseeable future.