Integrating Machine Learning into iOS Apps: A starter roadmap


Core ML

  • Overview: Core ML is Apple’s framework for integrating machine learning models into apps. It’s optimized for on-device performance, which minimizes memory footprint and power consumption.
  • Key Concepts: Model integration, Prediction, Real-time processing, Model conversion (using Core ML Tools). See the prediction sketch after this list.
  • Use Cases: Image classification, Sentiment analysis, Text prediction, Speech recognition.
  • For further reading: https://developer.apple.com/documentation/coreml
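
To make the prediction workflow concrete, here is a minimal sketch of loading a generated model class and running a single prediction. It assumes a hypothetical text classifier named SentimentClassifier with a "text" input and a "label" output; substitute whatever model and feature names your project actually uses.

import CoreML

// Minimal sketch: load a generated Core ML model class and run one prediction.
// "SentimentClassifier" and its "text"/"label" features are hypothetical placeholders.
func runSentimentPrediction(on sentence: String) {
    do {
        // MLModelConfiguration selects compute units (CPU, GPU, Neural Engine).
        let configuration = MLModelConfiguration()
        configuration.computeUnits = .all

        // Xcode generates this class from the .mlmodel file added to the target.
        let classifier = try SentimentClassifier(configuration: configuration)

        // The generated prediction method mirrors the model's named inputs and outputs.
        let output = try classifier.prediction(text: sentence)
        print("Predicted label: \(output.label)")
    } catch {
        print("Prediction failed: \(error)")
    }
}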

ARKit

  • Overview: ARKit is Apple’s framework for developing augmented reality experiences. It combines device motion tracking, camera scene capture, advanced scene processing, and display conveniences to simplify the task of building an AR experience.
  • Key Concepts: World Tracking, Face Tracking, Image & Object Detection, Environmental understanding, Human occlusion. See the session-setup sketch after this list.
  • Use Cases: Virtual object placement, Interactive gaming, Retail and design, and Educational tools.
  • For further reading: https://developer.apple.com/documentation/arkit
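
As a small illustration of world tracking, the sketch below runs an ARKit session with horizontal plane detection inside an ARSCNView. It is a bare-bones setup; a real app would also implement the session delegate and handle interruptions.

import UIKit
import ARKit

// Minimal sketch: run a world-tracking AR session inside an ARSCNView.
class ARViewController: UIViewController {
    private let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking follows the device's position and orientation
        // and can detect horizontal surfaces for placing virtual content.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        // Pause the session when the view goes away to save power.
        sceneView.session.pause()
    }
}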

RealityKit

  • Overview: RealityKit is Apple’s framework for rendering and simulating 3D and AR content, designed to work hand in hand with ARKit. It provides photorealistic rendering, animation, physics simulation, and spatial audio. See the placement sketch after this list.
  • Key Concepts: Anchoring system, Entity-component system, Ray-casting, Collaborative sessions.
  • Use Cases: Complex AR applications, Realistic simulations, Collaborative AR experiences.
  • For further reading: https://developer.apple.com/documentation/RealityKit
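
The anchoring and entity-component systems show up even in a tiny example. The sketch below anchors a box entity to the first detected horizontal plane in an ARView; it is illustrative only, not a complete AR experience.

import UIKit
import RealityKit

// Minimal sketch: anchor a box entity to a horizontal plane with RealityKit.
class BoxViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let arView = ARView(frame: view.bounds)
        arView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(arView)

        // An anchor entity ties virtual content to a real-world feature,
        // here the first horizontal plane that ARKit detects.
        let anchor = AnchorEntity(plane: .horizontal)

        // Entities are built from components; ModelEntity bundles a mesh and materials.
        let box = ModelEntity(
            mesh: .generateBox(size: 0.1),
            materials: [SimpleMaterial(color: .systemBlue, isMetallic: false)]
        )
        anchor.addChild(box)

        arView.scene.addAnchor(anchor)
    }
}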

Vision Framework

  • Overview: The Vision framework applies computer vision algorithms to images and video, supporting face detection and tracking, facial-landmark detection, text detection, and barcode recognition. See the face-detection sketch after this list.
  • Key Concepts: Image analysis, Object tracking, Text recognition, Barcode detection.
  • Use Cases: Photo tagging, Interactive text features, Retail apps, Security applications.
  • For further reading: https://developer.apple.com/documentation/vision
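
As a small example of the request/handler pattern these features share, the sketch below counts the faces in a UIImage. Swapping the request type, for example to VNRecognizeTextRequest or VNDetectBarcodesRequest, reuses the same structure.

import UIKit
import Vision

// Minimal sketch: count the faces in a UIImage with the Vision framework.
// Call this off the main thread, since perform(_:) runs synchronously.
func detectFaces(in image: UIImage, completion: @escaping (Int) -> Void) {
    guard let cgImage = image.cgImage else {
        completion(0)
        return
    }

    // The completion handler receives one VNFaceObservation per detected face,
    // each carrying a normalized bounding box.
    let request = VNDetectFaceRectanglesRequest { request, _ in
        let faces = request.results as? [VNFaceObservation] ?? []
        completion(faces.count)
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])
    } catch {
        print("Face detection failed: \(error)")
        completion(0)
    }
}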

Important Core ML APIs and Concepts

  • MLModel: The class that represents a machine learning model in Core ML. When you add a .mlmodel file to your target, Xcode generates a wrapper class whose model property exposes the underlying MLModel instance; models downloaded at runtime can be compiled on device with MLModel.compileModel(at:) and loaded without recompiling the app.
  • MLFeatureProvider: A protocol that describes a collection of named feature values. Model inputs and outputs are exposed through this protocol, and MLModel’s prediction(from:) method accepts any conforming type.
  • MLFeatureValue: Represents the data input and output of a model. Core ML supports several data types, including numbers, strings, images (as CVPixelBuffer), multiarrays, dictionaries, and sequences.
  • MLDictionaryFeatureProvider: A convenient way to provide input to a model using a dictionary whose keys are the model’s input feature names (used in the sketch after this list).
  • MLPredictionOptions: Options that configure an individual prediction call, such as restricting it to the CPU; compute preferences like the preferred Metal device are set on MLModelConfiguration instead.
  • Vision Framework Integration: For tasks that involve image processing before feeding into a Core ML model, the Vision framework can be used to prepare images. This is often necessary for tasks like object detection, image classification, and more.
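
To show how these types fit together without a generated wrapper class, here is a minimal sketch that loads a compiled model by URL and predicts through the feature-provider APIs. The model name PricePredictor and its feature names are hypothetical placeholders.

import CoreML

// Minimal sketch: drive an MLModel directly through the feature-provider API.
// "PricePredictor.mlmodelc" and its feature names are hypothetical placeholders.
func predictPrice(squareFootage: Double, bedrooms: Double) throws -> Double? {
    guard let modelURL = Bundle.main.url(forResource: "PricePredictor",
                                         withExtension: "mlmodelc") else {
        return nil
    }

    let model = try MLModel(contentsOf: modelURL, configuration: MLModelConfiguration())

    // MLDictionaryFeatureProvider wraps a [featureName: value] dictionary,
    // converting each value into an MLFeatureValue behind the scenes.
    let input = try MLDictionaryFeatureProvider(dictionary: [
        "squareFootage": squareFootage,
        "bedrooms": bedrooms
    ])

    // MLPredictionOptions configures this single prediction call.
    let options = MLPredictionOptions()

    let output = try model.prediction(from: input, options: options)
    // Outputs come back as named MLFeatureValues as well.
    return output.featureValue(for: "price")?.doubleValue
}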

Project Idea: Image Classifier App

A simple yet intriguing project could be an Image Classifier iOS app. This app would use the camera to capture images in real time and classify them using a pre-trained Core ML model. For demonstration purposes, let’s use MobileNet, a lightweight model suitable for mobile applications, to classify objects into predefined categories.

Step 1: Add MobileNet to Your Project

First, you need to add a Core ML model to your project. You can download the MobileNet model from Apple’s Core ML models page or use any other model that suits your interest.

Step 2: Core ML Model Integration

Once the model is added to your project, Xcode automatically generates a class for the model. You can then use this class to make predictions.

Organize your code into folders like Models, ViewControllers, Views, and Helpers for better readability.

import UIKit
import Vision
import CoreML

// Wraps the MobileNet Core ML model behind a simple classify(image:completion:) API.
class ImageClassifier {
    private var model: VNCoreMLModel?

    init() {
        do {
            // Wrap the Xcode-generated MobileNet class so Vision can drive it.
            let configuration = MLModelConfiguration()
            model = try VNCoreMLModel(for: MobileNet(configuration: configuration).model)
        } catch {
            print("Error setting up Core ML model: \(error)")
        }
    }

    func classify(image: UIImage, completion: @escaping (String) -> Void) {
        guard let model = model, let ciImage = CIImage(image: image) else {
            completion("Model or image not available")
            return
        }

        // Vision returns classification observations sorted by confidence,
        // so the first result is the model's best guess.
        let request = VNCoreMLRequest(model: model) { request, error in
            guard let results = request.results as? [VNClassificationObservation],
                  let topResult = results.first else {
                completion("Failed to classify image.")
                return
            }

            completion("Classification: \(topResult.identifier), Confidence: \(topResult.confidence)")
        }

        // perform(_:) runs synchronously, so call classify(image:) off the main thread.
        let handler = VNImageRequestHandler(ciImage: ciImage)
        do {
            try handler.perform([request])
        } catch {
            completion("Failed to perform classification: \(error.localizedDescription)")
        }
    }
}
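
To wire the classifier into a screen, something like the following is enough for a first pass. This usage sketch assumes a photo picked with UIImagePickerController and a resultLabel for output; a real-time camera feed would use AVCaptureSession instead, but the classify call is the same.

import UIKit

// Illustrative usage: classify a picked photo and show the result in a label.
class ClassificationViewController: UIViewController,
                                    UIImagePickerControllerDelegate,
                                    UINavigationControllerDelegate {
    private let classifier = ImageClassifier()
    private let resultLabel = UILabel()

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        picker.dismiss(animated: true)
        guard let image = info[.originalImage] as? UIImage else { return }

        // Run classification off the main thread, then update the UI back on it.
        DispatchQueue.global(qos: .userInitiated).async { [weak self] in
            self?.classifier.classify(image: image) { result in
                DispatchQueue.main.async {
                    self?.resultLabel.text = result
                }
            }
        }
    }
}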
