Use Machine Learning Models to Build an Image Classifier with SwiftUI
Learn how to use Core ML and SwiftUI to build a simple iOS app that classifies images with a machine learning model.
First introduced at WWDC 2019, SwiftUI helps you build great-looking apps across all Apple platforms with the power of Swift and surprisingly little code. Core ML is Apple's framework for integrating machine learning models into iOS, macOS, and tvOS apps. It lets developers easily add machine learning functionality to their apps, such as image classification, natural language processing, and object detection.
Core ML applies a machine learning algorithm to a set of training data to create a model. You use a model to make predictions based on new input data. Models can accomplish a wide variety of tasks that would be difficult or impractical to write in code. For example, you can train a model to categorize photos, or detect specific objects within a photo directly from its pixels. — From Apple Developer Website
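For an image classifier, making a prediction on new input typically goes through the Vision framework, which resizes and converts the image before handing it to Core ML. The sketch below is one way this can look; it assumes a hypothetical model class named MobileNetV2, the Swift class Xcode generates when a model file such as MobileNetV2.mlmodel is added to the project, and any image classification model would work the same way.

```swift
import CoreML
import Vision
import UIKit

// A sketch of classifying a UIImage with a bundled image classification model.
// "MobileNetV2" is assumed here: it is the Swift class Xcode generates when a
// model file such as MobileNetV2.mlmodel is added to the project.
func classify(_ image: UIImage, completion: @escaping (String) -> Void) {
    guard let cgImage = image.cgImage,
          let classifier = try? MobileNetV2(configuration: MLModelConfiguration()),
          let model = try? VNCoreMLModel(for: classifier.model) else {
        completion("Unable to load the Core ML model")
        return
    }

    // Vision wraps the Core ML model and handles scaling and cropping the input image.
    let request = VNCoreMLRequest(model: model) { request, _ in
        let best = (request.results as? [VNClassificationObservation])?.first
        completion(best.map { "\($0.identifier) (\(Int($0.confidence * 100))%)" }
                   ?? "No classification result")
    }

    // Run inference off the main thread so the UI stays responsive.
    DispatchQueue.global(qos: .userInitiated).async {
        try? VNImageRequestHandler(cgImage: cgImage).perform([request])
    }
}
```

Vision is not strictly required: the generated model class also exposes its own prediction method, but Vision takes care of the image preprocessing for you.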
Core ML supports a variety of machine learning models, including neural networks, tree ensembles, support vector machines, and generalized linear models. Core ML requires the Core ML model format (models with a .mlmodel file extension).
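To give a sense of how this connects to SwiftUI, the sketch below shows a minimal view that runs a classification helper like the one above on a bundled image and displays the top label. The asset name "sample-photo" and the classify function are assumptions used only for illustration.

```swift
import SwiftUI
import UIKit

// A minimal SwiftUI view that runs the classify helper sketched above on a
// bundled image and shows the top label. "sample-photo" is a placeholder asset name.
struct ClassifierView: View {
    @State private var resultText = "Tap Classify to run the model"
    private let photo = UIImage(named: "sample-photo")

    var body: some View {
        VStack(spacing: 16) {
            if let photo = photo {
                Image(uiImage: photo)
                    .resizable()
                    .scaledToFit()
            }
            Text(resultText)
            Button("Classify") {
                guard let photo = photo else { return }
                classify(photo) { label in
                    // Classification finishes on a background queue; update view state on main.
                    DispatchQueue.main.async { resultText = label }
                }
            }
        }
        .padding()
    }
}
```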