A sample project demonstrating how to convert a PyTorch model to Core ML format using coremltools and integrate it into an iOS app with SwiftUI.

## Features
- Model Conversion: Python script to convert MobileNetV2 (PyTorch) to `.mlpackage`
- Image Classification: SwiftUI app that classifies photos using Vision + Core ML
- Quantization: Float16 and palettization options for model size reduction
- Photo Picker: Select images from the photo library for classification
## Project Structure

| Path | Description |
|---|---|
| `convert/convert.py` | PyTorch → Core ML conversion script with quantization options |
| `convert/requirements.txt` | Python dependencies |
| `Sources/CoreMLDemo/ImageClassifier.swift` | Core ML + Vision wrapper for image classification |
| `Sources/CoreMLDemo/ContentView.swift` | SwiftUI interface with photo picker and results display |
| `Sources/CoreMLDemo/CoreMLDemoApp.swift` | App entry point |
| `Tests/CoreMLDemoTests/` | Unit tests for ImageClassifier |
## Requirements

- Python 3.8+
- Xcode 16.0+
- iOS 17.0+
## Converting the Model

```bash
cd convert
pip install -r requirements.txt
python convert.py
```

Quantization options:

```bash
python convert.py --quantize float16
python convert.py --quantize palettize --nbits 8
```
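The conversion script itself is not reproduced in this README. As a rough sketch, the quantization flags above could be parsed, and torchvision's standard MobileNetV2 input normalization folded into Core ML image-input parameters, like this (all function names and constants here are assumptions for illustration, not the actual `convert.py`):

```python
import argparse

# Standard torchvision ImageNet normalization constants
# (an assumption; not taken from this repo's script).
IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]


def parse_args(argv=None):
    """Parse the CLI flags shown in the README (hypothetical reconstruction)."""
    parser = argparse.ArgumentParser(description="Convert MobileNetV2 to Core ML")
    parser.add_argument("--quantize", choices=["float16", "palettize"], default=None,
                        help="Optional weight-compression mode")
    parser.add_argument("--nbits", type=int, default=8,
                        help="Bit width for palettization")
    return parser.parse_args(argv)


def image_input_params(mean=IMAGENET_MEAN, std=IMAGENET_STD):
    """Fold torchvision normalization into a Core ML image scale/bias pair.

    coremltools image inputs take a single scalar scale, so the per-channel
    std is approximated by its average, a common compromise.
    """
    avg_std = sum(std) / len(std)
    scale = 1.0 / (255.0 * avg_std)
    bias = [-m / avg_std for m in mean]
    return scale, bias


if __name__ == "__main__":
    args = parse_args(["--quantize", "palettize", "--nbits", "8"])
    scale, bias = image_input_params()
    print(args.quantize, args.nbits, round(scale, 5))
```

In a real conversion script, the resulting scale and bias would typically be passed to `ct.ImageType(scale=..., bias=...)` when calling `ct.convert`, so that the app does not need to normalize pixels itself.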
## Running the iOS App

- Open `CoreMLDemo.xcodeproj` in Xcode
- Drag the generated `MobileNetV2.mlpackage` into the project
- Select an iOS simulator or device and run
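The Xcode project is generated from a `project.yml` manifest (regeneration steps below). A minimal sketch of what such a manifest might contain for this layout, inferred from the paths above and not taken from the repo:

```yaml
name: CoreMLDemo
options:
  deploymentTarget:
    iOS: "17.0"
targets:
  CoreMLDemo:
    type: application
    platform: iOS
    sources: [Sources/CoreMLDemo]
  CoreMLDemoTests:
    type: bundle.unit-test
    platform: iOS
    sources: [Tests/CoreMLDemoTests]
    dependencies:
      - target: CoreMLDemo
```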
To regenerate the Xcode project from `project.yml`:

```bash
brew install xcodegen
xcodegen generate
```

## License

This project is licensed under the MIT License - see the LICENSE file for details.