Photo Detector with SwiftUI and Vision
Updated Jun 4, 2024 - Swift
Simple app that uses CoreML and the Inceptionv3 model to check whether there is a pizza in the photo.
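The pizza check described above boils down to running a Vision classification request over the Core ML model and inspecting the top label. A minimal sketch, assuming the Inceptionv3 .mlmodel has been added to the Xcode project (so Xcode generates an `Inceptionv3` class); the `checkForPizza` helper name is illustrative, not taken from the repository:

```swift
import UIKit
import CoreML
import Vision

// Sketch: classify an image with Inceptionv3 via Vision and report whether
// the top label looks like pizza. Assumes the generated `Inceptionv3` class.
func checkForPizza(in image: UIImage, completion: @escaping (Bool) -> Void) {
    guard let ciImage = CIImage(image: image),
          let coreMLModel = try? Inceptionv3(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion(false)
        return
    }

    // Vision wraps the Core ML model and handles resizing/cropping the input.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        let topResult = (request.results as? [VNClassificationObservation])?.first
        // Inceptionv3 outputs ImageNet-style labels; check the top one for "pizza".
        completion(topResult?.identifier.lowercased().contains("pizza") ?? false)
    }

    let handler = VNImageRequestHandler(ciImage: ciImage)
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```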
Open-source Core ML project capable of identifying various types of food. Cloning this project will result in build failures, as certain files, such as the model itself, are too large to upload.
CoreML and Machine Learning - The App Brewery's GitHub: https://github.com/appbrewery/SeeFood-iOS13-Completed
iOS application with Inceptionv3 for object image classification.
iOS app that demonstrates Apple's CoreML and Vision frameworks in action using pre-trained YOLOv3 and Inceptionv3 .mlmodels.
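For the detection side, the Vision flow is nearly the same, but YOLOv3-style models return `VNRecognizedObjectObservation` results with bounding boxes instead of plain classifications. A minimal sketch, assuming Apple's pre-trained YOLOv3 .mlmodel is bundled in the project (the `YOLOv3` class name and `detectObjects` helper are illustrative):

```swift
import CoreImage
import CoreML
import Vision

// Sketch: run YOLOv3 through Vision and hand back the detected objects,
// each with a bounding box and a ranked list of labels with confidences.
func detectObjects(in ciImage: CIImage,
                   completion: @escaping ([VNRecognizedObjectObservation]) -> Void) {
    guard let coreMLModel = try? YOLOv3(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion([])
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        completion(request.results as? [VNRecognizedObjectObservation] ?? [])
    }
    // Match how the model expects its input to be scaled.
    request.imageCropAndScaleOption = .scaleFill

    let handler = VNImageRequestHandler(ciImage: ciImage)
    try? handler.perform([request])
}
```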
Detect objects using machine learning.
A recreation of the "Not Hotdog" app from the TV series Silicon Valley.
A simple app built to try out the CoreML feature at the MLTokyo Meetup.
Object Classification using CoreML
🎥 iOS 11 demo application for dominant object detection.
Uses iOS 11 and Apple's CoreML to add nutrition data to your food diary based on pictures. CoreML (Inceptionv3) is used for the image recognition, and Alamofire (via CocoaPods) is used for REST requests against the Nutritionix API for nutrition data.
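That last description outlines a two-step pipeline: CoreML classifies the photo, then the resulting label is sent to the Nutritionix API over REST. A minimal sketch of the second step with Alamofire 5; the endpoint path, header names, placeholder credentials, and the `NutritionClient` type are assumptions, not taken from that repository:

```swift
import Alamofire

// Sketch: send a classification label (e.g. "pizza") to a nutrition API.
// Endpoint and header names are assumed, not confirmed from the repo above.
struct NutritionClient {
    let appID = "YOUR_APP_ID"    // placeholder credential
    let appKey = "YOUR_APP_KEY"  // placeholder credential

    func fetchNutrition(for foodLabel: String,
                        completion: @escaping (Result<Data, AFError>) -> Void) {
        let headers: HTTPHeaders = [
            "x-app-id": appID,
            "x-app-key": appKey,
            "Content-Type": "application/json"
        ]
        // The recognized food label becomes the natural-language query.
        let parameters: Parameters = ["query": foodLabel]

        AF.request("https://trackapi.nutritionix.com/v2/natural/nutrients",
                   method: .post,
                   parameters: parameters,
                   encoding: JSONEncoding.default,
                   headers: headers)
            .validate()
            .responseData { response in
                completion(response.result)
            }
    }
}
```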