inceptionv3
Here are 15 public repositories matching this topic...
Uses iOS 11 and Apple's CoreML to add nutrition data to your food diary based on pictures. CoreML (Inceptionv3) handles the image recognition; Alamofire (via CocoaPods) handles REST requests against the Nutritionix API for nutrition data.
Updated Sep 22, 2017 - Swift
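Most of the apps in this list follow the same pattern: wrap the bundled Inceptionv3 .mlmodel in a Vision request and read back the top classification. A minimal sketch, assuming the .mlmodel has been added to the Xcode project (so Xcode generates the `Inceptionv3` class) and a `UIImage` is at hand:

```swift
import CoreML
import Vision
import UIKit

// Classify a UIImage with the Inceptionv3 Core ML model via Vision.
// `Inceptionv3` is the class Xcode generates from the .mlmodel file;
// the completion handler receives the top label, or nil on failure.
func classify(_ image: UIImage, completion: @escaping (String?) -> Void) {
    guard let cgImage = image.cgImage,
          let coreMLModel = try? Inceptionv3(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion(nil)
        return
    }
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Vision returns classification observations sorted by confidence.
        let top = (request.results as? [VNClassificationObservation])?.first
        completion(top?.identifier)
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

The "Not Hotdog" and pizza-checker apps below are the same flow with a string check on the returned identifier (e.g. whether it contains "pizza").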
🎥 iOS 11 demo application for dominant object detection.
Updated Oct 2, 2017 - Swift
Object Classification using CoreML
Updated Oct 16, 2017 - Swift
A simple app built to try out the CoreML feature at the MLTokyo Meetup.
Updated Apr 21, 2018 - Swift
A recreation of the "Not Hotdog" app from the TV series Silicon Valley.
Updated Jun 11, 2018 - Swift
Detect objects using machine learning.
Updated Jul 13, 2018 - Swift
Updated Apr 26, 2020 - Swift
iOS app that demonstrates Apple's CoreML and Vision frameworks in action using pre-trained YOLOv3 and Inceptionv3 .mlmodels.
Updated Nov 24, 2020 - Swift
iOS application using InceptionV3 for object image classification.
Updated Feb 12, 2021 - Swift
CoreML and machine learning example from The App Brewery's GitHub: https://github.com/appbrewery/SeeFood-iOS13-Completed
Updated Mar 22, 2021 - Swift
Open-source Core ML project capable of identifying various types of food. Cloning this project will result in build failures because certain files, such as the model itself, are too large to upload.
Updated Sep 30, 2022 - Swift
Simple app that uses CoreML and the Inceptionv3 model to check whether there is a pizza in a photo.
Updated Feb 15, 2024 - Swift
Photo Detector with SwiftUI and Vision
Updated Jun 4, 2024 - Swift