Watson Visual Recognition and Core ML

Note: This repo is now dedicated to my own fun experiments. For a more vanilla version, use the official repo.

Classify images offline with Watson Visual Recognition and Core ML.

A deep neural network model is trained in the cloud by Watson Visual Recognition. The app then downloads the model, which Core ML can use offline to classify images. Every time the app opens, it checks for updates to the model and downloads them when available.
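The update-then-classify flow described above can be sketched with the Watson Swift SDK. The method names below (`updateLocalModel`, `classifyWithLocalModel`) come from the SDK generation this repo targets, but the exact signatures vary between releases (newer versions use completion-handler style), so treat this as an illustration rather than the app's exact code; the placeholder `apiKey` and `modelId` values stand in for the ones configured later.

```swift
import UIKit
import VisualRecognition  // framework built by Carthage (see below)

// Placeholder values -- in this app they live in Credentials.plist
// and CameraViewController.swift (see "Configure your app").
let apiKey = "your-apikey"
let modelId = "your-model-id"

let visualRecognition = VisualRecognition(version: "2018-03-19", apiKey: apiKey)

// On launch: pull a newer Core ML model from Watson if one is available.
visualRecognition.updateLocalModel(classifierID: modelId, failure: { error in
    print("No update applied: \(error)")
}, success: {
    print("Local model is up to date")
})

// Classify an image entirely on-device using the cached Core ML model.
func classify(_ image: UIImage) {
    visualRecognition.classifyWithLocalModel(
        image: image,
        classifierIDs: [modelId],
        threshold: 0.5,
        failure: { error in print("Classification failed: \(error)") }
    ) { classifiedImages in
        print(classifiedImages)
    }
}
```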

App Screenshot

Before you begin

Make sure you have these software versions installed on your machine. These versions are required to support Core ML:

  • macOS 10.11 El Capitan or later
  • iOS 11 or later (on your iPhone or iPad if you want the application to be on your device)
  • Xcode 9 or later
  • Carthage 0.29 or later

Carthage installation

If you don’t have Homebrew on your computer, it’s easier to set up Carthage with the .pkg installer. You can download it here.

Getting the files

Use GitHub to clone the repository locally, or download the .zip file of the repository and extract the files.

Setting up Visual Recognition in Watson Studio

  1. Log in to Watson Studio (dataplatform.ibm.com). From this link you can create an IBM Cloud account, sign up for Watson Studio, or log in.

Training a custom model

For an in depth walkthrough of creating a custom model, check out the Core ML & Watson Visual Recognition Code Pattern.

Installing the Watson Swift SDK

The Watson Swift SDK makes it easy to keep track of your custom Core ML models and to download your custom classifiers from IBM Cloud to your device.

Use the Carthage dependency manager to download and build the Watson Swift SDK.

  1. Open a terminal window and navigate to this project's directory.

  2. Run the following command to download and build the Watson Swift SDK:

    carthage update --platform iOS
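Carthage reads the Cartfile in the project directory to decide what to fetch. This project should already ship with one; assuming it pins the official SDK repository, the entry looks something like:

```
github "watson-developer-cloud/swift-sdk"
```

If `carthage update` reports nothing to build, check that you ran it from the directory containing the Cartfile.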

Configure your app

  1. Open the project in Xcode.
  2. Copy the Model ID of the model you trained and paste it into the modelId property in the CameraViewController.swift file.
  3. Copy your "apikey" from your Visual Recognition service credentials and paste it into the apiKey property in the Credentials.plist file.
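For step 3, a minimal Credentials.plist carrying the apiKey property might look like the sketch below. The key name comes from the step above; the rest is standard property-list boilerplate, so match whatever structure the file in this repo already has rather than replacing it wholesale.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Paste the "apikey" value from your Visual Recognition credentials -->
    <key>apiKey</key>
    <string>your-apikey-here</string>
</dict>
</plist>
```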

Running the app

  1. In Xcode, select the Core ML Vision scheme.
  2. You can run the app in the simulator or on your device.

Note: The visual recognition classifier status must be Ready to use it. Check the classifier status in Watson Studio on the Visual Recognition instance overview page.

What to do next

Try using your own data: Train a Visual Recognition classifier with your own images. For details on the Visual Recognition service, see the links in the Resources section.

Resources
