With the release of iOS 11, developers gained the option to integrate trained ML models directly into their applications. Core ML processes an input, for example an image, and uses an .mlmodel to predict a result based on labels. In this project we use three different plush animals and ARKit to predict which animal is currently in view. For the training process I used Microsoft Custom Vision AI, which is free to use, to create the trained .mlmodel.
The model is trained on four classes:
- Squirrel
- Squid
- Bear
- Nothing
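The classification described above could be sketched as follows with Vision and ARKit. This is a minimal sketch, assuming the exported model class is named `AnimalClassifier` (Xcode generates this class from the imported .mlmodel; the actual name depends on the file name of your export):

```swift
import ARKit
import Vision

// Sketch: classify the current ARKit camera frame with the exported
// Custom Vision model. "AnimalClassifier" is an assumed class name;
// Xcode generates it from the imported .mlmodel file.
func classify(frame: ARFrame) {
    guard let model = try? VNCoreMLModel(for: AnimalClassifier().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        // The top result is one of the tags: Squirrel, Squid, Bear or Nothing
        guard let best = (request.results as? [VNClassificationObservation])?.first else { return }
        print("\(best.identifier) (confidence: \(best.confidence))")
    }

    // ARFrame exposes the camera image as a CVPixelBuffer
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, options: [:])
    try? handler.perform([request])
}
```

In practice this would be called from the `ARSessionDelegate` callback `session(_:didUpdate:)`, ideally throttled so that not every frame is classified.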
- Take several pictures of the object(s) from different angles
- Store the images in separate folders, e.g. all images of the bear in one folder, etc.
- Once all images are collected, create a Microsoft account and log in
- Create a new project and upload each folder of images with a matching tag
- After all images have been added, start the training, then export the .mlmodel file and add it to the Xcode project
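As an alternative to adding the exported .mlmodel to the Xcode project at build time, it can also be compiled and loaded at runtime with Core ML. A minimal sketch, assuming the exported file is named `model.mlmodel` (a hypothetical name for illustration):

```swift
import CoreML

// Sketch: compile the exported Custom Vision .mlmodel at runtime
// and load it as an MLModel. "model.mlmodel" is an assumed file name.
func loadExportedModel() throws -> MLModel {
    guard let modelURL = Bundle.main.url(forResource: "model", withExtension: "mlmodel") else {
        throw CocoaError(.fileNoSuchFile)
    }
    // Produces a compiled .mlmodelc bundle in a temporary location
    let compiledURL = try MLModel.compileModel(at: modelURL)
    return try MLModel(contentsOf: compiledURL)
}
```

This path is useful if the model should be downloadable or replaceable without shipping a new app build; otherwise, dragging the .mlmodel into Xcode is simpler.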
- Microsoft Custom Vision for creating the .mlmodel: https://www.customvision.ai