Core ML models you can use on iOS.
Image Classifier
Google Drive Link | Size | Original Project |
---|---|---|
Efficientnetb0 | 22.7 MB | TensorFlowHub |
GAN
Google Drive Link | Size | Original Project |
---|---|---|
UGATIT_selfie2anime | 1.12GB | taki0112/UGATIT |
AnimeGANv2_Hayao | 8.7MB | TachibanaYoshino/AnimeGANv2 |
AnimeGANv2_Paprika | 8.7MB | TachibanaYoshino/AnimeGANv2 |
How to use in an Xcode project
import Vision

// Build the Vision request for the Core ML model (replace `modelname` with the generated model class).
lazy var coreMLRequest: VNCoreMLRequest = {
    let model = try! VNCoreMLModel(for: modelname().model)
    let request = VNCoreMLRequest(model: model, completionHandler: self.coreMLCompletionHandler)
    return request
}()

// Perform the request on a background queue.
let handler = VNImageRequestHandler(ciImage: ciimage, options: [:])
DispatchQueue.global(qos: .userInitiated).async {
    try? handler.perform([self.coreMLRequest])
}
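For a classifier such as Efficientnetb0, the results arrive as VNClassificationObservation values rather than a MultiArray. A minimal sketch of a completion handler for that case (the handler name and the label printing are illustrative, not part of the models above):

```swift
import Vision

// Hypothetical completion handler for a classification model such as Efficientnetb0.
func classificationCompletionHandler(request: VNRequest, error: Error?) {
    guard let observations = request.results as? [VNClassificationObservation],
          let best = observations.first else { return }
    // Top label and its confidence (0.0 - 1.0).
    print("\(best.identifier): \(best.confidence)")
}
```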
For visualizing a MultiArray as an image, hollance's CoreMLHelpers library is very convenient: CoreML Helpers
Converting a MultiArray to an image with CoreML Helpers.
func coreMLCompletionHandler(request: VNRequest, error: Error?) {
    // The GAN models return their output as an MLMultiArray feature value.
    let result = request.results?.first as! VNCoreMLFeatureValueObservation
    let multiArray = result.featureValue.multiArrayValue
    // CoreMLHelpers converts the MLMultiArray (values in [-1, 1]) to a CGImage.
    let cgImage = multiArray?.cgImage(min: -1, max: 1, channel: nil)
}
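To display the result, wrap the CGImage in a UIImage and update the UI on the main thread. A minimal sketch, assuming an `imageView` outlet that is not part of the code above:

```swift
import UIKit

// Place at the end of coreMLCompletionHandler, after the conversion above.
// `imageView` is an assumed outlet in your view controller.
if let cgImage = cgImage {
    let uiImage = UIImage(cgImage: cgImage)
    DispatchQueue.main.async {
        self.imageView.image = uiImage
    }
}
```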
Apps made with Core ML models
AnimateU