Nice! Are you running this on an actual device?
I don't have an iOS 11-compatible device just yet, so I've only run it in the simulator. But I was actually wondering whether this would happen on a device.
You see, MobileNets uses a "depthwise convolution" and Metal does not support this currently.
The original model is trained in Caffe and that also doesn't support depthwise convolution. So they set the "groups" property of a regular convolution layer to be the same as the number of output channels, and then it works.
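To make the equivalence concrete, here's a small pure-Python sketch (my own illustration, not Caffe or Metal code; shapes and sizes are arbitrary) showing that a regular convolution with `groups` equal to the number of channels computes exactly what a depthwise convolution does:

```python
import random

def depthwise_conv(x, w, k):
    """Depthwise conv: one k x k filter per channel.
    x: C x H x W nested lists, w: C x k x k. Valid padding, stride 1."""
    C, H, W = len(x), len(x[0]), len(x[0][0])
    return [[[sum(x[c][i + di][j + dj] * w[c][di][dj]
                  for di in range(k) for dj in range(k))
              for j in range(W - k + 1)]
             for i in range(H - k + 1)]
            for c in range(C)]

def grouped_conv(x, w, k, groups):
    """Regular conv split into independent channel groups (C_out == C here).
    w: C_out x (C // groups) x k x k. With groups == C, each output
    channel sees exactly one input channel -- the Caffe 'groups' trick."""
    C, H, W = len(x), len(x[0]), len(x[0][0])
    cpg = C // groups  # input channels per group
    out = []
    for oc in range(C):
        base = (oc // cpg) * cpg  # first input channel of this group
        out.append([[sum(x[base + c][i + di][j + dj] * w[oc][c][di][dj]
                         for c in range(cpg)
                         for di in range(k) for dj in range(k))
                     for j in range(W - k + 1)]
                    for i in range(H - k + 1)])
    return out

random.seed(1)
C, H, W, k = 4, 5, 5, 3
x = [[[random.random() for _ in range(W)] for _ in range(H)] for _ in range(C)]
w = [[[random.random() for _ in range(k)] for _ in range(k)] for _ in range(C)]

dw = depthwise_conv(x, w, k)
gc = grouped_conv(x, [[wc] for wc in w], k, groups=C)  # groups == channels
print(dw == gc)  # True: the grouping trick reproduces depthwise conv
```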
However... in Metal the number of input channels in each group must be a multiple of 4 (as the error message says). But when you set `groups` equal to the number of output channels, each group gets exactly one input channel, which is not a multiple of 4.
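A tiny arithmetic check makes the conflict obvious (32 channels is just an example size for one of MobileNet's depthwise layers; the multiple-of-4 rule is the one from the error message):

```python
input_channels = 32       # example: one of MobileNet's depthwise layers
groups = input_channels   # the Caffe "groups" trick for depthwise conv
channels_per_group = input_channels // groups

print(channels_per_group)            # 1
print(channels_per_group % 4 == 0)   # False -> Metal rejects the layer
```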
So here we have a model that works OK on the simulator (where it uses Accelerate instead of Metal Performance Shaders) but not on the GPU.
I will submit this as a bug report to Apple. Thanks for pointing this out!
@austingg There is no way to convert TensorFlow models to Core ML at the moment (only Keras models). In addition, the mlmodel format does not support depthwise convolution, so even if it were possible to convert TensorFlow, Core ML wouldn't know what to do with these layers.
I think CoreML could expose an interface or callback that lets developers implement their own layers and integrate them into the CoreML pipeline. As long as Apple provides an interface spec and users provide kernels based on that spec, it should be doable.
Otherwise, CoreML won't be very useful, given that the DL field evolves so fast and new layers/networks come out almost monthly.
And also, model size is a real problem. Of the four models provided on Apple's website, none is smaller than 10 MB. How could an app developer ship an app with a model of 50~100 MB?
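The size largely follows from the weights themselves: stored as 32-bit floats, a model's file size is roughly 4 bytes per parameter. A quick back-of-the-envelope calculation (the parameter counts below are approximate published figures, not numbers from this thread):

```python
# Rough model size assuming float32 weights: 4 bytes per parameter.
# Parameter counts are approximate published figures for each network.
params = {
    "MobileNet": 4_200_000,    # ~4.2M parameters
    "VGG16":     138_000_000,  # ~138M parameters
}

for name, n in params.items():
    mb = n * 4 / 1e6  # bytes -> megabytes
    print(f"{name}: ~{mb:.0f} MB")
```

So even a "small" network like MobileNet lands around 17 MB uncompressed, and a big one like VGG16 is over 500 MB, which is why shrinking weights (quantization, pruning, etc.) matters so much for on-device deployment.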