Build succeeds but crashes #2

Open
LYToToTo opened this Issue Jun 8, 2017 · 14 comments


LYToToTo commented Jun 8, 2017

a7a2f8b6-ab5a-4182-9d73-54788dcce4e9


Owner

hollance commented Jun 8, 2017

Nice! Are you running this on an actual device?

I don't have an iOS 11 compatible device just yet so I only ran it in the simulator. But I was actually wondering if this would happen on the device.

You see, MobileNets uses a "depthwise convolution" and Metal does not support this currently.

The original model is trained in Caffe and that also doesn't support depthwise convolution. So they set the "groups" property of a regular convolution layer to be the same as the number of output channels, and then it works.

However... in Metal the number of input channels in each group must be a multiple of 4 (as the error message says). But when you set groups == output channels, the number of input channels in each group is 1. And Metal does not accept that.

So here we have a model that works OK on the simulator (where it uses Accelerate instead of Metal Performance Shaders) but not on the GPU.

I will submit this as a bug report to Apple. Thanks for pointing this out!
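The groups trick described above can be made concrete outside of Metal. A depthwise convolution applies one small filter per input channel; expressing it as a regular grouped convolution with groups equal to the number of channels leaves exactly one input channel per group, which is the value MPSCNNConvolution rejects (it requires a multiple of 4). A minimal numpy sketch of the equivalence (illustration only, not Metal or Core ML code):

```python
import numpy as np

def depthwise_conv(x, w):
    """Depthwise conv: x is (C, H, W), w is (C, kH, kW) -- one filter per channel."""
    c, h, wd = x.shape
    kh, kw = w.shape[1], w.shape[2]
    out = np.zeros((c, h - kh + 1, wd - kw + 1))
    for ch in range(c):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[ch, i, j] = np.sum(x[ch, i:i + kh, j:j + kw] * w[ch])
    return out

def grouped_conv(x, w, groups):
    """Grouped conv: x is (C_in, H, W), w is (C_out, C_in // groups, kH, kW).
    Each group sees only C_in // groups input channels; with groups == C_out
    (the Caffe MobileNet trick) that is 1 channel per group."""
    c_out, c_in_per_group, kh, kw = w.shape
    c_in, h, wd = x.shape
    assert c_in // groups == c_in_per_group and c_out % groups == 0
    out = np.zeros((c_out, h - kh + 1, wd - kw + 1))
    per_out = c_out // groups
    for g in range(groups):
        xg = x[g * c_in_per_group:(g + 1) * c_in_per_group]
        for o in range(g * per_out, (g + 1) * per_out):
            for i in range(out.shape[1]):
                for j in range(out.shape[2]):
                    # each output channel only reads its own group's slice of x
                    out[o, i, j] = np.sum(xg[:, i:i + kh, j:j + kw] * w[o])
    return out
```

With `groups` set to the channel count, `grouped_conv` produces exactly the depthwise result — and `C_in // groups == 1`, the per-group channel count that trips Metal's multiple-of-4 check.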


gwangsc commented Jun 8, 2017

Crashed at the same place. Running on an iPhone 7 Plus, iOS 11.0.


flyingtango commented Jun 11, 2017

Crashed at the same place. iPad Pro 9.7", iOS 11.0.


austingg commented Jun 12, 2017

@hollance There is a TensorFlow implementation of MobileNet, and its depthwise conv is not implemented with groups. I wonder whether that model would work.


Owner

hollance commented Jun 12, 2017

@austingg There is no way to convert TensorFlow models to Core ML at the moment (only Keras models). In addition, the mlmodel format does not support depthwise convolution, so even if it were possible to convert TensorFlow, Core ML wouldn't know what to do with these layers.


austingg commented Jun 12, 2017

@hollance Thanks, that's a pity. mlmodel is still limited to basic CNN classification applications. We'll have to implement some of the unsupported layers ourselves with Metal.


gwangsc commented Jun 13, 2017

I think Core ML could expose an interface or callback that lets developers implement their own layers and integrate them into the Core ML pipeline. As long as Apple publishes an interface spec and users provide kernels that follow it, it should be doable.

Otherwise, Core ML won't be very useful, given how fast the DL field evolves: new layers and networks come out almost monthly.

Also, model size is a real problem. Of the four models provided on Apple's website, none is smaller than 10 MB. How could an app developer ship an app with a 50~100 MB model?
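To make the callback idea above concrete, here is a hypothetical sketch (in Python, with invented names — none of this is real Core ML API) of a runtime that falls back to user-registered kernels for layer types it doesn't support:

```python
class Runtime:
    """Toy inference runtime illustrating a custom-layer registry:
    built-in layers run natively; unknown layer types are dispatched
    to kernels the developer registered against a spec."""

    def __init__(self):
        self._custom = {}
        # a stand-in for the runtime's native layer implementations
        self._builtin = {"relu": lambda x, p: [max(v, 0.0) for v in x]}

    def register_layer(self, layer_type, kernel):
        """kernel: callable(inputs, params) -> outputs, per the interface spec."""
        self._custom[layer_type] = kernel

    def run_layer(self, layer_type, inputs, params):
        if layer_type in self._builtin:
            return self._builtin[layer_type](inputs, params)
        if layer_type in self._custom:  # fallback: developer-supplied kernel
            return self._custom[layer_type](inputs, params)
        raise ValueError(f"unsupported layer: {layer_type}")

rt = Runtime()
rt.register_layer("scale", lambda x, p: [v * p["factor"] for v in x])
```

With such a hook, a missing depthwise convolution could be supplied as a registered kernel instead of crashing the whole model.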


Owner

hollance commented Jun 13, 2017

@gwangsc I agree. Please file a feature request at https://bugreport.apple.com -- that's the only way Apple will listen...


XBeg9 commented Jun 26, 2017

Any updates here? Could you please post a link to the bug report? I want to track it. Thanks!


Owner

hollance commented Jun 27, 2017

It works on the device now with beta 2, but I'm not sure yet if that's because the model now runs on CPU instead of the GPU, or that Metal uses a workaround for this issue.


XBeg9 commented Jun 27, 2017

@hollance I think it's easy to check in Instruments


Owner

hollance commented Jun 27, 2017

@XBeg9 I haven't had much luck with Instruments and compute shaders. But with GPU Frame Capture in Xcode it should be possible to check. I just haven't had the time for it yet.


Shoshin23 commented Jul 8, 2017

This seems to be resolved in iOS 11 public beta too. I still have to check whether it's running on the CPU or GPU. I shall do that in a few hours.


huangxf14 commented Jan 6, 2018

Is there any update on this problem? I'm still hitting this bug.
