Question re. ANE Usage with Flexible Input Shapes #1764

Open
rsomani95 opened this issue Feb 9, 2023 · 3 comments
Labels
Core ML Framework · question

Comments


rsomani95 commented Feb 9, 2023

❓Question

Not sure if this is a framework issue, or one with coremltools. My hunch is the latter, so I'm asking here.

I've exported a model that requires a flexible input shape, with the default size of the flexible dimension set to 1. This model doesn't use the ANE at all and only runs on the CPU.

Out of curiosity, I fixed the input shape to 1 to see if the model would run faster. That model uses the GPU / ANE and is significantly faster. Does this mean ANE usage is off the table with flexible input shapes, or is there scope to redefine the model so it can use the ANE with flexible shapes too?

Unfortunately, I cannot share the model definition publicly.

Fixed input shape:

(screenshot: CleanShot 2023-02-09 at 18 46 12)

Flexible input shape:

(screenshot: CleanShot 2023-02-09 at 18 46 15)
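
To illustrate the setup, here is a rough sketch of the two export paths using a toy stand-in model (I can't share the real one, so every name and shape below is a placeholder):

```python
import torch
import coremltools as ct


class ToyNet(torch.nn.Module):
    """Toy fully convolutional stand-in for the real (unshared) model."""

    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)


traced = torch.jit.trace(ToyNet().eval(), torch.rand(1, 3, 224, 224))

# Variant 1: fixed input shape (first dimension pinned to 1).
# In my case, this variant runs on the GPU / ANE.
fixed_model = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input", shape=(1, 3, 224, 224))],
)

# Variant 2: flexible input shape via RangeDim with a default of 1.
# In my case, this variant only runs on the CPU.
flexible_model = ct.convert(
    traced,
    inputs=[ct.TensorType(
        name="input",
        shape=ct.Shape(shape=(ct.RangeDim(1, 32, default=1), 3, 224, 224)),
    )],
)
```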

rsomani95 added the question label Feb 9, 2023
TobyRoseman (Collaborator) commented Feb 10, 2023

> Not sure if this is a framework issue, or one with coremltools. My hunch is the latter, so I'm asking here.

I think this is much more likely to be an issue with the Core ML Framework. At a high level, the coremltools package takes a source model (e.g. a TensorFlow or PyTorch model) and converts it to MIL ops. The Core ML Framework then decides which device (i.e. CPU, GPU, or ANE) runs each op.
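
As a side note, the only placement-related control coremltools exposes is restricting which compute units the model is allowed to use; a minimal sketch (the path is a placeholder):

```python
import coremltools as ct

# coremltools can only restrict which compute units are *allowed*; the Core ML
# Framework still decides, per op, where the model actually runs within that set.
mlmodel = ct.models.MLModel(
    "model.mlpackage",  # placeholder path to an already-converted model
    compute_units=ct.ComputeUnit.CPU_AND_NE,  # also: ALL, CPU_ONLY, CPU_AND_GPU
)
```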

For help with the Core ML Framework, you could post in, or search previous posts on, the Apple Developer Forums. Submitting this issue via Feedback Assistant would also be good.

Without steps to reproduce this issue, I don't think there is much we can do here.


vade commented Mar 6, 2023

Filed internal report FB12038163

junpeiz added the Core ML Framework label Mar 7, 2023
aseemw (Collaborator) commented Apr 4, 2023

As discussed in #1763, the model should continue to use the ANE with EnumeratedShapes, unless using flexible input shapes causes some layers to become dynamic, in which case they might not be supported on the Neural Engine. If the ops are exactly the same between the static and flexible models (say, a fully convolutional model) and the static model runs on the NE but the enumerated-shape flexible model does not, then it's likely a bug.
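
For reference, a rough sketch of the EnumeratedShapes path (a toy conv model and illustrative shapes, not the actual model):

```python
import torch
import coremltools as ct

# Toy stand-in for the real (unshared) fully convolutional model.
toy = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1).eval()
traced = torch.jit.trace(toy, torch.rand(1, 3, 224, 224))

# A fixed set of allowed shapes (illustrative values), rather than an open range.
enumerated = ct.EnumeratedShapes(
    shapes=[[1, 3, 224, 224], [1, 3, 448, 448], [1, 3, 896, 896]],
    default=[1, 3, 224, 224],
)

flexible_model = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input", shape=enumerated)],
)
```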
