Unable to run model on iPhone 14 Pro with error: "failed to load ANE model" — works fine from CLI #51

Open
jverkoey opened this issue Dec 9, 2022 · 7 comments

jverkoey commented Dec 9, 2022

When I run:

let config = MLModelConfiguration()
config.computeUnits = .all
let pipeline = try! StableDiffusionPipeline(resourcesAt: modelUrl, configuration: config)

I get the following error:

[espresso] [Espresso::handle_ex_plan] exception=ANECF error: failed to load ANE model. Error=ANECCompile(/var/mobile/Library/Caches/com.apple.aned/tmp/com.featherless.MLDemo/CD3F6A18321CD0468900D511BF6E116C1AC2F5D1DB1D65F480343B1E5551B8A8/7204A653B1634F14166A639585DE3E3EDCFE052221F97F3476ECE9475CD8A5DE/) FAILED: err=(
    CompilationFailure
)
[coreml] Error plan build: -1.
[client] doUnloadModel:options:qos:error:: nil _ANEModel

The model is a converted Stable Diffusion model. I converted it using the following command-line invocation:

python3 -m python_coreml_stable_diffusion.torch2coreml --convert-unet \
  --convert-text-encoder --convert-vae-decoder --convert-safety-checker \
  -o /Users/featherless/MLDemo/new-model --model-version featherless/test-model \
  --chunk-unet --bundle-resources-for-swift-cli

The same model runs fine when invoked via command line:

swift run StableDiffusionSample "a digital portrait of an astronaut riding a horse, futuristic, highly detailed, HDR, 4k, illustration" \
  --resource-path /Users/featherless/MLDemo/new-model/Resources  \
  --seed=1235 --output-path /Users/featherless/MLDemo/output

Environment

Xcode Version 14.1 (14B47b)
Apple M1 Max, Ventura 13.1
iPhone 14 Pro, iOS 16.1.2

jverkoey commented Dec 9, 2022

Ah, I tried rebuilding the model again, and this time I'm getting a Terminated due to memory issue error on the device.

jverkoey commented Dec 9, 2022

Switching to config.computeUnits = .cpuAndNeuralEngine per the README's recommendation fixed the memory crash (change shown in context below), but now I'm back to getting the "failed to load ANE model" error :(
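
For reference, the change in context; this is the same snippet as in the original report, assuming the same modelUrl:

import CoreML
import StableDiffusion

let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine  // was .all; the README recommends this for iOS
let pipeline = try! StableDiffusionPipeline(resourcesAt: modelUrl, configuration: config)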

jverkoey commented Dec 9, 2022

Using config.computeUnits = .cpuOnly fixes the compilation error, but now I'm running out of memory again. Memory looks like it gets exhausted when the Image Decoder is loaded, so I wonder if it's possible to lazily load that model (rough sketch of the idea below)...
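
To illustrate what I mean by lazy loading, here's a rough sketch; this is not the package's actual API, and LazyModel plus the VAEDecoder.mlmodelc name are illustrative. The idea is that the decoder's MLModel isn't created until the first time it's needed, and can be dropped afterwards:

import CoreML
import Foundation

// Hypothetical wrapper: defers loading the model until first use so its
// memory isn't held during the diffusion loop. Names are illustrative.
final class LazyModel {
    private let url: URL
    private let configuration: MLModelConfiguration
    private var loaded: MLModel?

    init(url: URL, configuration: MLModelConfiguration) {
        self.url = url
        self.configuration = configuration
    }

    // Loads the model on first access; subsequent calls reuse the cached instance.
    func model() throws -> MLModel {
        if let loaded { return loaded }
        let model = try MLModel(contentsOf: url, configuration: configuration)
        loaded = model
        return model
    }

    // Drops the loaded model so its memory can be reclaimed.
    func unload() {
        loaded = nil
    }
}

// Usage sketch, assuming the resources layout produced by --bundle-resources-for-swift-cli:
// let decoder = LazyModel(url: resourcesURL.appendingPathComponent("VAEDecoder.mlmodelc"),
//                         configuration: config)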

outcoldman commented

See #49 (comment)

jverkoey commented Dec 9, 2022

Ah interesting, thank you! Unfortunately, it looks like "Personal development teams do not support the Extended Virtual Addressing capability." Did you find that it was required?

brandonkoch3 commented Dec 10, 2022

> Ah interesting, thank you! Unfortunately, it looks like "Personal development teams do not support the Extended Virtual Addressing capability." Did you find that it was required?

I'm not entirely sure this is accurate (I'm able to enable the Extended Virtual Addressing capability on a personal development team), though perhaps you saw something indicating an app would be rejected from the App Store if it uses this capability? Nonetheless, to get this to run on my iPhone 14 Pro Max without the memory termination, I have to create the pipeline with computeUnits set to .cpuAndGPU, like so:

let config = MLModelConfiguration()
config.modelDisplayName = model.name
config.computeUnits = .cpuAndGPU
let pipeline = try StableDiffusionPipeline(resourcesAt: resourceURL, configuration: config, disableSafety: false)
...

Further, I also had to enable the "Increased Memory Limit" entitlement. I followed the instructions in this post from Quinn at Apple Developer Support on the Apple Developer Forums, and the link it provides, to enable "Increased Memory Limit" (which, unlike Extended Virtual Addressing, doesn't show up as a capability in Xcode's Signing & Capabilities). With "Increased Memory Limit" and "Extended Virtual Addressing" both enabled, and the pipeline built with .cpuAndGPU, I'm able to generate images on-device using Stable Diffusion 2, for example. (The entitlement keys are sketched below.)
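
If you prefer to edit the .entitlements file directly, this is a minimal sketch with both capabilities enabled; the two keys are the standard identifiers for these entitlements, but verify them against your own provisioning profile:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Extended Virtual Addressing -->
    <key>com.apple.developer.kernel.extended-virtual-addressing</key>
    <true/>
    <!-- Increased Memory Limit -->
    <key>com.apple.developer.kernel.increased-memory-limit</key>
    <true/>
</dict>
</plist>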

(Most of this is from the comment that @outcoldman referenced in the link above).
