
GPU usage 0% on M3 Macbook Air #357

Closed
yyzguy opened this issue Aug 16, 2024 · 9 comments

Comments

@yyzguy

yyzguy commented Aug 16, 2024

Using a MacBook Air with M3. Installed the onnxruntime build for Apple Silicon.
Used the command line to start with CoreML.
CPU is running hard at 766%.
GPU is at 0%.

What am I doing wrong?

@e-basaran

We're hoping to get a fix. It seems onnxruntime's CoreML support is incomplete. Here is what you can do to get better performance (lower CPU usage, lower temperatures, higher fps):

For better performance, download inswapper_216.onnx (the non-fp16 version) from Hugging Face, rename it to inswapper_216.fp16.onnx, and replace the corresponding file in the modules folder with it.
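That rename-and-replace step can be sketched in Python. The target filename is taken from the comment above; the paths and the helper itself are illustrative assumptions, not part of the project:

```python
import shutil
from pathlib import Path

def install_model(downloaded: Path, models_dir: Path) -> Path:
    """Copy the downloaded non-fp16 model over the fp16 filename the app loads.

    The advice above says the app expects 'inswapper_216.fp16.onnx', so we
    give the non-fp16 file that name (assumption: this filename is correct
    for your install).
    """
    target = models_dir / "inswapper_216.fp16.onnx"
    models_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(downloaded, target)  # replaces any existing file at target
    return target
```

Keeping the original download untouched and copying (rather than renaming in place) makes it easy to roll back if the swap doesn't help.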

Also, instead of installing onnxruntime 1.13.1, try 1.16.3:

```shell
pip uninstall onnxruntime onnxruntime-silicon
pip install onnxruntime-silicon==1.16.3
```

After these steps, you should get about +10 fps and no temperature issues. It still does not fully utilize the GPU cores, though.
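On the "does it actually use the GPU" question: onnxruntime quietly falls back to CPU when a requested execution provider isn't registered, which would match the 766% CPU / 0% GPU symptom. The helper below is an illustrative sketch of that fallback logic, not onnxruntime's actual code; with a real install you would check `onnxruntime.get_available_providers()` to see whether `CoreMLExecutionProvider` is listed:

```python
def choose_provider(requested, available):
    """Return the first requested execution provider that is available,
    falling back to CPU (mirrors onnxruntime's fallback behaviour)."""
    for ep in requested:
        if ep in available:
            return ep
    return "CPUExecutionProvider"

# If onnxruntime-silicon registered CoreML, the session can run on it:
print(choose_provider(["CoreMLExecutionProvider"],
                      ["CoreMLExecutionProvider", "CPUExecutionProvider"]))
# → CoreMLExecutionProvider

# If only the plain onnxruntime wheel is installed, it drops to CPU:
print(choose_provider(["CoreMLExecutionProvider"],
                      ["CPUExecutionProvider"]))
# → CPUExecutionProvider
```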

@westNeighbor

@Retha23 Can you share the onnx download link?

@e-basaran

> @Retha23 Can you share the onnx download link?

Here: https://huggingface.co/ezioruan/inswapper_128.onnx/resolve/main/inswapper_128.onnx?download=true

@westNeighbor

@Retha23, hmm, I thought it was a new inswapper_216.onnx, so it's a typo? What's the difference compared to the one the host provided?

@yyzguy
Author

yyzguy commented Aug 16, 2024

I forgot to rename it, but it seems to have installed and it works faster. My clips are too short for me to give a quantitative answer on the speed increase. Also, the output is a smaller file.

@IMNotMax

Works better on an M1 Pro with 16 GB.
Thanks a lot!

@e-basaran

e-basaran commented Aug 19, 2024 via email

@TVikg

TVikg commented Sep 17, 2024

I tried this, but with my MacBook Pro M1 Pro and 16 GB it's still very slow...
Is it correct to launch it with --execution-provider coreml?

@IMNotMax

Yes, even with the CoreML execution provider it's slow on an M1 Pro with 16 GB. Not usable for live use, for example. (When I tested it, this was even with the experimental branch, btw.)

@KRSHH KRSHH closed this as completed Oct 23, 2024