Get Flux working on Apple Silicon #1264
Conversation
We would love to see Flux work on Apple Silicon Macs.
Yup, waiting impatiently to try the Q (quantized) versions, or maybe NF4.
Also, shouldn't it be possible to set it to bfloat16 too, or at least offer it as an option? If I remember right, starting with PyTorch 2.3.0 it is supported on MPS. I did try it once in Invoke. OK, apparently bfloat16 only works on M2 or newer, but it would still be nice to have ;)
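For anyone who wants to check their own machine, a minimal probe along these lines (my own sketch, not code from this PR) should show whether the installed PyTorch build accepts bfloat16 tensors on MPS:

```python
import torch

# Probe whether this PyTorch build accepts bfloat16 tensors on MPS.
if torch.backends.mps.is_available():
    try:
        x = torch.ones(2, 2, device="mps", dtype=torch.bfloat16)
        print("bfloat16 on MPS works:", (x + x).dtype)
    except RuntimeError as err:
        print("bfloat16 on MPS failed:", err)
else:
    print("MPS backend not available")
```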
Tried using bfloat16 on my M3, but got the following error: `RuntimeError: "arange_mps" not implemented for 'BFloat16'`
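That error means the MPS backend has no BFloat16 kernel for `torch.arange` on that build. A common workaround (a sketch, assuming the failing call is a plain `torch.arange`) is to build the range in a supported dtype and cast afterwards:

```python
import torch

device = torch.device("mps")

# Direct call fails on affected builds:
#   torch.arange(10, device=device, dtype=torch.bfloat16)
#   -> RuntimeError: "arange_mps" not implemented for 'BFloat16'

# Workaround: create the range in float32, then cast the result.
t = torch.arange(10, device=device, dtype=torch.float32).to(torch.bfloat16)
print(t.dtype)  # torch.bfloat16
```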
@DenOfEquity is there anything I should do to get this approved?
I guess none of the active collaborators/maintainers can actually test what's going on with MPS. There is also some confusion in the linked issue about whether it works at all, or only with some models. But it doesn't/can't break anything, so if it helps at least sometimes, I'm calling it progress.
For me it seems none of the FP8 checkpoints work; I get an error even if I select the FP16 T5. The GGUF version I tried did not work either, failing with `Unsupported type byte size: UInt16`. The full FP16 Flux works, but it is horribly slow on my M3 Pro: about 6 min for 20 steps. Also not sure why bfloat16 does not work, even with PyTorch 2.4.1 or nightly. Update: or maybe it does work, but the code needs to be different.
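If it helps with debugging: the dtypes stored in a checkpoint can be inspected without loading it onto MPS. A sketch using the safetensors API (the filename is hypothetical; this assumes a `.safetensors` checkpoint):

```python
from safetensors import safe_open

# Inspect the on-disk dtypes of a checkpoint; FP8 checkpoints store
# float8 tensors, which may be what fails to load on MPS here.
with safe_open("flux1-dev-fp8.safetensors", framework="pt", device="cpu") as f:
    for name in list(f.keys())[:5]:  # first few tensors are enough
        print(name, f.get_slice(name).get_dtype())
```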
Actually, if I do this and use torch nightly (I need to recheck 2.4.1), then bfloat16 seems to work, judging by the output I get. It took 5:40 min on the Torch 2.6 nightly, and it works with PyTorch 2.4.1 too. However, it does not matter much in practice, as the FP8 and the Q4_1 GGUF checkpoints still give the same error.
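For reference, a hypothetical sketch (not the snippet from the comment above) of the kind of change being described: moving a module to MPS in bfloat16 and running a forward pass to confirm the dtype is usable end to end.

```python
import torch
import torch.nn as nn

# Move a small module to MPS in bfloat16 and run a forward pass.
model = nn.Linear(16, 16).to(device="mps", dtype=torch.bfloat16)
x = torch.randn(1, 16, device="mps", dtype=torch.bfloat16)
print(model(x).dtype)  # expect torch.bfloat16 on M2+ with PyTorch >= 2.3
```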
Should fix #1103