
inferred on the cloud or local? #98

Closed
cwy0110 opened this issue Oct 23, 2023 · 2 comments
@cwy0110 commented Oct 23, 2023

Is the inference performed on the cloud server or on the local smartphone? This is important to me.

@Foul-Tarnished

You can do both.

Local doesn't have many settings, so no LoRA, no embeddings, ...

You can't even use another checkpoint unless you edit a hardcoded link in docs/models.json (a link to download a model converted from .safetensors to .onnx to .ort) and rebuild the whole app with Android Studio.
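
For reference, that .safetensors → .onnx → .ort conversion can be sketched roughly like this (an untested sketch using Hugging Face optimum and onnxruntime's bundled ORT-format converter; the model ID and output directory are placeholders, and this is not necessarily the exact pipeline the app's prebuilt checkpoints were made with):

```python
# Rough sketch: export a Stable Diffusion checkpoint to ONNX, then convert
# to the mobile-friendly ORT format. Assumes `optimum[onnxruntime]` is
# installed; the model ID and output directory below are placeholders.
import subprocess

from optimum.onnxruntime import ORTStableDiffusionPipeline

# Step 1: export the diffusers/.safetensors checkpoint to ONNX.
pipe = ORTStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder model ID
    export=True,
)
pipe.save_pretrained("sd_onnx")

# Step 2: convert the exported .onnx files to .ort using onnxruntime's
# bundled converter, invoked here as a CLI module.
subprocess.run(
    ["python", "-m", "onnxruntime.tools.convert_onnx_models_to_ort", "sd_onnx"],
    check=True,
)
```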

You can find pre-converted checkpoints here: https://huggingface.co/Androidonnxfork/test/tree/main/fp16fullonnxsdquantized_in_ort

It's easier to use https://github.com/ZTMIDGO/Android-Stable-diffusion-ONNX instead, with a checkpoint from the link above.

@ShiftHackZ (Owner)

Inference is done on the server side for the A1111 and Horde AI configurations. For the Local Diffusion configuration, everything is done on the local Android device (you can even use Local Diffusion in airplane mode if you pre-download a local checkpoint model in the configuration settings).
