
Seed option feature request and a question about ONNX with custom SD model + LoRA #35

Open · coldeny opened this issue Oct 22, 2023 · 6 comments


coldeny commented Oct 22, 2023

Hello.

I can't believe how this was achieved... not that I'm capable of understanding it either. Congrats!!!
I have a few questions:

1 - Will there be a --seed-style option in the future, so that the same prompt can reproduce the same image?

2 - At the moment, I first have to grab AUTOMATIC1111's webui, install https://github.com/AUTOMATIC1111/stable-diffusion-webui-tensorrt (plus the zip file it wants, after somehow figuring out which PyTorch version the webui uses), and then generate a picture with all the LoRAs I want to use. Only after that image is generated can I generate the ONNX file.

After this file is made, I am not sure whether I also have to "Convert ONNX to TensorRT" as stated in that readme, and possibly run out of VRAM (it isn't specified; they only say it will be huge). 4 GB of VRAM is so little these days that I guess I can't do that step, but given the project name "OnnxStream", maybe I won't have to, and I really hope I don't.

And then I have a model that is "more" compatible with OnnxStream, but I still need the notebook (https://github.com/vitoplantamura/OnnxStream/blob/master/onnx2txt/onnx2txt.ipynb) to convert this ONNX file to the text format, and only then do I get the really compatible thing, right?

And then, this "thing" is model and the loras I used in the model all in one, with no chance to disable, change or enable or combine with other loras... It is basically merged and finalized. Right?

3 - So, will this become easier and more flexible eventually, or are we simply constrained by the hardware? I plan to run it on my Zero 2 W, which will hopefully arrive soon.

4 - Will you eventually provide binaries for Raspberry Pi, Windows, etc. that are kept up to date with each commit, or some way to update? This may be a trivial question.

I have also checked #29 and I got scared, lol. I may eventually try it, or just wait and hope things get easier, though I don't know if they ever will... I have never compiled anything in my life; this would be my first attempt.

Sorry for this wall of text... This project seems so exciting.
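
For what it's worth, here is a rough sketch of the conversion path described in point 2, going through diffusers instead of the A1111 TensorRT extension. As far as I can tell, OnnxStream only needs the plain .onnx file as input to onnx2txt.ipynb; the "Convert ONNX to TensorRT" step belongs to that extension's own pipeline. The checkpoint name, input/output names, and opset below are placeholders and assumptions, not something the OnnxStream docs prescribe:

```python
# Hypothetical export sketch: turn the pipeline's UNet into a plain .onnx file
# that onnx2txt.ipynb can then convert to the OnnxStream text format.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # or any SD 1.5-style checkpoint
    torch_dtype=torch.float32,
)

class UNetWrapper(torch.nn.Module):
    """Return a plain tensor instead of diffusers' output dataclass."""
    def __init__(self, unet):
        super().__init__()
        self.unet = unet

    def forward(self, sample, timestep, encoder_hidden_states):
        return self.unet(sample, timestep, encoder_hidden_states, return_dict=False)[0]

# Dummy inputs with SD 1.5 shapes: latent (1, 4, 64, 64), a timestep,
# and CLIP text-encoder hidden states (1, 77, 768).
sample = torch.randn(1, 4, 64, 64)
timestep = torch.tensor(999, dtype=torch.int64)
hidden_states = torch.randn(1, 77, 768)

torch.onnx.export(
    UNetWrapper(pipe.unet).eval(),
    (sample, timestep, hidden_states),
    "unet.onnx",
    input_names=["sample", "timestep", "encoder_hidden_states"],
    output_names=["out_sample"],
    opset_version=14,  # assumption; check which opset onnx2txt.ipynb expects
)
```

Because the fp32 UNet is larger than 2 GB, recent PyTorch versions write the weights as external data files next to unet.onnx; that unet.onnx (plus its weight files) is what onnx2txt.ipynb would then turn into the text-format model OnnxStream reads. No TensorRT conversion is involved in this path.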

vitoplantamura (Owner) commented Oct 25, 2023 via email

elliot-sawyer commented:

Can either of you provide an example of how to run the onnx2txt.ipynb script to convert a safetensors model to an ONNX format suitable for a Pi 4? I've been following along with the dreamshaper_8 model, and while it does produce a dreamshaper_8.onnx file and a folder full of biases and weights, it won't run on the Pi because it's missing tokenizer/vocab.txt. onnx2txt seems to be the missing link there, but I'm not sure how to use the ipynb file to process it. Thanks!
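
In case it helps while waiting for a reply, here is a small sketch of what I would try for the dreamshaper_8 case, with heavy caveats: using from_single_file to load the checkpoint, and copying the tokenizer folder from an already-converted OnnxStream SD 1.5 model, are my assumptions rather than documented OnnxStream steps. SD 1.5 finetunes like DreamShaper keep the standard CLIP tokenizer, and onnx2txt does not produce tokenizer/vocab.txt, so reusing the stock one should work:

```python
# Hypothetical: load the single-file checkpoint, export its UNet as in the
# sketch above, run onnx2txt.ipynb on the result, then reuse the tokenizer
# folder from a stock OnnxStream SD 1.5 model directory (paths are placeholders).
import shutil
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file("dreamshaper_8.safetensors")

# ... torch.onnx.export of pipe.unet as in the earlier sketch,
# then convert the resulting .onnx with onnx2txt.ipynb ...

shutil.copytree(
    "stable-diffusion-1.5-onnxstream/tokenizer",  # placeholder: stock OnnxStream SD 1.5 folder
    "dreamshaper_8-onnxstream/tokenizer",         # placeholder: your converted model folder
    dirs_exist_ok=True,
)
```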


noah003 commented Apr 13, 2024

@coldeny Hello, have you successfully converted the "SD model + LoRA" to an OnnxStream model?


noah003 commented May 3, 2024

@coldeny Hello, have you successfully converted the "SD model + LoRA" to an OnnxStream model?

@vitoplantamura will it support LoRA?

vitoplantamura (Owner) commented May 5, 2024 via email (reply quoted in the next comment)


noah003 commented May 5, 2024

Hi, following the guide in the main README, calling the "load_lora_weights" method on the "pipe" object before converting to ONNX should allow you to produce an OnnxStream model with one or more LoRAs embedded. Vito

Thanks vito~
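
For anyone landing here later, a minimal sketch of what that suggestion might look like in practice; the LoRA file name and the fuse_lora() call are assumptions on my part (fuse_lora() only exists in newer diffusers releases), and everything after load_lora_weights is the same export path sketched earlier in the thread:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float32
)

# Bake one or more LoRAs into the pipeline before exporting to ONNX.
pipe.load_lora_weights(".", weight_name="my_style_lora.safetensors")  # placeholder file
pipe.fuse_lora()  # assumption: merges the adapter into the base weights on recent diffusers

# ...then export pipe.unet with torch.onnx.export and convert the resulting
# .onnx with onnx2txt.ipynb. The LoRAs end up merged into the exported weights,
# which matches the "merged and finalized" behaviour described above.
```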
