Seed option feature request and a question about onnx with custom sd model + lora #35
hi,
1) yes, I will add the ability to specify the seed in the next few days!
2) yes, the procedure is correct, and the resulting ONNX should contain all LoRAs in an "immutable" format. The problem is that this ONNX contains Einsum operations, which are not yet supported by OnnxStream. Everything is explained in more detail in this PR:
#36
3) yes, I will try to make it easier in the future :-)
4) asking users to build OnnxStream on their own hardware allows them to use the MAX_SPEED setting. However, I will try to make the build easier!
Thanks, Vito
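For background on point 2: many Einsum patterns reduce to plain transposes and matrix multiplications, which are the kinds of ops an inference engine is more likely to support. A small numpy sketch of that equivalence (purely illustrative, not OnnxStream code):

```python
# Illustrative only: an einsum like "bij,bjk->bik" (a batched matrix
# multiply, the kind of node LoRA fusion can introduce in the graph)
# is numerically identical to a plain batched MatMul.
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((2, 3, 4))
b = rng.standard_normal((2, 4, 5))

via_einsum = np.einsum("bij,bjk->bik", a, b)
via_matmul = a @ b  # batched MatMul over the leading "b" axis

assert via_einsum.shape == (2, 3, 5)
assert np.allclose(via_einsum, via_matmul)
```

Not every einsum decomposes this cleanly, but this is the general direction such support work takes.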
Can either of you provide an example of how to run the onnx2txt.ipynb script to convert a safetensors model into an ONNX format suitable for a Pi 4? I've been following along with the dreamshaper_8 model, and while it does produce a dreamshaper_8.onnx file plus a folder full of biases and weights, it won't run on the Pi because it's missing tokenizer/vocab.txt. onnx2txt seems to be the missing link, but I'm not sure how to use the ipynb file to process it. Thanks!
@coldeny Hello, have you successfully converted the "sd model + lora" to an OnnxStream model?
@vitoplantamura will it support LoRA?
hi,
following the guide in the main README, calling the "load_lora_weights" method on the "pipe" object before converting to ONNX should allow you to produce an OnnxStream model with one or more LoRAs embedded.
Vito
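A minimal sketch of what that looks like with diffusers, assuming a standard SD 1.x UNet (the latent and text-embedding shapes below are the usual 1.x defaults, and the small wrapper works around the pipeline's dataclass return type). Paths, model IDs, and the opset are placeholders, not tested values:

```python
def export_unet_with_lora(base_model, lora_path, onnx_path):
    """Sketch: bake a LoRA into the pipeline before ONNX export,
    so the exported graph contains it immutably. The resulting
    .onnx would then still need the repo's onnx2txt conversion."""
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(base_model)
    pipe.load_lora_weights(lora_path)  # repeat for additional LoRAs
    pipe.fuse_lora()  # merge LoRA weights into the base weights

    class UNetWrapper(torch.nn.Module):
        """Return a plain tensor instead of the pipeline's dataclass."""
        def __init__(self, unet):
            super().__init__()
            self.unet = unet

        def forward(self, sample, timestep, text_emb):
            return self.unet(sample, timestep, text_emb).sample

    # SD 1.x example shapes: 64x64 latent, 77x768 text embedding
    sample = torch.randn(1, 4, 64, 64)
    timestep = torch.tensor([1])
    text_emb = torch.randn(1, 77, 768)
    torch.onnx.export(
        UNetWrapper(pipe.unet),
        (sample, timestep, text_emb),
        onnx_path,
        opset_version=14,
    )
```

As noted above, a graph exported this way may still contain Einsum nodes, so it may not run under OnnxStream until that support lands.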
Thanks Vito~
Hello.
I can't believe how this was achieved... Not like I am capable of understanding it either. Congrats!!!
I have a few questions:
1 - Will there be an option like --seed in the future, so the same prompt can generate the same image again?
2 - At the moment, I first have to grab AUTOMATIC1111's webui, install https://github.com/AUTOMATIC1111/stable-diffusion-webui-tensorrt along with the zip file it wants (after somehow figuring out which PyTorch version the webui uses), and then generate a picture with all the LoRAs I want to use. Only after that image is generated can I generate the ONNX file.
After this file is made, I am not sure if I also have to "Convert ONNX to TensorRT" as stated in the readme, and possibly run out of VRAM (it was not specified; they only said it would be huge. 4 GB of VRAM is so little these days that I guess I can't do that step, but given the project name "OnnxStream" maybe I won't have to, and I really hope I don't...)
And then I have a model "more" compatible with OnnxStream, but now I need to use the notebook file (https://github.com/vitoplantamura/OnnxStream/blob/master/onnx2txt/onnx2txt.ipynb) to convert this ONNX file to txt, and then I get the really compatible thing, right?
And then this "thing" is the model and the LoRAs I used, all in one, with no way to disable, change, or combine with other LoRAs... It is basically merged and finalized. Right?
3 - So, will it become easier and more flexible eventually, or are we just constrained by hardware? I plan to run this on my (hopefully soon to arrive) Zero 2 W.
4 - Will you eventually provide binaries for the RPi, Windows, etc., kept up to date with each commit, or a way to update? This may be a trivial question.
I have also checked #29 and I got scared lol. I may eventually try it, or just wait in hopes of things getting easier, but I don't know if they ever will... I have never compiled anything in my life; this would be my first attempt.
Sorry for this wall of text... This project seems so exciting.
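On question 1, for anyone curious how a --seed flag usually gives reproducible output: it just fixes the RNG state, so the same prompt plus the same seed replays the same initial noise. A minimal stdlib illustration (the function name is made up; this is not OnnxStream code):

```python
# Not OnnxStream code: a toy sketch of why fixing the seed makes
# generation deterministic. The "latents" here are just random floats
# standing in for the initial noise a diffusion model starts from.
import random

def fake_latents(seed, n=4):
    rng = random.Random(seed)  # dedicated generator seeded explicitly
    return [rng.random() for _ in range(n)]

assert fake_latents(42) == fake_latents(42)  # same seed -> same noise
assert fake_latents(42) != fake_latents(43)  # new seed -> new noise
```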