Support for fp16 LCM Dreamshaper #9
Hi!
I just downloaded this fp16 model from here:
https://huggingface.co/aislamov/lcm-dreamshaper-v7-onnx/tree/main
It loads fast and fine, but when I hit generate it stops immediately: the model stays loaded, but it won't generate anything. Could you take a look at it, @saddam213 @dakenf, please? I'm using the CPU, so I don't know whether this is a GPU-optimized model or not.

Comments
Looks like it doesn't like the unet's timestep: the fp16 model's timestep input is a float, while the original's is a long.
Yeah, that was it: LatentConsistencyDiffuser.cs:198.
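For anyone who wants to confirm this on their own model: the declared input types are visible in the ONNX metadata. A minimal Python sketch (the repo itself is C#; the file paths here are placeholders, and the `timestep` input name matches the usual diffusers unet export):

```python
# Compare the unet's "timestep" input type between the fp32 and fp16 exports.
import onnxruntime as ort

for path in ("unet-fp32/model.onnx", "unet-fp16/model.onnx"):
    session = ort.InferenceSession(path, providers=["CPUExecutionProvider"])
    timestep = next(i for i in session.get_inputs() if i.name == "timestep")
    # Prints e.g. "tensor(int64)" for the original and "tensor(float)"
    # (or "tensor(float16)") for the fp16 export.
    print(path, timestep.type)
```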
That should be easy enough to support; let me see if I can squeeze it into tomorrow's release.
@saddam213 I've been trying to get a PR going, but I don't have access to the IOnnxModel in DiffuseAsync for _onnxModelService.GetInputMetadata. Is it available and I'm just not seeing it, or will I have to edit OnnxModelService?
Sorry, I missed your PR and already committed a fix: 38f60b6. GetInputMetadata is accessible and worked perfectly; our implementations were pretty much the same. Thanks for the PR!
The latest commit fixes the immediate issue for both pipelines. I added the functionality to both diffuser base classes, but I think the implementation should be moved to a shared place, as I assume new pipelines will need it too. Perhaps we need a static helper class for methods like these, since DecodeLatents is the same across both as well.
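As a rough illustration of what such a shared helper could look like: the core of the fix is metadata-driven dispatch (read the element type the model declares, then build the timestep tensor to match). A sketch of that idea in Python rather than the repo's C#, with hypothetical names; it is not the committed implementation:

```python
import numpy as np
import onnxruntime as ort

# Map ONNX type strings to numpy dtypes for the cases seen in these models.
_TIMESTEP_DTYPES = {
    "tensor(int64)": np.int64,
    "tensor(float)": np.float32,
    "tensor(float16)": np.float16,
}

def make_timestep(session: ort.InferenceSession,
                  timestep: float,
                  name: str = "timestep") -> np.ndarray:
    """Build the timestep feed with whatever element type the unet declares."""
    meta = next(i for i in session.get_inputs() if i.name == name)
    return np.array([timestep], dtype=_TIMESTEP_DTYPES[meta.type])
```

The same dispatch works for fp32, fp16, and LCM models without branching on pipeline type, which is why a shared helper makes sense.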
Nice, thanks guys! Can't wait for the update to test it out.
Hello everyone! If you don't mind, I'll share some tips on model conversion based on this doc.

Long story short: if you run the fusion optimizer on the model, it combines many ops into one, so a graph of 3k+ ops comes down to 1k+. That reduces VRAM/RAM usage (fewer GPU buffers allocated for each node's inputs/outputs) and improves performance, since CUDA and DML have fused attention kernels.

I've been using this script, which already has optimized settings for DML, https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16/blob/main/conv_sd_to_onnx.py, but with some changes. With these optimizations and fp16 you should be able to run the unet with less than 5 GB of VRAM. You can check the results with this model I've converted for WebGPU: https://huggingface.co/aislamov/stable-diffusion-2-1-base-onnx/tree/main

If you want maximum performance, you can create two revisions of the model on Hugging Face: one with max GPU optimizations and another for CPU.

Feel free to ask me any questions!
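To make the fusion step concrete, here is a minimal sketch of it using onnxruntime.transformers directly (the linked conv_sd_to_onnx.py script drives the same machinery). The paths are placeholders, and `model_type="unet"` fusion support assumes a reasonably recent onnxruntime release:

```python
# Fuse the unet graph and convert weights to fp16.
from onnxruntime.transformers.optimizer import optimize_model

opt = optimize_model(
    "unet/model.onnx",
    model_type="unet",   # "vae" and "clip" cover the other SD components
    opt_level=0,         # rely on the fusion passes, not ORT graph opts
    use_gpu=True,        # keep fusions that need GPU kernels (CUDA/DML attention)
)
opt.convert_float_to_float16(keep_io_types=True)  # fp16 weights, fp32 I/O
opt.save_model_to_file("unet_optimized.onnx")
print(f"op count after fusion: {len(opt.model.graph.node)}")
```

For a CPU revision, the rough idea would be the same call with `use_gpu=False`, since the GPU-only fused kernels don't help there.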
Hi! Thank you so much for sharing this. Sadly I have no idea how to code, so I can't do it myself; could you please make some fp16 models for CPU too? Lyriel v16, Deliberate v2 or v3, and epiCRealism are a few good ones; any of them would be great. I'd like to use and test them in OnnxStack if possible, thanks. Also, I assume this LCM model is GPU-only? Could you make a CPU-optimized one too? Either way, I'll test this one on CPU tomorrow to see how it goes!
LCM fp16 now works very well, and it's so fast! I'm not entirely sure what's going on, though: I used DirectML and set the device to 0 for the unet and to 1 for the rest, so I think it's using my AMD and Intel GPUs (in Task Manager my Intel graphics goes to 99% usage, so it's mostly that GPU), not the CPU this time. I'll close this topic if that's OK.