
Roadmap #22

Closed
2 of 7 tasks
decahedron1 opened this issue Mar 12, 2023 · 3 comments

decahedron1 (Member) commented Mar 12, 2023

  • Img2img - March 2023

  • CLIP layer skip - March 2023 (see the sketch after this list)

  • Textual inversion - March 2023

  • Upload more pre-converted models - March 2023

  • Scheduler rewrite (Rewrite scheduler system to be more like k-diffusion #16) - ?

  • "Hi-res fix" from A1111 webui - ? (as soon as I can buy a better GPU, I can't test with 6 GB of VRAM...)

  • Web UI - Q2 2023
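
To make the "CLIP layer skip" item concrete, here is a minimal sketch (not from the original issue) of the usual approach: feed the UNet text embeddings taken from an earlier hidden layer of the CLIP text encoder instead of the final one. It uses the Hugging Face transformers API purely for illustration; pyke-diffusers' own text-encoder path may look different.

# Illustrative only: clip_skip=2 means using the penultimate text-encoder layer.
# Model name and function are placeholders, not pyke-diffusers' API.
import torch
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

def encode_prompt(prompt: str, clip_skip: int = 1) -> torch.Tensor:
    tokens = tokenizer(prompt, padding="max_length", truncation=True,
                       max_length=tokenizer.model_max_length, return_tensors="pt")
    with torch.no_grad():
        out = text_encoder(tokens.input_ids, output_hidden_states=True)
    # hidden_states[-1] is the last encoder layer; clip_skip=2 picks the penultimate one.
    hidden = out.hidden_states[-clip_skip]
    # Stable Diffusion pipelines re-apply the final layer norm to the skipped layer.
    return text_encoder.text_model.final_layer_norm(hidden)

embeddings = encode_prompt("masterpiece, best quality, 1girl", clip_skip=2)
print(embeddings.shape)  # torch.Size([1, 77, 768])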

decahedron1 added the help wanted label on Mar 12, 2023
decahedron1 self-assigned this on Mar 12, 2023
decahedron1 pinned this issue on Mar 12, 2023
oovm (Contributor) commented Mar 12, 2023

How about inferring prompt words from pictures?

I have completed the deep-danbooru inference part: oovm/deep-danbooru

The other one is CLIP inference.
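
Not part of the original comment, but as a rough sketch of what CLIP-based prompt recovery could look like, here is a zero-shot tag scorer built on Hugging Face's CLIP. The model name, tag list, and file path are placeholders; oovm's actual implementation may differ.

# Hypothetical sketch: score candidate tags against an image with CLIP and keep the
# best-matching ones as prompt words. Not taken from oovm/deep-danbooru.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

candidate_tags = ["1girl", "landscape", "night sky", "watercolor", "photorealistic"]
image = Image.open("sample.png")  # placeholder path

inputs = processor(text=candidate_tags, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image[0]  # image-to-tag similarity scores
probs = logits.softmax(dim=-1)

for tag, p in sorted(zip(candidate_tags, probs.tolist()), key=lambda x: -x[1]):
    print(f"{tag}: {p:.3f}")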

oovm (Contributor) commented Mar 12, 2023

Does adding --ema and --simplify-unet improve the generation quality?


Uploaded the Anything models at: oovm/anything

# anything-v2.1-fp16
rm -rf ./anything-v2.1-fp16
wget https://huggingface.co/swl-models/anything-v2.1/resolve/main/anything-V2.1-pruned-fp16.safetensors -c
python scripts/sd2pyke.py ./anything-V2.1-pruned-fp16.safetensors ./anything-v2.1-fp16 --fp16 -C v1-inference.yaml
# anything-v3.0-fp16
rm -rf ./anything-v3.0-fp16
wget https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/anything-v3.0-pruned-fp16.safetensors -c
python scripts/sd2pyke.py ./anything-v3.0-pruned-fp16.safetensors ./anything-v3.0-fp16 --fp16 -C v1-inference.yaml
# anything-v4.0-fp16
rm -rf ./anything-v4.0-fp16
wget https://huggingface.co/andite/anything-v4.0/resolve/main/anything-v4.0-pruned-fp16.safetensors -c
python scripts/sd2pyke.py ./anything-v4.0-pruned-fp16.safetensors ./anything-v4.0-fp16 --fp16 -C v1-inference.yaml
# anything-v4.5-fp16
rm -rf ./anything-v4.5-fp16
wget https://huggingface.co/andite/anything-v4.0/resolve/main/anything-v4.5-pruned-fp16.ckpt -c
python scripts/sd2pyke.py ./anything-v4.5-pruned-fp16.ckpt ./anything-v4.5-fp16 --fp16 -C v1-inference.yaml

Uploaded the AOM (AbyssOrangeMix) models at: oovm/aom

# aom-v1.0-safe-fp16
rm -rf ./aom-v1.0-safe-fp16
wget https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix/AbyssOrangeMix_base.ckpt -c
python scripts/sd2pyke.py ./AbyssOrangeMix_base.ckpt ./aom-v1.0-safe-fp16 --fp16 -C v1-inference.yaml
# aom-v1.0-soft-fp16
rm -rf ./aom-v1.0-soft-fp16
# aom-v1.0-hardcore-fp16
rm -rf ./aom-v1.0-hard-fp16
wget https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors -c
python scripts/sd2pyke.py ./AbyssOrangeMix.safetensors ./aom-v1.0-hard-fp16 --fp16 -C v1-inference.yaml
# aom-v2.0-safe-fp16
rm -rf ./aom-v2.0-safe-fp16
wget https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix2/AbyssOrangeMix2_sfw.safetensors -c
python scripts/sd2pyke.py ./AbyssOrangeMix2_sfw.safetensors ./aom-v2.0-safe-fp16 --fp16 -C v1-inference.yaml
# aom-v2.0-soft-fp16
rm -rf ./aom-v2.0-soft-fp16
# aom-v2.0-hardcore-fp16
rm -rf ./aom-v2.0-hard-fp16
wget https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix2/Pruned/AbyssOrangeMix2_hard_pruned_fp16_with_VAE.safetensors -c
python scripts/sd2pyke.py ./AbyssOrangeMix2_hard_pruned_fp16_with_VAE.safetensors ./aom-v2.0-hard-fp16 --fp16 -C v1-inference.yaml
# aom-v3.0-safe-fp16
rm -rf ./aom-v3.0-safe-fp16
wget https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3_orangemixs.safetensors -c
python scripts/sd2pyke.py ./AOM3_orangemixs.safetensors ./aom-v3.0-safe-fp16 --fp16 -C v1-inference.yaml

decahedron1 (Member, Author) commented

How about inferring prompt words from pictures?

I have completed the deep-danbooru inference part: oovm/deep-danbooru

The other one is CLIP inference.

Interesting, I'll have a look at deep-danbooru 🙂
By "clip inference", do you mean CLIP guidance?

Does adding --ema and --simplify-unet improve the generation quality?

--ema may or may not improve quality. I've never tested it thoroughly, and I've seen people both recommend it and advise against it for inference, so I'm not sure. In a basic test with AOM2 the results were identical, but YMMV.
--simplify-unet does not affect image quality; it just makes the UNet run faster.
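
For context on the --ema question, here is a minimal sketch (not the actual sd2pyke.py code) of what an EMA flag typically does during conversion, assuming the common CompVis checkpoint convention where EMA copies of the UNet weights live under a model_ema. prefix with the dots of the original key removed.

# Illustrative only: swap each base UNet tensor for its EMA (exponential moving
# average) copy before export. Assumes the usual CompVis key layout; the real
# sd2pyke.py logic is not shown in this thread.
import torch

state_dict = torch.load("model.ckpt", map_location="cpu")["state_dict"]

def apply_ema_weights(sd: dict) -> dict:
    out = dict(sd)
    for key in sd:
        if not key.startswith("model.diffusion_model."):
            continue
        ema_key = "model_ema." + key[len("model."):].replace(".", "")
        if ema_key in sd:
            out[key] = sd[ema_key]  # use the smoothed EMA tensor instead of the raw one
    return out

ema_sd = apply_ema_weights(state_dict)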

decahedron1 unpinned this issue on Oct 30, 2023