
Created a SplitUNet to run on 8 GB VRAM #61

Open · wants to merge 10 commits into base: develop

Conversation

neonsecret

I know the repo might be untidy, but we can decide what form the optimization should take to fit into the bigger picture.
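For readers unfamiliar with the approach, the general idea behind splitting a large UNet to fit in limited VRAM can be sketched as follows. This is an illustration of the technique only, not the PR's actual SplitUNet code; the class and block names are hypothetical.

```python
import torch

# Hedged sketch of the low-VRAM splitting idea: keep each sub-block on the
# CPU and move it onto the GPU only for the duration of its forward pass.
# Not the PR's actual implementation; names here are illustrative.
class OffloadedSequential(torch.nn.Module):
    def __init__(self, *blocks):
        super().__init__()
        self.blocks = torch.nn.ModuleList(blocks)

    def forward(self, x):
        device = "cuda" if torch.cuda.is_available() else "cpu"
        for block in self.blocks:
            block.to(device)        # load just this block into VRAM
            x = block(x.to(device))
            block.to("cpu")         # release VRAM before the next block
        return x

# Toy usage: two small layers stand in for UNet stages.
model = OffloadedSequential(torch.nn.Linear(4, 8), torch.nn.Linear(8, 4))
out = model(torch.randn(2, 4))
```

The trade-off is extra host-to-device transfer time per block in exchange for a much smaller peak VRAM footprint.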

@claychinasky

I'm getting bad results with this. For example, a photo of a car gives this:

[image: test]

Reloading t5
Encoded. Running 1st stage
100%|██████████████████████████████████████████████████| 50/50 [02:13<00:00,  2.68s/it]
Encoding prompts..
Pipelines loaded with `torch_dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for `float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference. (warning repeated three times)
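The warning in the log above fires because PyTorch lacks float16 kernels for many CPU ops, so a pipeline loaded with `torch_dtype=torch.float16` will fail if it ends up on the CPU. A minimal hedged sketch of the usual workaround, choosing the dtype based on the device:

```python
import torch

# Hedged sketch: pick a dtype/device pair that avoids the warning above.
# float16 is only used when a CUDA accelerator is actually available;
# on CPU we fall back to full float32 precision.
def pick_dtype_and_device():
    if torch.cuda.is_available():
        return torch.float16, "cuda"
    return torch.float32, "cpu"

dtype, device = pick_dtype_and_device()
x = torch.randn(2, 2, dtype=dtype, device=device)
y = x @ x  # works on either branch; a float16 matmul on CPU may not
```

When loading a diffusers pipeline, the chosen dtype would be passed as the `torch_dtype` argument to `from_pretrained`.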

@neonsecret (Author) commented May 1, 2023

> I'm getting bad results with this. For example, a photo of a car gives this: […]

Can you attach a screenshot of your interface with the input settings, please? It may have just been a bad seed.
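To rule out "just a bad seed", runs have to be made reproducible. A short hedged sketch of pinning the RNG with a seeded `torch.Generator` (diffusers pipelines accept such a generator via their `generator` argument for the same purpose):

```python
import torch

# Hedged sketch: two generators seeded identically produce identical noise,
# so re-running with the same seed isolates seed effects from real bugs.
gen_a = torch.Generator(device="cpu").manual_seed(1234)
gen_b = torch.Generator(device="cpu").manual_seed(1234)
a = torch.randn(4, generator=gen_a)
b = torch.randn(4, generator=gen_b)
```

If the output is equally broken across several fixed seeds, the seed is not the culprit.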

@neonsecret (Author)

[image: settings screenshot]

@neonsecret neonsecret mentioned this pull request May 1, 2023
@claychinasky

It is not the seed; I tried different seeds and different prompts as well.
I mostly tried the default settings, and smart100 too. Only once did I get something that resembles a photo (since it's low-res it's hard to tell), but after upscaling it in stage two, it turned into an image similar to the one I posted.

@claychinasky

And this is the `pip freeze` output:

absl-py==1.4.0
accelerate==0.17.1
addict==2.4.0
aenum==3.1.12
aiofiles==23.1.0
aiohttp==3.8.4
aiosignal==1.3.1
altair==4.2.2
antlr4-python3-runtime==4.9.3
anyio==3.6.2
appdirs==1.4.4
argcomplete==3.0.5
astroid @ file:///home/conda/feedstock_root/build_artifacts/astroid_1682340084254/work
async-timeout==4.0.2
attrs==23.1.0
autopep8 @ file:///home/conda/feedstock_root/build_artifacts/autopep8_1677845287175/work
Babel==2.12.1
basicsr==1.4.2
beautifulsoup4==4.11.2
blendmodes==2022
boltons==23.0.0
cachetools==5.3.0
certifi==2022.12.7
cffi==1.15.1
charset-normalizer==3.1.0
clean-fid==0.1.29
click==8.1.3
clip @ git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1
cmake==3.26.3
colorama @ file:///home/conda/feedstock_root/build_artifacts/colorama_1666700638685/work
contourpy==1.0.7
cssselect2==0.7.0
cycler==0.11.0
dateparser==1.1.8
deepfloyd-if @ file:///home/clay/repos/IF
deprecation==2.1.0
diffusers==0.16.1
dill @ file:///home/conda/feedstock_root/build_artifacts/dill_1666603105584/work
distlib==0.3.6
docopt==0.6.2
docstring-to-markdown @ file:///home/conda/feedstock_root/build_artifacts/docstring-to-markdown_1679424273982/work
einops==0.4.1
entrypoints==0.4
facexlib==0.3.0
fastapi==0.94.0
ffmpy==0.3.0
filelock==3.11.0
filterpy==1.4.5
flake8 @ file:///home/conda/feedstock_root/build_artifacts/flake8_1669396691980/work
flatbuffers==23.3.3
font-roboto==0.0.1
fonts==0.0.3
fonttools==4.39.3
frozenlist==1.3.3
fsspec==2023.4.0
ftfy==6.1.1
fvcore==0.1.5.post20221221
gdown==4.7.1
gfpgan==1.3.8
gitdb==4.0.10
GitPython==3.1.30
google-auth==2.17.3
google-auth-oauthlib==1.0.0
gradio==3.23.0
gradio_client==0.1.3
grpcio==1.54.0
gruut==2.3.4
gruut-ipa==0.13.0
gruut-lang-en==2.0.0
h11==0.14.0
hjson==3.1.0
httpcore==0.17.0
httpx==0.24.0
huggingface-hub==0.14.1
idna==2.10
imageio==2.28.0
imageio-ffmpeg==0.4.8
importlib-metadata==6.6.0
inflection==0.5.1
injector==0.20.1
iopath==0.1.9
isort @ file:///home/conda/feedstock_root/build_artifacts/isort_1675033873689/work
jedi @ file:///home/conda/feedstock_root/build_artifacts/jedi_1669134318875/work
Jinja2==3.1.2
jsonlines==1.2.0
jsonmerge==1.8.0
jsonschema==4.17.3
kiwisolver==1.4.4
kornia==0.6.7
lark==1.1.2
lazy-object-proxy @ file:///home/conda/feedstock_root/build_artifacts/lazy-object-proxy_1672877787898/work
lazy_loader==0.2
lightning-utilities==0.8.0
linkify-it-py==2.0.0
lit==16.0.2
llvmlite==0.39.1
lmdb==1.4.1
lpips==0.1.4
markdown-it-py==2.2.0
MarkupSafe==2.1.2
matplotlib==3.7.1
mccabe @ file:///home/conda/feedstock_root/build_artifacts/mccabe_1643049622439/work
mdit-py-plugins==0.3.3
mdurl==0.1.2
mediapipe==0.9.3.0
moviepy==1.0.3
multidict==6.0.4
mypy-extensions==1.0.0
networkx==2.8.8
num2words==0.5.12
numba==0.56.4
numpy==1.23.3
nvidia-cublas-cu11==11.10.3.66
nvidia-cuda-cupti-cu11==11.7.101
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cudnn-cu11==8.5.0.96
nvidia-cufft-cu11==10.9.0.58
nvidia-curand-cu11==10.2.10.91
nvidia-cusolver-cu11==11.4.0.1
nvidia-cusparse-cu11==11.7.4.91
nvidia-nccl-cu11==2.14.3
nvidia-nvtx-cu11==11.7.91
omegaconf==2.2.3
open-clip-torch @ git+https://github.com/mlfoundations/open_clip.git@bb6e834e9c70d9c27d0dc3ecedeebeaeb1ffad6b
opencv-contrib-python==4.7.0.72
opencv-python==4.7.0.72
orjson==3.8.10
packaging==23.1
pandas==2.0.1
parso @ file:///home/conda/feedstock_root/build_artifacts/parso_1638334955874/work
peewee==3.16.0
piexif==1.1.3
Pillow==9.4.0
pkgconfig==1.5.5
platformdirs==3.2.0
pluggy @ file:///home/conda/feedstock_root/build_artifacts/pluggy_1667232663820/work
portalocker==2.7.0
proglog==0.1.10
protobuf==3.20.0
psutil==5.9.4
pyasn1==0.5.0
pyasn1-modules==0.3.0
pycodestyle @ file:///home/conda/feedstock_root/build_artifacts/pycodestyle_1669306857274/work
pycparser==2.21
pydantic==1.10.7
pydocstyle @ file:///home/conda/feedstock_root/build_artifacts/pydocstyle_1673997487070/work
pydub==0.25.1
pyflakes @ file:///home/conda/feedstock_root/build_artifacts/pyflakes_1669319921641/work
pylint @ file:///home/conda/feedstock_root/build_artifacts/pylint_1682377695563/work
pyparsing==3.0.9
pyre-extensions==0.0.23
pyrsistent==0.19.3
PySocks==1.7.1
python-crfsuite==0.9.9
python-dateutil==2.8.2
python-lsp-jsonrpc @ file:///home/conda/feedstock_root/build_artifacts/python-lsp-jsonrpc_1618530352985/work
python-lsp-server @ file:///home/conda/feedstock_root/build_artifacts/python-lsp-server-meta_1680545288995/work
python-multipart==0.0.6
pytoolconfig @ file:///home/conda/feedstock_root/build_artifacts/pytoolconfig_1675124745143/work
pytorch-lightning==1.9.4
pytz==2023.3
pytz-deprecation-shim==0.1.0.post0
PyWavelets==1.4.1
PyYAML==6.0
realesrgan==0.3.0
regex==2023.3.23
requests==2.29.0
requests-oauthlib==1.3.1
resize-right==0.0.2
rope @ file:///home/conda/feedstock_root/build_artifacts/rope_1674988456931/work
rsa==4.9
Rx==3.2.0
safetensors==0.3.0
scikit-image==0.19.2
semantic-version==2.10.0
Send2Trash==1.8.0
sentencepiece==0.1.98
six==1.16.0
smmap==5.0.0
sniffio==1.3.0
snowballstemmer @ file:///home/conda/feedstock_root/build_artifacts/snowballstemmer_1637143057757/work
sounddevice==0.4.6
soupsieve==2.4.1
starlette==0.26.1
svglib==1.5.1
tabulate==0.9.0
tb-nightly==2.13.0a20230426
tensorboard-data-server==0.7.0
termcolor==2.3.0
tifffile==2023.4.12
timm==0.6.7
tinycss2==1.2.1
tokenizers==0.13.3
toml @ file:///home/conda/feedstock_root/build_artifacts/toml_1604308577558/work
tomli @ file:///home/conda/feedstock_root/build_artifacts/tomli_1644342247877/work
tomlkit @ file:///home/conda/feedstock_root/build_artifacts/tomlkit_1679924068997/work
toolz==0.12.0
torch==1.13.1
torchdiffeq==0.2.3
torchmetrics==0.11.4
torchsde==0.2.5
torchvision==0.14.1
tqdm==4.65.0
trampoline==0.1.2
transformers==4.27.0
triton==2.0.0
typing-inspect==0.8.0
typing_extensions @ file:///home/conda/feedstock_root/build_artifacts/typing_extensions_1678559861143/work
tzdata==2023.3
tzlocal==4.3
uc-micro-py==1.0.1
ujson @ file:///home/conda/feedstock_root/build_artifacts/ujson_1675191915931/work
urllib3==1.26.15
uvicorn==0.21.1
virtualenv==20.21.0
virtualfish==2.5.5
wcwidth==0.2.6
websockets==11.0.2
Werkzeug==2.3.0
whatthepatch @ file:///home/conda/feedstock_root/build_artifacts/whatthepatch_1675090462655/work
wrapt @ file:///home/conda/feedstock_root/build_artifacts/wrapt_1677485519705/work
xformers==0.0.16
xlib==0.21
yacs==0.1.8
yapf==0.33.0
yarl==1.9.1
zipp==3.15.0
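When comparing an environment against a known-good freeze like the one above, the lines come in two shapes: ordinary `name==version` pins and `name @ source` pins (git or local paths). A small hedged helper (not part of the PR) to parse both:

```python
# Hedged helper, hypothetical: split a `pip freeze` line into (name, pin)
# so two environments can be diffed package by package.
def parse_freeze_line(line: str):
    line = line.strip()
    if " @ " in line:                    # VCS/local pins, e.g. "clip @ git+..."
        name, source = line.split(" @ ", 1)
        return name.strip(), source.strip()
    if "==" in line:                     # ordinary pins, e.g. "torch==1.13.1"
        name, version = line.split("==", 1)
        return name.strip(), version.strip()
    return line, None                    # unpinned or unrecognized

# Example, using the torch pin from the freeze above:
print(parse_freeze_line("torch==1.13.1"))  # -> ('torch', '1.13.1')
```

Diffing the parsed names and versions against the list above would quickly surface a mismatched `torch`, `diffusers`, or `xformers` version.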

@FurkanGozukara

[image: UI screenshot]

how do we get this UI?

@claychinasky

> how do we get this UI?

Run `python run_ui.py`.

@FurkanGozukara

> how do we get this UI?
>
> do `python run_ui.py`

I see you are adding it, nice.

Can you also add all available optimizations as settings in the UI, if possible? I plan to make a tutorial on using the DeepFloyd UI.

I already made a notebook tutorial:

DeepFloyd IF By Stability AI - Is It Stable Diffusion XL or Version 3? We Review and Show How To Use

[image: video thumbnail]
