
Error related to Transformers version when launching #138

Open
tokenwizard opened this issue Oct 3, 2023 · 1 comment

Comments

@tokenwizard

tokenwizard commented Oct 3, 2023

On first run, I am getting this error during startup, after everything has downloaded:
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'

A mixture of fp16 and non-fp16 filenames will be loaded.
Loaded fp16 filenames:
[unet/diffusion_pytorch_model.fp16.safetensors, safety_checker/model.fp16.safetensors, text_encoder/model.fp16-00001-of-00002.safetensors, text_encoder/model.fp16-00002-of-00002.safetensors]
Loaded non-fp16 filenames:
[watermarker/diffusion_pytorch_model.safetensors]
If this behavior is not expected, please check your folder structure.
Traceback (most recent call last):
  File "/home/tokenwizard/DeepFloydIF/./startup.py", line 6, in <module>
    stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
  File "/home/tokenwizard/DeepFloydIF/venv/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 1039, in from_pretrained
    loaded_sub_model = load_sub_model(
  File "/home/tokenwizard/DeepFloydIF/venv/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 431, in load_sub_model
    raise ImportError(
ImportError: When passing `variant='fp16'`, please make sure to upgrade your `transformers` version to at least 4.27.0.dev0

The error indicates it wants transformers 4.27.0 or newer, but the setup process installs version 4.25.1.

I installed version 4.27.0 but then got a package version warning from pip (screenshot not reproduced here).

Now when running, I get this additional package version warning:

A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'

A mixture of fp16 and non-fp16 filenames will be loaded.
Loaded fp16 filenames:
[text_encoder/model.fp16-00002-of-00002.safetensors, unet/diffusion_pytorch_model.fp16.safetensors, text_encoder/model.fp16-00001-of-00002.safetensors, safety_checker/model.fp16.safetensors]
Loaded non-fp16 filenames:
[watermarker/diffusion_pytorch_model.safetensors]
If this behavior is not expected, please check your folder structure.
The config attributes {'lambda_min_clipped': -5.1} were passed to DDPMScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
The config attributes {'encoder_hid_dim_type': 'text_proj'} were passed to UNet2DConditionModel, but are not expected and will be ignored. Please verify your config.json configuration file.
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 20.11it/s]
Traceback (most recent call last):
  File "/home/tokenwizard/DeepFloydIF/./startup.py", line 7, in <module>
    stage_1.enable_model_cpu_offload()
  File "/home/tokenwizard/DeepFloydIF/venv/lib/python3.10/site-packages/diffusers/pipelines/deepfloyd_if/pipeline_if.py", line 180, in enable_model_cpu_offload
    raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
ImportError: `enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.

After installing version 0.17.0 of accelerate, it proceeds further with the run and downloads more data/models. I will report back if it actually runs successfully. But it seems odd that the required versions of these packages are not installed by default during the setup process. FYI, I am running Arch Linux and using Python 3.10 in a venv.
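For anyone hitting the same errors: a quick standard-library-only check that the installed versions meet the minimums the two errors ask for (transformers 4.27.0, accelerate 0.17.0). The `meets_minimum` helper is just for illustration, not part of any library:

```python
from importlib.metadata import PackageNotFoundError, version

def meets_minimum(installed: str, minimum: str) -> bool:
    """Compare dotted version strings numerically (suffixes like 'dev0' count as their digits)."""
    def nums(v: str) -> list:
        parts = []
        for piece in v.split("."):
            digits = "".join(ch for ch in piece if ch.isdigit())
            parts.append(int(digits) if digits else 0)
        return parts
    return nums(installed) >= nums(minimum)

# Minimum versions taken from the two ImportErrors above.
for pkg, minimum in [("transformers", "4.27.0"), ("accelerate", "0.17.0")]:
    try:
        installed = version(pkg)
        status = "ok" if meets_minimum(installed, minimum) else "too old"
        print(f"{pkg} {installed}: {status}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```

If either package reports "too old", upgrading it in the venv (e.g. `pip install -U "transformers>=4.27.0" "accelerate>=0.17.0"`) is what got past the errors here.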

@tokenwizard
Author

Ok, after manually installing those two package versions, I have it running now.
Unfortunately, during stage III generation I am running out of memory on my 16GB card. Time to see where I can optimize.
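The memory-saving options I plan to try first, sketched below. The two `enable_*` calls are real diffusers APIs; the model ID is an assumption on my part (the upscaler the DeepFloyd IF examples load for stage III), and whether this is enough for 16GB is untested:

```python
def load_stage_3_low_memory():
    """Load stage III with diffusers' memory-saving options enabled (sketch only)."""
    import torch
    from diffusers import DiffusionPipeline

    stage_3 = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-x4-upscaler",  # model ID assumed from the IF examples
        torch_dtype=torch.float16,
    )
    # Move each submodule to the GPU only while it runs (needs accelerate >= 0.17.0).
    stage_3.enable_model_cpu_offload()
    # Compute attention in slices: lower peak VRAM at some speed cost.
    stage_3.enable_attention_slicing()
    return stage_3
```

If that is still not enough, `enable_sequential_cpu_offload()` trades more speed for even lower VRAM use.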

