```shell
#!/bin/bash
export CUDA_HOME=/usr/local/cuda-11.7/
export LIBRARY_PATH=${CUDA_HOME}/lib64
export C_INCLUDE_PATH=${CUDA_HOME}/include
python inference_lora.py \
--prompt "Close-up photo of the happy smiles on the faces of the cool man and beautiful woman as they leave the island with the treasure, sail back to the vacation beach, and begin their love story, 35mm photograph, film, professional, 4k, highly detailed." \
--negative_prompt 'noisy, blurry, soft, deformed, ugly' \
--prompt_rewrite '[Close-up photo of the Chris Evans in surprised expressions as he wear Hogwarts uniform, 35mm photograph, film, professional, 4k, highly detailed.]-*-[noisy, blurry, soft, deformed, ugly]|[Close-up photo of the TaylorSwift in surprised expressions as she wear Hogwarts uniform, 35mm photograph, film, professional, 4k, highly detailed.]-*-[noisy, blurry, soft, deformed, ugly]' \
--pretrained_sdxl_model /mnt/e/checkpoints/stable_diffusion_xl \
--lora_path '/mnt/e/lora/chris-evans.safetensors|/mnt/e/lora/TaylorSwiftSDXL.safetensors' \
--controlnet_checkpoint /mnt/e/checkpoints/controlnet_sx \
--spatial_condition './example/pose.png'
```
The traceback is below.
```
Traceback (most recent call last):
  File "/home/barraland/OMG/inference_lora.py", line 262, in <module>
    image = sample_image(
  File "/home/barraland/OMG/inference_lora.py", line 57, in sample_image
    images = pipe(
  File "/home/barraland/miniconda3/envs/OMG/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/barraland/OMG/src/pipelines/lora_pipeline.py", line 320, in __call__
    ) = self.encode_prompt(
  File "/home/barraland/miniconda3/envs/OMG/lib/python3.10/site-packages/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl.py", line 361, in encode_prompt
    text_inputs = tokenizer(
  File "/home/barraland/miniconda3/envs/OMG/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2802, in __call__
    encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
  File "/home/barraland/miniconda3/envs/OMG/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2888, in _call_one
    return self.batch_encode_plus(
  File "/home/barraland/miniconda3/envs/OMG/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 3079, in batch_encode_plus
    return self._batch_encode_plus(
  File "/home/barraland/miniconda3/envs/OMG/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 807, in _batch_encode_plus
    batch_outputs = self._batch_prepare_for_model(
  File "/home/barraland/miniconda3/envs/OMG/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 879, in _batch_prepare_for_model
    batch_outputs = self.pad(
  File "/home/barraland/miniconda3/envs/OMG/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 3286, in pad
    outputs = self._pad(
  File "/home/barraland/miniconda3/envs/OMG/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 3650, in _pad
    encoded_inputs["attention_mask"] = encoded_inputs["attention_mask"] + [0] * difference
OverflowError: cannot fit 'int' into an index-sized integer
```
I am running CUDA 11.7 (CUDA 11.3 produced a different error). I also tried upgrading transformers, with no effect. Also, my RTX 4080's 16 GB of VRAM is fully saturated when the error hits.
Thanks a lot
OMG requires approximately 36 GB of memory for inference, which exceeds the 16 GB available on an RTX 4080, so this project likely cannot run on that card.
Hello again. I am executing this script.
```shell
#!/bin/bash
export CUDA_HOME=/usr/local/cuda-11.7/
export LIBRARY_PATH=${CUDA_HOME}/lib64
export C_INCLUDE_PATH=${CUDA_HOME}/include
python inference_lora.py \
--prompt "Close-up photo of the happy smiles on the faces of the cool man and beautiful woman as they leave the island with the treasure, sail back to the vacation beach, and begin their love story, 35mm photograph, film, professional, 4k, highly detailed." \
--negative_prompt 'noisy, blurry, soft, deformed, ugly' \
--prompt_rewrite '[Close-up photo of the Chris Evans in surprised expressions as he wear Hogwarts uniform, 35mm photograph, film, professional, 4k, highly detailed.]-*-[noisy, blurry, soft, deformed, ugly]|[Close-up photo of the TaylorSwift in surprised expressions as she wear Hogwarts uniform, 35mm photograph, film, professional, 4k, highly detailed.]-*-[noisy, blurry, soft, deformed, ugly]' \
--pretrained_sdxl_model /mnt/e/checkpoints/stable_diffusion_xl \
--lora_path '/mnt/e/lora/chris-evans.safetensors|/mnt/e/lora/TaylorSwiftSDXL.safetensors' \
--controlnet_checkpoint /mnt/e/checkpoints/controlnet_sx \
--spatial_condition './example/pose.png'
```
The traceback is below.
```
Traceback (most recent call last):
  File "/home/barraland/OMG/inference_lora.py", line 262, in <module>
    image = sample_image(
  File "/home/barraland/OMG/inference_lora.py", line 57, in sample_image
    images = pipe(
  File "/home/barraland/miniconda3/envs/OMG/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/barraland/OMG/src/pipelines/lora_pipeline.py", line 320, in __call__
    ) = self.encode_prompt(
  File "/home/barraland/miniconda3/envs/OMG/lib/python3.10/site-packages/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl.py", line 361, in encode_prompt
    text_inputs = tokenizer(
  File "/home/barraland/miniconda3/envs/OMG/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2802, in __call__
    encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
  File "/home/barraland/miniconda3/envs/OMG/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2888, in _call_one
    return self.batch_encode_plus(
  File "/home/barraland/miniconda3/envs/OMG/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 3079, in batch_encode_plus
    return self._batch_encode_plus(
  File "/home/barraland/miniconda3/envs/OMG/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 807, in _batch_encode_plus
    batch_outputs = self._batch_prepare_for_model(
  File "/home/barraland/miniconda3/envs/OMG/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 879, in _batch_prepare_for_model
    batch_outputs = self.pad(
  File "/home/barraland/miniconda3/envs/OMG/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 3286, in pad
    outputs = self._pad(
  File "/home/barraland/miniconda3/envs/OMG/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 3650, in _pad
    encoded_inputs["attention_mask"] = encoded_inputs["attention_mask"] + [0] * difference
OverflowError: cannot fit 'int' into an index-sized integer
```
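For what it's worth, this particular `OverflowError` is raised by Python's list repetition, not by the GPU: when a slow tokenizer has no usable `model_max_length`, transformers falls back to a sentinel of `int(1e30)` (`VERY_LARGE_INTEGER` in `tokenization_utils_base.py`), and padding then computes `difference = max_length - current_length` and evaluates `[0] * difference` with a count far beyond `sys.maxsize`. A minimal sketch reproducing the mechanism (the sentinel value is from transformers' source; the 77-token length is just an illustrative prompt size):

```python
import sys

# transformers uses int(1e30) as a sentinel model_max_length when a
# tokenizer's real limit is unknown; padding to that "max length" then
# evaluates [0] * difference, where difference >> sys.maxsize.
VERY_LARGE_INTEGER = int(1e30)
difference = VERY_LARGE_INTEGER - 77  # e.g. a 77-token encoded prompt

assert difference > sys.maxsize  # cannot be a list length on any platform

try:
    attention_mask = [0] * difference  # same expression as in _pad
except OverflowError as exc:
    print(exc)  # cannot fit 'int' into an index-sized integer
```

If that is what is happening here, it would suggest the tokenizer is being called without an effective max length (e.g. `padding="max_length"` while `model_max_length` is unset), rather than a VRAM problem.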
I am running CUDA 11.7 (CUDA 11.3 produced a different error). I also tried upgrading transformers, with no effect. Also, my RTX 4080's 16 GB of VRAM is fully saturated when the error hits.
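Since "upgrading transformers had no effect", it may be worth confirming which versions the OMG conda env actually resolves at runtime (a stale environment can shadow an upgrade). A small stdlib-only helper for gathering that; the package list is just a suggestion:

```python
from importlib import metadata

def env_report(packages=("torch", "transformers", "diffusers")):
    """Return the installed version of each package, or 'not installed'."""
    report = {}
    for pkg in packages:
        try:
            report[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            report[pkg] = "not installed"
    return report

if __name__ == "__main__":
    for pkg, ver in env_report().items():
        print(f"{pkg}: {ver}")
```

Running this inside the `OMG` conda env and pasting the output into the issue would make the version situation unambiguous.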
Thanks a lot