
encoded_inputs["attention_mask"] = encoded_inputs["attention_mask"] + [0] * difference OverflowError: cannot fit 'int' into an index-sized integer #18

Open

mashoutsider opened this issue Feb 19, 2024 · 2 comments

@mashoutsider

Trying to run with 8 GB of VRAM.

All models appear to load as expected, and the code runs right up to the point where the image is passed into the pipeline (i.e. right up to inference).

To avoid OOM issues, I have set:

```
--vae_decoder_tiled_size=64
--vae_encoder_tiled_size=512
--latent_tiled_size=40
--latent_tiled_overlap=2
```

The issue seems to be in the tokenizer:

```
Traceback (most recent call last):
  File "/home/outsider/Desktop/coding/SeeSR/test_seesr.py", line 284, in <module>
    main(args)
  File "/home/outsider/Desktop/coding/SeeSR/test_seesr.py", line 233, in main
    image = pipeline(
  File "/home/outsider/Desktop/coding/SeeSR/utils/vaehook.py", line 440, in wrapper
    ret = fn(*args, **kwargs)
  File "/home/outsider/anaconda3/envs/sd2/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/outsider/Desktop/coding/SeeSR/pipelines/pipeline_seesr.py", line 944, in __call__
    prompt_embeds, ram_encoder_hidden_states = self._encode_prompt(
  File "/home/outsider/Desktop/coding/SeeSR/pipelines/pipeline_seesr.py", line 356, in _encode_prompt
    text_inputs = self.tokenizer(
  File "/home/outsider/anaconda3/envs/sd2/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2561, in __call__
    encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
  File "/home/outsider/anaconda3/envs/sd2/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2667, in _call_one
    return self.encode_plus(
  File "/home/outsider/anaconda3/envs/sd2/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2740, in encode_plus
    return self._encode_plus(
  File "/home/outsider/anaconda3/envs/sd2/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 652, in _encode_plus
    return self.prepare_for_model(
  File "/home/outsider/anaconda3/envs/sd2/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 3219, in prepare_for_model
    encoded_inputs = self.pad(
  File "/home/outsider/anaconda3/envs/sd2/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 3024, in pad
    encoded_inputs = self._pad(
  File "/home/outsider/anaconda3/envs/sd2/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 3409, in _pad
    encoded_inputs["attention_mask"] = encoded_inputs["attention_mask"] + [0] * difference
OverflowError: cannot fit 'int' into an index-sized integer

@Assioncreed

I also had this issue while training img2img-turbo. Have you solved it?

@chengduxiaowu

Me too.
