Description
Hi,
There is an error when I run quickstart.py, as follows:
File "/root/.cache/huggingface/modules/transformers_modules/model/modeling_chartmoe.py", line 122, in encode_img
img_embeds, atts_img, img_target = self.img2emb(image)
File "/root/.cache/huggingface/modules/transformers_modules/model/modeling_chartmoe.py", line 126, in img2emb
img_embeds = self.vision_proj(self.vit(image.to(self.device)))
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/root/.cache/huggingface/modules/transformers_modules/model/build_mlp.py", line 133, in forward
image_forward_outs = self.vision_tower(
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 1171, in forward
return self.vision_model(
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 1094, in forward
hidden_states = self.embeddings(pixel_values, interpolate_pos_encoding=interpolate_pos_encoding)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 244, in forward
raise ValueError(
ValueError: Input image size (490*490) doesn't match model (336*336).
It seems that the img_size is 490 (from config.json), but the input size of the CLIP vision tower (clip-vit-large-patch14-336) is 336. Where did I operate incorrectly? Thank you so much!
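For reference, here is a minimal sketch I used to confirm the two sizes that appear in the ValueError. The repo ids are assumptions (I'm assuming ChartMoE is the `IDEA-FinAI/chartmoe` checkpoint and the vision tower is `openai/clip-vit-large-patch14-336`); substitute your local model path and vision tower path if they differ:

```python
# Sketch: compare the image size ChartMoE's config expects with the image
# size the stock CLIP vision tower was trained on.
# NOTE: repo ids below are assumptions -- replace them with your local paths.
from transformers import AutoConfig, CLIPVisionConfig

chartmoe_cfg = AutoConfig.from_pretrained("IDEA-FinAI/chartmoe", trust_remote_code=True)
clip_cfg = CLIPVisionConfig.from_pretrained("openai/clip-vit-large-patch14-336")

# config.json asks for 490x490 inputs, while clip-vit-large-patch14-336 was
# trained at 336x336 -- the two numbers reported in the ValueError above.
print("ChartMoE img_size:", getattr(chartmoe_cfg, "img_size", "not set"))  # expected: 490
print("CLIP image_size:  ", clip_cfg.image_size)                           # expected: 336
```

Running this prints 490 and 336 for me, which matches the mismatch in the traceback.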