pretrain errors #14
Open
linchen111 opened this issue Mar 26, 2024 · 4 comments

linchen111 commented Mar 26, 2024

When I run the pretrain script, I get this error:
File "/data/lc/Multi-image/multi_token/multi_token/language_models/mistral.py", line 85, in forward
) = self.prepare_inputs_labels_for_multimodal(
File "/data/lc/Multi-image/multi_token/multi_token/language_models/base_model.py", line 69, in prepare_inputs_labels_for_multimodal
m_vals = m.forward(kwargs.get(m.name))
File "/root/miniconda3/envs/multi-image/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/data/lc/Multi-image/multi_token/multi_token/modalities/vision_clip.py", line 177, in forward
image_features.append(self.module.forward(image_batch))
File "/root/miniconda3/envs/multi-image/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/data/lc/Multi-image/multi_token/multi_token/modalities/vision_clip.py", line 40, in forward
image_forward_outs = self.image_model(
File "/root/miniconda3/envs/multi-image/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/miniconda3/envs/multi-image/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/root/miniconda3/envs/multi-image/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 925, in forward
return self.vision_model(
File "/root/miniconda3/envs/multi-image/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/miniconda3/envs/multi-image/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/root/miniconda3/envs/multi-image/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 849, in forward
hidden_states = self.embeddings(pixel_values)
File "/root/miniconda3/envs/multi-image/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/miniconda3/envs/multi-image/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(args, **kwargs)
File "/root/miniconda3/envs/multi-image/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 190, in forward
patch_embeds = self.patch_embedding(pixel_values.to(dtype=target_dtype)) # shape = [
, width, grid, grid]
File "/root/miniconda3/envs/multi-image/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/miniconda3/envs/multi-image/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/root/miniconda3/envs/multi-image/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 460, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/root/miniconda3/envs/multi-image/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: weight should have at least three dimension
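
For context on what this error means: `F.conv2d` raises it when the weight tensor it receives has fewer than three dimensions, i.e. the CLIP patch-embedding `Conv2d` weight is no longer the expected 4-D `[out_channels, in_channels, kH, kW]` tensor. With a ZeRO-3 config this can happen when DeepSpeed partitions parameters into flat shards and a module runs without its parameters being gathered. A minimal sketch reproducing the same failure (hypothetical shapes, not code from this repo):

```python
import torch
import torch.nn.functional as F

# A healthy CLIP patch embedding has a 4-D conv weight: [width, 3, patch, patch]
conv = torch.nn.Conv2d(3, 1024, kernel_size=14, stride=14, bias=False)
x = torch.randn(1, 3, 224, 224)
print(F.conv2d(x, conv.weight, None, conv.stride).shape)  # works: 4-D weight

# Simulate a weight flattened into a 1-D shard, as a ZeRO-3-partitioned
# parameter would effectively appear if used without being gathered:
flat = conv.weight.detach().flatten()
F.conv2d(x, flat, None, conv.stride)
# RuntimeError: weight should have at least three dimensions
```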

sshh12 (Owner) commented Mar 27, 2024

What's the command and dataset you are using?

linchen111 (Author) commented:

> What's the command and dataset you are using?

```
CUDA_VISIBLE_DEVICES=0 deepspeed scripts/train_model.py \
    --model_name_or_path /data/lc/Multi-image/Model/Mistral-7B-Instruct-v0.1 \
    --model_cls MistralLMMForCausalLM \
    --modality_builder vision_clip \
    --dataset_path /data/lc/Multi-image/data/llava_pretrain_data/ok_data \
    --output_dir /data/lc/Multi-image/Model/my_lmm_pretrain \
    --pretrain_projectors \
    --lora_enable True \
    --num_train_epochs 1 \
    --gradient_checkpointing True \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 32 \
    --model_max_length 1024 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 2048 \
    --save_total_limit 1 \
    --learning_rate 1e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --dataloader_num_workers 2 \
    --logging_steps 1 \
    --deepspeed ./configs/zero3_offload.json
```

(The dataset at `--dataset_path` was built with prepare-data.py.)
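
Since the run uses `./configs/zero3_offload.json`, it may be worth checking whether ZeRO-3 has partitioned the CLIP tower's parameters and they are being used ungathered; a partitioned parameter reports an empty shape outside a gather, which would produce exactly this conv-weight error. A hypothetical probe (assumes the `model` object from train_model.py is in scope just before the failing forward):

```python
# Hypothetical probe: under DeepSpeed ZeRO-3, a partitioned parameter
# reports shape torch.Size([0]) until it is gathered for compute;
# ds_shape (an attribute DeepSpeed adds) still records the original shape.
for name, p in model.named_parameters():
    if "patch_embedding" in name:
        print(name, tuple(p.shape), getattr(p, "ds_shape", None))
```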

spydaz commented Mar 28, 2024

Please create a Jupyter notebook for training on Google Colab. I was able to merge the multi LoRA into my Mistral model, essentially disregarding the LoRA, but how do I train the model to push the data through the new merged tensor model? I still have the components separate so I can reload from the base model and the LoRA.

To merge with my own model I had to clone the LoRA and change its base_model value (mine is also a Mistral, so it has the same tensor layout); it merged with Unsloth with no issues (see the sketch below).

I just need a Jupyter notebook to train with Unsloth, with the dataset on Hugging Face, for the multi input.

I found that when inputting an image in LM Studio it sent 5 or 6 inputs; I guess it was looking for each input type: image, doc, sound, video? I tried it on Kobold and it still worked fine as an LLM, and when I entered an image it was using LLaVA in the background to encode it, but the context was too small, so I think an image of only 512x512 might work?

It's only the training method that's missing.
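
For the LoRA-merging step described above, a minimal sketch using the peft library (the adapter path is hypothetical; this is the generic peft flow, not a script from this repo):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, attach the LoRA adapter, then bake the adapter
# weights into the base tensors and save the standalone merged model.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
model = PeftModel.from_pretrained(base, "path/to/multi-lora")  # hypothetical adapter path
merged = model.merge_and_unload()
merged.save_pretrained("mistral-7b-multi-merged")
```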

sshh12 (Owner) commented Apr 2, 2024

@linchen111 I'm curious whether you're able to run the CLIP demo code with one of your example images:

https://huggingface.co/docs/transformers/model_doc/clip

```python
from PIL import Image
import requests

from transformers import CLIPProcessor, CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1)  # we can take the softmax to get the label probabilities
```
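
A check even closer to the failing path is to run only the CLIP vision tower on a local image, mirroring the call in vision_clip.py (the checkpoint name below is an assumption; substitute whatever vision_clip actually loads):

```python
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModel

# Assumed checkpoint; replace with the one vision_clip loads.
vision = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14-336")
proc = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14-336")

image = Image.open("example.jpg")  # replace with one of your dataset images
pixel_values = proc(images=image, return_tensors="pt").pixel_values
print(pixel_values.shape)  # expected: torch.Size([1, 3, 336, 336])
print(vision(pixel_values).last_hidden_state.shape)
```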
