
run demo.py error #35

Closed
HonestyBrave opened this issue Apr 18, 2023 · 2 comments

Comments

@HonestyBrave

Hi,
Thank you for your great project! But when I follow your guide, it raises an error.
My environment is an A100 machine with 4 cards.
The error is:

(minigpt4) shenjh@chintAI03:~/github/MiniGPT-4$ python demo.py --cfg-path eval_configs/minigpt4_eval.yaml
/home/shenjh/anaconda3/envs/minigpt4/lib/python3.9/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: libtorch_cuda_cu.so: cannot open shared object file: No such file or directory
warn(f"Failed to load image Python extension: {e}")
Initializing Chat
Loading VIT
Loading VIT Done
Loading Q-Former
Loading Q-Former Done
Loading LLAMA
Traceback (most recent call last):
File "/home/shenjh/github/MiniGPT-4/demo.py", line 57, in
model = model_cls.from_config(model_config).to('cuda:0')
File "/home/shenjh/github/MiniGPT-4/minigpt4/models/mini_gpt4.py", line 241, in from_config
model = cls(
File "/home/shenjh/github/MiniGPT-4/minigpt4/models/mini_gpt4.py", line 85, in init
self.llama_tokenizer = LlamaTokenizer.from_pretrained(llama_model, use_fast=False)
File "/home/shenjh/anaconda3/envs/minigpt4/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1770, in from_pretrained
resolved_vocab_files[file_id] = cached_file(
File "/home/shenjh/anaconda3/envs/minigpt4/lib/python3.9/site-packages/transformers/utils/hub.py", line 409, in cached_file
resolved_file = hf_hub_download(
File "/home/shenjh/anaconda3/envs/minigpt4/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 112, in _inner_fn
validate_repo_id(arg_value)
File "/home/shenjh/anaconda3/envs/minigpt4/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 160, in validate_repo_id
raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/path/to/vicuna/weights/'. Use repo_type argument if needed.


Could you give some suggestions?

@TsuTikgiau
Collaborator

Thanks for your interest! It looks like you didn't set the path of the Vicuna weights, as I see the path placeholder '/path/to/vicuna/weights' in your error message. Please refer to "2. Prepare the pretrained Vicuna weights" in the instruction section of the README for the setup details. Thanks!
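
For reference, the value that has to replace the '/path/to/vicuna/weights/' placeholder is the llama_model entry in the model config that demo.py loads (the traceback above shows it being passed to LlamaTokenizer.from_pretrained). A minimal sketch of the edit, assuming the prepared Vicuna weights were saved under /home/user/vicuna-13b; the exact config file and local path depend on your setup and are described in the README step mentioned above:

model:
  llama_model: "/home/user/vicuna-13b"   # example path; point this at the folder produced in step 2 of the README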

@wahab4321

Hi, it gives me the error below, can anyone guide me?

PS C:\Users\OSAID'$ laptop'$\Desktop\minigpt4> & "C:/Program Files/Python310/python.exe" "c:/Users/OSAID'$ laptop'$/Desktop/minigpt4/app.py"
Initializing Chat
Loading VIT
Loading VIT Done
Loading Q-Former
Loading Q-Former Done
Loading LLAMA
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ c:\Users\OSAID'$ laptop'$\Desktop\minigpt4\app.py:64 in <module> │
│ │
│ 61 │
│ 62 model_config = cfg.model_cfg │
│ 63 model_cls = registry.get_model_class(model_config.arch) │
│ ❱ 64 model = model_cls.from_config(model_config).to('cuda:0') │
│ 65 │
│ 66 vis_processor_cfg = cfg.datasets_cfg.cc_align.vis_processor.train │
│ 67 vis_processor = registry.get_processor_class(vis_processor_cfg.name).from_config(vis_pro │
│ │
│ c:\Users\OSAID'$ laptop'$\Desktop\minigpt4\minigpt4\models\mini_gpt4.py:239 in from_config │
│ │
│ 236 │ │ max_txt_len = cfg.get("max_txt_len", 32) │
│ 237 │ │ end_sym = cfg.get("end_sym", '\n') │
│ 238 │ │ │
│ ❱ 239 │ │ model = cls( │
│ 240 │ │ │ vit_model=vit_model, │
│ 241 │ │ │ q_former_model=q_former_model, │
│ 242 │ │ │ img_size=img_size, │
│ │
│ c:\Users\OSAID'$ laptop'$\Desktop\minigpt4\minigpt4\models\mini_gpt4.py:90 in __init__ │
│ │
│ 87 │ │ print('Loading Q-Former Done') │
│ 88 │ │ │
│ 89 │ │ print('Loading LLAMA') │
│ ❱ 90 │ │ self.llama_tokenizer = LlamaTokenizer.from_pretrained('Vision-CAIR/vicuna', use_ │
│ 91 │ │ self.llama_tokenizer.pad_token = self.llama_tokenizer.eos_token │
│ 92 │ │ │
│ 93 │ │ if llama_cache_dir: │
│ │
│ C:\Program Files\Python310\lib\os.py:679 in __getitem__ │
│ │
│ 676 │ │ │ value = self._data[self.encodekey(key)] │
│ 677 │ │ except KeyError: │
│ 678 │ │ │ # raise KeyError with the original key value │
│ ❱ 679 │ │ │ raise KeyError(key) from None │
│ 680 │ │ return self.decodevalue(value) │
│ 681 │ │
│ 682 │ def __setitem__(self, key, value): │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
KeyError: 'API_TOKEN'
PS C:\Users\OSAID'$ laptop'$\Desktop\minigpt4>
