
[BUG] FileNotFoundError: Could not find model in TheBloke/WizardLM-*-uncensored-GPTQ #133

Closed
flexchar opened this issue Jun 3, 2023 · 25 comments
Labels
bug Something isn't working

Comments

@flexchar

flexchar commented Jun 3, 2023

Describe the bug
Unable to load model directly from the repository using the example in README.md:

https://github.com/PanQiWei/AutoGPTQ/blob/810ed4de66e14035cafa938968633c23d57a0d79/README.md?plain=1#L166

Software version

Operating System: MacOS 13.3.1
CUDA Toolkit: None
Python: Python 3.10.11
AutoGPTQ: 0.2.1
PyTorch: 2.1.0.dev20230520
Transformers: 4.30.0.dev0
Accelerate: 0.20.0.dev0

To Reproduce
Running this script causes the error:

from transformers import AutoTokenizer, TextGenerationPipeline
from auto_gptq import AutoGPTQForCausalLM

MODEL = "TheBloke/WizardLM-7B-uncensored-GPTQ"

import logging

logging.basicConfig(
    format="%(asctime)s %(levelname)s [%(name)s] %(message)s", level=logging.INFO, datefmt="%Y-%m-%d %H:%M:%S"
)

# device = "cuda:0" 
device = "mps"

tokenizer = AutoTokenizer.from_pretrained(MODEL, use_fast=True)
# download quantized model from Hugging Face Hub and load to the first GPU
model = AutoGPTQForCausalLM.from_quantized(MODEL, 
        device=device, 
        use_safetensors=True,
        use_triton=False)

# inference with model.generate
print(tokenizer.decode(model.generate(**tokenizer("auto_gptq is", return_tensors="pt").to(model.device))[0]))

Expected behavior
I expect the model to be downloaded from Hugging Face and run as specified in the README.

Error output:

python scripts/auto-gptq-test.py
Downloading (…)lve/main/config.json: 100%|███████████████████████████| 552/552 [00:00<00:00, 1.08MB/s]
Downloading (…)quantize_config.json: 100%|██████████████████████████| 57.0/57.0 [00:00<00:00, 175kB/s]
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /Users/luke/dev/tg-app/scripts/auto-gptq-test.py:19 in <module>                                  │
│                                                                                                  │
│   16                                                                                             │
│   17 tokenizer = AutoTokenizer.from_pretrained(MODEL, use_fast=True)                             │
│   18 # download quantized model from Hugging Face Hub and load to the first GPU                  │
│ ❱ 19 model = AutoGPTQForCausalLM.from_quantized(MODEL,                                           │
│   20 │   │   # model_name_or_path="WizardLM-13B-Uncensored-GPTQ-4bit.act-order",                 │
│   21 │   │   device=device,                                                                      │
│   22 │   │   use_safetensors=True,                                                               │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/auto_gptq/modeling/auto.py:82 in from_quantized       │
│                                                                                                  │
│    79 │   │   model_type = check_and_get_model_type(save_dir or model_name_or_path, trust_remo   │
│    80 │   │   quant_func = GPTQ_CAUSAL_LM_MODEL_MAP[model_type].from_quantized                   │
│    81 │   │   keywords = {key: kwargs[key] for key in signature(quant_func).parameters if key    │
│ ❱  82 │   │   return quant_func(                                                                 │
│    83 │   │   │   model_name_or_path=model_name_or_path,                                         │
│    84 │   │   │   save_dir=save_dir,                                                             │
│    85 │   │   │   device_map=device_map,                                                         │
│                                                                                                  │
│ /opt/homebrew/lib/python3.10/site-packages/auto_gptq/modeling/_base.py:698 in from_quantized     │
│                                                                                                  │
│   695 │   │   │   │   │   break                                                                  │
│   696 │   │                                                                                      │
│   697 │   │   if resolved_archive_file is None: # Could not find a model file to use             │
│ ❱ 698 │   │   │   raise FileNotFoundError(f"Could not find model in {model_name_or_path}")       │
│   699 │   │                                                                                      │
│   700 │   │   model_save_name = resolved_archive_file                                            │
│   701                                                                                            │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
FileNotFoundError: Could not find model in TheBloke/WizardLM-7B-uncensored-GPTQ

Additional context

I've also tried providing model_name_or_path as noted in #91

MODEL_FILE = "WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order"
model = AutoGPTQForCausalLM.from_quantized(MODEL, 
        model_name_or_path=MODEL_FILE,
        device=device, 
        use_safetensors=True,
        use_triton=False)

But then I get the following:

python scripts/auto-gptq-test.py
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /Users/luke/dev/tg-app/scripts/auto-gptq-test.py:19 in <module>                                  │
│                                                                                                  │
│   16                                                                                             │
│   17 tokenizer = AutoTokenizer.from_pretrained(MODEL, use_fast=True)                             │
│   18 # download quantized model from Hugging Face Hub and load to the first GPU                  │
│ ❱ 19 model = AutoGPTQForCausalLM.from_quantized(MODEL,                                           │
│   20 │   │   model_name_or_path=MODEL_FILE,                                                      │
│   21 │   │   device=device,                                                                      │
│   22 │   │   use_safetensors=True,                                                               │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: AutoGPTQForCausalLM.from_quantized() got multiple values for argument 'model_name_or_path'

Perhaps @TheBloke you could chime in :)

@flexchar flexchar added the bug Something isn't working label Jun 3, 2023
@flexchar flexchar changed the title [BUG] FileNotFoundError: Could not find model in TheBloke/WizardLM-7B-uncensored-GPTQ [BUG] FileNotFoundError: Could not find model in TheBloke/WizardLM-*-uncensored-GPTQ Jun 3, 2023
@crazycoderF12

Ahh... it's not a bug, my friend.
Just pass the repo ID to model_name_or_path and MODEL_FILE to the model_basename param.

That should solve it. Feel free to close this issue once it's resolved.
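
Concretely, that call would look something like this (a minimal sketch, with the values taken from the report above):

from auto_gptq import AutoGPTQForCausalLM

# Repo ID goes to the first argument (model_name_or_path);
# the quantized file's base name (no extension) goes to model_basename.
model = AutoGPTQForCausalLM.from_quantized(
    "TheBloke/WizardLM-7B-uncensored-GPTQ",
    model_basename="WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order",
    device="cuda:0",
    use_safetensors=True,
    use_triton=False,
)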

@TheBloke
Contributor

TheBloke commented Jun 3, 2023

Yeah, you need model_basename. Most of my models (all except the recent Falcon ones, which were made with AutoGPTQ) use a custom model name, and you need to tell AutoGPTQ what it is.

This can be specified with, e.g.:

model_basename="WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order"

I was going to extend quantize_config.json to list this name so that HF Hub downloads could handle it automatically, but I've not had time to look at it yet; I've been so busy with models and support.
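
As a rough sketch of that idea (hypothetical contents; the exact key name is an assumption on my part, not the file as it exists today):

# quantize_config.json would carry the weights basename alongside the
# quantization settings, so hub downloads could resolve the file
# automatically. Shown here as a Python dict for illustration.
quantize_config = {
    "bits": 4,
    "group_size": 128,
    "desc_act": False,
    "model_file_base_name": "WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order",  # assumed key name
}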

This code will work:

from transformers import AutoTokenizer, TextGenerationPipeline
from auto_gptq import AutoGPTQForCausalLM

MODEL = "TheBloke/WizardLM-7B-uncensored-GPTQ"
model_basename ="WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order"

import logging

logging.basicConfig(
    format="%(asctime)s %(levelname)s [%(name)s] %(message)s", level=logging.INFO, datefmt="%Y-%m-%d %H:%M:%S"
)

device = "cuda:0"

tokenizer = AutoTokenizer.from_pretrained(MODEL, use_fast=True)
# download quantized model from Hugging Face Hub and load to the first GPU
model = AutoGPTQForCausalLM.from_quantized(MODEL,
        model_basename=model_basename,
        device=device,
        use_safetensors=True,
        use_triton=False)

# inference with model.generate
prompt = "Tell me about AI"
prompt_template=f'''### Human: {prompt}
### Assistant:'''

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=256, min_new_tokens=100)
print(tokenizer.decode(output[0]))

Output:

(pytorch2)  tomj@a10:/home/tomj $ python test_auto.py
Downloading (…)okenizer_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 727/727 [00:00<00:00, 6.93MB/s]
Downloading tokenizer.model: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 500k/500k [00:00<00:00, 107MB/s]
Downloading (…)/main/tokenizer.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.84M/1.84M [00:00<00:00, 7.54MB/s]
Downloading (…)in/added_tokens.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 21.0/21.0 [00:00<00:00, 259kB/s]
Downloading (…)cial_tokens_map.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 96.0/96.0 [00:00<00:00, 1.18MB/s]
Downloading (…)lve/main/config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 552/552 [00:00<00:00, 6.13MB/s]
Downloading (…)quantize_config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 57.0/57.0 [00:00<00:00, 657kB/s]
Downloading (…)ct-order.safetensors: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.89G/3.89G [00:14<00:00, 271MB/s]
2023-06-03 14:52:28 INFO [auto_gptq.modeling._base] lm_head not been quantized, will be ignored when make_quant.
2023-06-03 14:52:28 WARNING [accelerate.utils.modeling] The safetensors archive passed at /home/tomj/.cache/huggingface/hub/models--TheBloke--WizardLM-7B-uncensored-GPTQ/snapshots/cc635a081c838a1e50cbd290dd08dd561ad7edf7/WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors does not contain metadata. Make sure to save your model with the `save_pretrained` method. Defaulting to 'pt' metadata.
2023-06-03 14:52:30 WARNING [auto_gptq.nn_modules.fused_llama_mlp] skip module injection for FusedLlamaMLPForQuantizedModel not support integrate without triton yet.
<s> ### Human: Tell me about AI
### Assistant: Sure, I'd be happy to help you with that. AI stands for Artificial Intelligence, and it refers to the development of computer systems that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and natural language understanding.
### Human: That's interesting. So, how does AI work?
### Assistant: AI systems use algorithms and machine learning to analyze data and make predictions or decisions based on that data. They can also learn from experience and adapt to new information, which makes them increasingly effective over time.
### Human: What are some examples of AI in use today?
### Assistant: There are many examples of AI in use today, including virtual assistants like Siri or Alexa, image recognition software like Google Image Search, natural language processing software like Microsoft Bing, and autonomous vehicles like Tesla.
### Human: That's fascinating. How does AI impact our lives?
### Assistant: AI has the potential to impact our lives in many ways, from improving healthcare and education to enhancing transportation and entertainment.
(pytorch2)  tomj@a10:/home/tomj $

However

You can't use AutoGPTQ with device='mps'. Only NVidia CUDA GPUs are supported.

It may work to run on CPU only, but it will be very very slow.
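
If you do want to try the CPU-only route anyway, a minimal sketch (unverified; whether it works depends on the AutoGPTQ build, and it will be very slow):

# Hypothetical CPU-only fallback on a machine with no CUDA device.
model = AutoGPTQForCausalLM.from_quantized(
    MODEL,
    model_basename="WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order",
    device="cpu",
    use_safetensors=True,
    use_triton=False,
)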

@flexchar
Author

flexchar commented Jun 3, 2023

Thank you Tom. You're doing a huge favour to the community by providing all these quantized models. And thank you guys, that was a stupid mistake of mine, missing the parameter. I should have looked at this comment from the get-go: #91 (comment)

@dbrultra

dbrultra commented Sep 13, 2023

Not to re-open a closed issue, since this doesn't seem to be a bug, but I'm getting the same error:
FileNotFoundError: Could not find model in TheBloke/WizardLM-30B-Uncensored-GPTQ

and here is how I have it defined in my script:
model_id = "TheBloke/WizardLM-30B-Uncensored-GPTQ"
model_basename = "WizardLM-30B-Uncensored-GPTQ-4bit.act-order.safetensors"

I'm assuming the basename is wrong, but I can't identify what the correct basename might be.

@TheBloke
Contributor

TheBloke commented Sep 13, 2023

Remove model_basename, or set its value to "model". The safetensors file is now called model.safetensors, and "model" is now set as model_basename in quantize_config.json, so you don't need to pass model_basename to .from_quantized() any more.
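
In other words, with current repos the load reduces to something like this (a sketch assuming the repo ships model.safetensors and records the basename in quantize_config.json):

from auto_gptq import AutoGPTQForCausalLM

# No model_basename needed: it is read from quantize_config.json,
# and the weights file is model.safetensors.
model = AutoGPTQForCausalLM.from_quantized(
    "TheBloke/WizardLM-30B-Uncensored-GPTQ",
    device="cuda:0",
    use_safetensors=True,
)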

@hzgdeerHo

I want to fine-tune a GPTQ model with LoRA; the relevant code is as follows:
model_name_or_path = "TheBloke/StableBeluga2-70B-GPTQ"
model_basename = "gptq-3bit-128g-actorder_True"
tokenizer_name_or_path = "TheBloke/StableBeluga2-70B-GPTQ"
quantize_config = BaseQuantizeConfig.from_pretrained(model_name_or_path)
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    model_basename=model_basename,
    revision=model_basename,
    # revision="main",
    use_safetensors=args.use_safetensors,
    use_triton=args.use_triton,
    device="cuda:0",
    trainable=True,
    inject_fused_attention=True,
    inject_fused_mlp=False,
    quantize_config=quantize_config,
)

And I always get this error:

Could not find model in TheBloke/StableBeluga2-70B-GPTQ
File "/home/ubuntu/qlora/lora_finetune_GPTQ.py", line 50, in <module>
model = AutoGPTQForCausalLM.from_quantized(
FileNotFoundError: Could not find model in TheBloke/StableBeluga2-70B-GPTQ

If I change model_name_or_path and model_basename to point to another model, it works normally. It also works normally if I use the model for inference like the following code:

# run_huggingface_login()
# Initialize the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("TheBloke/StableBeluga2-70B-GPTQ", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("TheBloke/StableBeluga2-70B-GPTQ",
                                             revision="gptq-3bit-128g-actorder_True",
                                             torch_dtype=torch.float16,
                                             low_cpu_mem_usage=True,
                                             device_map="auto")

@TheBloke
Contributor

Do not pass model_basename any more. It's been unnecessary for the last few weeks.

For Transformers support, all models were renamed to model.safetensors, which means the correct value for model_basename is now "model". So you could pass model_basename = "model".

But in fact you don't need to pass it at all; the correct value is now stored in quantize_config.json.

So just remove model_basename=... from your .from_quantized() call.

@hzgdeerHo

@TheBloke Thanks for the help!

@hzgdeerHo

I changed the code, but it does not make any difference:

model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    # model_basename=model_basename,
    revision=model_basename,

Exception occurred: FileNotFoundError
Could not find model in TheBloke/StableBeluga2-70B-GPTQ
File "/home/ubuntu/qlora/lora_hzg_finetune_GPTQ.py", line 50, in <module>
model = AutoGPTQForCausalLM.from_quantized(
FileNotFoundError: Could not find model in TheBloke/StableBeluga2-70B-GPTQ

@hzgdeerHo

@TheBloke Thanks!

@hzgdeerHo

I can install auto-gptq==0.4.2 normally, but I cannot install auto-gptq from source. Is that related to the problem?

@TheBloke
Contributor

TheBloke commented Sep 16, 2023

revision=model_basename is wrong.

from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/StableBeluga2-70B-GPTQ"
branch = "main"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        use_safetensors=True,
        revision=branch,
        trust_remote_code=False,
        device="cuda:0",
        quantize_config=None)

Change branch = "main" if you want to use one of the other GPTQ parameters, like gptq-4bit-32g-actorder_True.

@hzgdeerHo

Following this 👍 #133 (comment), I still get this problem:

Exception occurred: FileNotFoundError
Could not find model in TheBloke/StableBeluga2-70B-GPTQ
File "/home/ubuntu/qlora/lora_hzg_finetune_GPTQ.py", line 50, in <module>
model = AutoGPTQForCausalLM.from_quantized(
FileNotFoundError: Could not find model in TheBloke/StableBeluga2-70B-GPTQ

@hzgdeerHo

model_name_or_path = "TheBloke/StableBeluga2-70B-GPTQ"
branch = "main"
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    # model_basename=model_basename,
    revision=branch,
    # revision="main",
@hzgdeerHo

hzgdeerHo commented Sep 16, 2023

model_name_or_path = "TheBloke/StableBeluga2-70B-GPTQ"
branch = "main"
tokenizer_name_or_path = "TheBloke/StableBeluga2-70B-GPTQ"

peft_config = GPTQLoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    task_type=TaskType.CAUSAL_LM,
    inference_mode=False,
)

tokenizer = AutoTokenizer.from_pretrained(
    tokenizer_name_or_path,
    use_fast=not args.use_slow,
    unk_token="<unk>",
    bos_token="<s>",
    eos_token="</s>",
)
if not tokenizer.pad_token_id:
    tokenizer.pad_token_id = tokenizer.eos_token_id

quantize_config = BaseQuantizeConfig.from_pretrained(model_name_or_path)
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    # model_basename=model_basename,
    revision=branch,
    # revision="main",
    use_safetensors=args.use_safetensors,
    use_triton=args.use_triton,
    device="cuda:0",
    trainable=True,
    inject_fused_attention=True,
    inject_fused_mlp=False,
    quantize_config=quantize_config,
)
model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)
model = get_gptq_peft_model(model, peft_config=peft_config, auto_find_all_linears=True, train_mode=True)
model.print_trainable_parameters()

data = load_dataset("Abirate/english_quotes")

data = load_dataset('/home/ubuntu/qlora/XXXXX/output-direct-input-output-format.jsonl')
data = data.map(lambda samples: tokenizer(samples["quote"]), batched=True)
data = data['train'].train_test_split(train_size=0.9, test_size=0.1)

tokenizer.pad_token = tokenizer.eos_token
trainer = transformers.Trainer(
    model=model,
    train_dataset=data["train"],
    eval_dataset=data['test'],
    args=transformers.TrainingArguments(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        warmup_steps=2,
        max_steps=3,
        learning_rate=2e-2,
        fp16=True,
        logging_steps=1,
        output_dir="outputs",
        optim="paged_adamw_8bit",
        evaluation_strategy='steps',
        eval_steps=1,
    ),
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False
trainer.train()

@hzgdeerHo

Can you help me? Thanks, @TheBloke

@TheBloke
Contributor

You'll need to add inject_fused_attention=False or disable_exllama=True as well.

This code works fine to load the model and run inference on it - I just tested it:

from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/StableBeluga2-70B-GPTQ"
branch = "main"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        use_safetensors=True,
        revision=branch,
        inject_fused_attention=False,
        trust_remote_code=False,
        device="cuda:0",
        quantize_config=None)

prompt = "Tell me about AI"
prompt_template=f'''### System:
You are a helpful assistant

### User:
{prompt}

### Assistant:
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, do_sample=True, temperature=0.7, max_new_tokens=128)
print(tokenizer.decode(output[0]))

Output:

*** Generate:
<s> ### System:
You are a helpful assistant

### User:
Tell me about AI

### Assistant:
 AI, or artificial intelligence, is the simulation of human intelligence in machines. These machines are programmed to think and learn like humans, making them capable of performing tasks that typically require human intelligence, such as understanding natural language and recognizing patterns. AI is used in various applications, including image and speech recognition, natural language processing, robotics, and decision-making systems. It has the potential to revolutionize many industries and fields, from healthcare to finance, by automating processes, improving accuracy, and enhancing efficiency. However, the development and use of AI also raise ethical concerns, such

I have no experience of GPTQ training, so can't help with that. If you want to train a model, you could also try doing it through Transformers, rather than through AutoGPTQ. Here is a Colab notebook showing all the Transformers GPTQ methods, including PeFT training: https://colab.research.google.com/drive/1_TIrmuKOFhuRRiTWN94iLKUFu6ZX4ceb#scrollTo=td0bmYW_i_PB
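
For anyone taking that route, a rough sketch of GPTQ + PEFT through Transformers (my paraphrase, not taken from the notebook; the LoRA hyperparameters and target module names are illustrative assumptions):

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "TheBloke/StableBeluga2-70B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Transformers reads the GPTQ quantization config shipped with the repo.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

model = prepare_model_for_kbit_training(model)
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed Llama-style projection names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
# From here a regular transformers.Trainer loop handles the fine-tuning.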

@hzgdeerHo

hzgdeerHo commented Sep 16, 2023

I can run inference normally with your code, but I cannot figure out why I cannot fine-tune. Thanks!

@bonuschild

remove model_basename or set its value to model. The safetensors file is now called model.safetensors, and 'model' is now set as model_basename in quantize_config.json so you don't need to pass model_basename to .from_quantized() any more

But I still fail on AutoGPTQ 0.4.2... would you mind looking into mzbac/AutoGPTQ-API#6? I am so confused.

@fxmarty
Collaborator

fxmarty commented Nov 1, 2023

@bonuschild This is fixed on main (if you build from source) and will be included in the next release.

@bonuschild

@bonuschild This is fixed on main (if you build from source) and will be included in the next release.

Such good news! I will give it a try while waiting for the new release. Do we have a schedule for when the next version will be released?

@fxmarty
Collaborator

fxmarty commented Nov 1, 2023

Tomorrow :D

@bonuschild

Tomorrow :D

👍 awesome :)

@bp020108

bp020108 commented Feb 8, 2024

model_name_or_path

How did you solve this?

@bp020108

bp020108 commented Feb 9, 2024

Can you please help here? I get a similar error when I use TheBloke/WizardLM-30B-Uncensored-GPTQ. I don't see the error when I use the 7B GGUF model.

This is from the constants.py file where I select the model:

MODEL_ID = "TheBloke/WizardLM-30B-Uncensored-GPTQ"
MODEL_BASENAME = "WizardLM-30B-Uncensored-GPTQ-4bit.act-order.safetensors"

2024-02-09 22:59:28,557 - INFO - run_localGPT.py:244 - Running on: cuda
2024-02-09 22:59:28,557 - INFO - run_localGPT.py:245 - Display Source Documents set to: False
2024-02-09 22:59:28,557 - INFO - run_localGPT.py:246 - Use history set to: False
2024-02-09 22:59:28,823 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
/home/attcloud/miniconda3/envs/GPT_NetAI_Bhavik/lib/python3.11/site-packages/torch/_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
max_seq_length 512
2024-02-09 22:59:29,430 - INFO - run_localGPT.py:132 - Loaded embeddings from hkunlp/instructor-large
2024-02-09 22:59:29,501 - INFO - run_localGPT.py:60 - Loading Model: TheBloke/WizardLM-30B-Uncensored-GPTQ, on: cuda
2024-02-09 22:59:29,502 - INFO - run_localGPT.py:61 - This action can take a few minutes!
2024-02-09 22:59:29,502 - INFO - load_models.py:94 - Using AutoGPTQForCausalLM for quantized models
2024-02-09 22:59:29,734 - INFO - load_models.py:101 - Tokenizer loaded
Traceback (most recent call last):
  File "/home/attcloud/miniconda3/LLAMA/localchat/run_localGPT.py", line 285, in <module>
    main()
  File "/home/miniconda3/envs/GPT_NetAI_Bhavik/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/miniconda3/envs/GPT_NetAI_Bhavik/lib/python3.11/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/home/miniconda3/envs/GPT_NetAI_Bhavik/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/miniconda3/envs/GPT_NetAI_Bhavik/lib/python3.11/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/miniconda3/LLAMA/localchat/run_localGPT.py", line 252, in main
    qa = retrieval_qa_pipline(device_type, use_history, promptTemplate_type=model_type)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/miniconda3/LLAMA/localchat/run_localGPT.py", line 142, in retrieval_qa_pipline
    llm = load_model(device_type, model_id=MODEL_ID, model_basename=MODEL_BASENAME, LOGGING=logging)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/miniconda3/LLAMA/localchat/run_localGPT.py", line 72, in load_model
    model, tokenizer = load_quantized_model_qptq(model_id, model_basename, device_type, LOGGING)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/miniconda3/LLAMA/localchat/load_models.py", line 103, in load_quantized_model_qptq
    model = AutoGPTQForCausalLM.from_quantized(
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/miniconda3/envs/GPT_NetAI_Bhavik/lib/python3.11/site-packages/auto_gptq/modeling/auto.py", line 82, in from_quantized
    return quant_func(
           ^^^^^^^^^^^
  File "/home//miniconda3/envs/GPT_NetAI_Bhavik/lib/python3.11/site-packages/auto_gptq/modeling/_base.py", line 698, in from_quantized
    raise FileNotFoundError(f"Could not find model in {model_name_or_path}")
FileNotFoundError: Could not find model in TheBloke/WizardLM-30B-Uncensored-GPTQ
