Hi, I am trying to build a Flask API version of generate.py. Before that, I tried to run it on my server and hit an error that is probably related to the default YAML file distributed with the repo.
```yaml
# model/tokenizer
model_name: # REPLACE HERE with the base llama model
tokenizer_name: # REPLACE HERE with the llama tokenizer
lora: true
lora_path: "/nomic-ai/gpt4all-lora"
max_new_tokens: 512
temperature: 0
prompt: null
```
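A side note on this config: leaving the `# REPLACE HERE` placeholders untouched means `model_name` and `tokenizer_name` parse as YAML null, and a leading slash in `lora_path` is not a valid Hugging Face Hub repo id (those have the shape `name` or `namespace/name`). A minimal validation sketch — the `validate_config` helper is hypothetical, not part of generate.py:

```python
import re

# A Hub repo id is "name" or "namespace/name": no leading slash, no empty parts.
REPO_ID_RE = re.compile(r"^[\w.-]+(/[\w.-]+)?$")

def validate_config(cfg: dict) -> list[str]:
    """Return a list of problems found in a generate.py-style config dict."""
    problems = []
    for key in ("model_name", "tokenizer_name"):
        if not cfg.get(key):
            problems.append(f"{key} is unset (placeholder left in the YAML?)")
    lora_path = cfg.get("lora_path") or ""
    if cfg.get("lora") and not REPO_ID_RE.match(lora_path):
        problems.append(f"lora_path {lora_path!r} is not a valid Hub repo id")
    return problems

# The config above, as it would parse with the placeholders left in:
print(validate_config({"model_name": None, "tokenizer_name": None,
                       "lora": True, "lora_path": "/nomic-ai/gpt4all-lora"}))
```

With the defaults shown, this reports three problems: both names unset and the slash-prefixed `lora_path`.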
The script generates this error:
```text
╭─────────────────────────── Traceback (most recent call last) ────────────────────────────╮
│ /Users/michel/micromamba/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py:23 │
│ 9 in hf_raise_for_status                                                                 │
│                                                                                          │
│   236 │                                                                                  │
│   237 │   """                                                                            │
│   238 │   try:                                                                           │
│ ❱ 239 │   │   response.raise_for_status()                                                │
│   240 │   except HTTPError as e:                                                         │
│   241 │   │   error_code = response.headers.get("X-Error-Code")                          │
│   242                                                                                    │
│                                                                                          │
│ /Users/michel/micromamba/lib/python3.9/site-packages/requests/models.py:1021 in          │
│ raise_for_status                                                                         │
│                                                                                          │
│   1018 │   │   │   )                                                                     │
│   1019 │   │                                                                             │
│   1020 │   │   if http_error_msg:                                                        │
│ ❱ 1021 │   │   │   raise HTTPError(http_error_msg, response=self)                        │
│   1022 │                                                                                 │
│   1023 │   def close(self):                                                              │
│   1024 │   │   """Releases the connection back to the pool. Once this method has been    │
╰──────────────────────────────────────────────────────────────────────────────────────────╯
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/gpt4all-lora/resolve/main/config.json
```
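The failing URL is informative on its own: the Hub serves a repo's config at `https://huggingface.co/<repo_id>/resolve/<revision>/config.json`, so the 404 shows the loader ended up with the bare id `gpt4all-lora` instead of the namespaced `nomic-ai/gpt4all-lora`. A small sketch of that mapping (the `hub_config_url` helper is mine, written to mirror the URL layout visible in the traceback):

```python
def hub_config_url(repo_id: str, revision: str = "main") -> str:
    # Mirrors the huggingface.co resolve-URL layout seen in the traceback above.
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/config.json"

# The id the loader actually used (matches the failing 404 URL):
print(hub_config_url("gpt4all-lora"))
# The id the config presumably intended:
print(hub_config_url("nomic-ai/gpt4all-lora"))
```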
Can you please help me?
By the way, gpt4all-lora-quantized.bin works fine when run via ./gpt4all-lora-quantized-OSX-m1
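As for the Flask wrapper mentioned at the top, once loading works it could start from a skeleton like this — a sketch under stated assumptions, where `run_generate` is a hypothetical stand-in for generate.py's model call, not its actual API:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_generate(prompt: str) -> str:
    # Hypothetical stand-in for generate.py's model call;
    # replace with the real tokenizer/model invocation once loading works.
    return f"echo: {prompt}"

@app.route("/generate", methods=["POST"])
def generate():
    # Expects a JSON body like {"prompt": "..."}.
    payload = request.get_json(force=True)
    return jsonify({"response": run_generate(payload.get("prompt", ""))})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Loading the model once at startup (rather than per request) is the usual choice here, since an LLM load takes far longer than any single request.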