
Issue using generate.py #52

Closed
mikiane opened this issue Mar 29, 2023 · 3 comments
mikiane commented Mar 29, 2023

Hi, I am trying to build a Flask API version of generate.py. Before that, I tried to run it on my server and hit an error, which is probably related to the default YAML config file distributed with the repo.
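For context, a minimal sketch of the kind of Flask wrapper I have in mind, assuming generate.py exposes (or is refactored to expose) a generate(prompt) -> str helper; the function name here is hypothetical, and the placeholder body stands in for the real model call:

```python
# Minimal Flask wrapper sketch around a hypothetical generate() helper.
from flask import Flask, request, jsonify

app = Flask(__name__)

def generate(prompt: str) -> str:
    # Placeholder standing in for the real model call in generate.py.
    return f"echo: {prompt}"

@app.route("/generate", methods=["POST"])
def generate_endpoint():
    # Expect a JSON body like {"prompt": "..."} and return the model output.
    prompt = request.get_json(force=True).get("prompt", "")
    return jsonify({"response": generate(prompt)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```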

# model/tokenizer

model_name: # REPLACE HERE with the base llama model
tokenizer_name: # REPLACE HERE with the llama tokenizer
lora: true
lora_path: "/nomic-ai/gpt4all-lora"

max_new_tokens: 512
temperature: 0
prompt: null
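For comparison, a filled-in sketch of the same config; the model and tokenizer names below are illustrative placeholders, not the values the repo necessarily expects, and dropping the leading slash from lora_path is only one plausible fix:

```yaml
# model/tokenizer
model_name: "decapoda-research/llama-7b-hf"      # example base model, replace as appropriate
tokenizer_name: "decapoda-research/llama-7b-hf"  # example tokenizer, replace as appropriate
lora: true
lora_path: "nomic-ai/gpt4all-lora"               # note: no leading slash

max_new_tokens: 512
temperature: 0
prompt: null
```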

The script generates this error:

Traceback (most recent call last):
  File "/Users/michel/micromamba/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py", line 239, in hf_raise_for_status
    response.raise_for_status()
  File "/Users/michel/micromamba/lib/python3.9/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
HTTPError: 404 Client Error: Not Found for url:
https://huggingface.co/gpt4all-lora/resolve/main/config.json
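The 404 URL suggests that only "gpt4all-lora" (without the organization) reached the Hub. A small sketch of a sanity check I could add before loading, assuming the problem is the shape of the repo id; check_repo_id is a hypothetical helper, not part of generate.py:

```python
import re

# Hugging Face repo ids have the form "org/name"; a leading slash or a
# missing org component is one plausible cause of a 404 like the one above.
REPO_ID_RE = re.compile(r"^[\w.-]+/[\w.-]+$")

def check_repo_id(repo_id: str) -> str:
    """Strip a stray leading slash and verify the id looks like 'org/name'."""
    cleaned = repo_id.strip().lstrip("/")
    if not REPO_ID_RE.match(cleaned):
        raise ValueError(f"{repo_id!r} does not look like an 'org/name' repo id")
    return cleaned

print(check_repo_id("/nomic-ai/gpt4all-lora"))  # -> nomic-ai/gpt4all-lora
```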

Can you please help me?

By the way, gpt4all-lora-quantized.bin works perfectly using ./gpt4all-lora-quantized-OSX-m1

@bstadt
Contributor

bstadt commented Mar 29, 2023

Related to #18. We are making an easy GPU generation wrapper right now; stay tuned.

@bstadt bstadt self-assigned this Mar 29, 2023
@rgstephens

@AndriyMulyar Where was this completed? Looking for references.

@AndriyMulyar
Contributor

See the readme in gpt4all-training.
