
Error when specifying adapter_id with invalid base_model_name_or_path #311

Closed
sleepwalker2017 opened this issue Mar 7, 2024 · 3 comments · Fixed by #317
Labels
bug Something isn't working

Comments

@sleepwalker2017 commented Mar 7, 2024

I created a container using the Docker image.

I use this command to launch the server:

lorax-launcher --model-id /data/vicuna-13b/vicuna-13b-v1.5/ --sharded true --num-shard 2

Then I send this request:

input_dict = {
        "inputs": test_data[idx],
        "parameters": {
            "adapter_id": "merror/llama_13b_lora_beauty",
            "max_new_tokens": 256,
            "top_p": 0.7
        }
    }
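For context, here is a minimal sketch of how such a request could be posted to the running server. It assumes LoRAX's `/generate` REST endpoint; the host, port, and prompt are illustrative and may differ from the actual deployment:

```python
import requests

# Illustrative URL; adjust host/port to wherever lorax-launcher is serving.
url = "http://127.0.0.1:8080/generate"

input_dict = {
    "inputs": "What is deep learning?",  # placeholder prompt
    "parameters": {
        "adapter_id": "merror/llama_13b_lora_beauty",
        "max_new_tokens": 256,
        "top_p": 0.7,
    },
}

response = requests.post(url, json=input_dict)
response.raise_for_status()
print(response.json())
```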

Here is the error log:

requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/decapoda-research/llama-13b-hf/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/transformers/utils/hub.py", line 398, in cached_file
    resolved_file = hf_hub_download(
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1374, in hf_hub_download
    raise head_call_error
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1247, in hf_hub_download
    metadata = get_hf_file_metadata(
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1624, in get_hf_file_metadata
    r = _request_wrapper(
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 402, in _request_wrapper
    response = _request_wrapper(
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 426, in _request_wrapper
    hf_raise_for_status(response)
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 320, in hf_raise_for_status
    raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-65e97299-2caafa91151782b16bfe4f93;8aa43704-fa2c-4ccc-8536-033fea7a044c)

Repository Not Found for url: https://huggingface.co/decapoda-research/llama-13b-hf/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.

Why does it merge the adapter with the base model to generate a new model?

@thincal (Contributor) commented Mar 7, 2024

> input_dict = {
>         "inputs": test_data[idx],
>         "parameters": {
>             "adapter_id": "merror/llama_13b_lora_beauty",
>             "max_new_tokens": 256,
>             "top_p": 0.7
>         }
>     }

If the adapter_id points to a path on the local filesystem, "adapter_source": "local" is also required.
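For example, a sketch of such a request (the local adapter path below is illustrative):

```python
input_dict = {
    "inputs": test_data[idx],
    "parameters": {
        # The adapter lives on the local filesystem inside the container,
        # so its source must be declared explicitly.
        "adapter_id": "/data/adapters/llama_13b_lora_beauty",  # illustrative path
        "adapter_source": "local",
        "max_new_tokens": 256,
        "top_p": 0.7,
    },
}
```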

@sleepwalker2017 (Author) commented

> input_dict = {
>         "inputs": test_data[idx],
>         "parameters": {
>             "adapter_id": "merror/llama_13b_lora_beauty",
>             "max_new_tokens": 256,
>             "top_p": 0.7
>         }
>     }
>
> If the adapter_id points to a path on the local filesystem, "adapter_source": "local" is also required.

Hi, the adapter is from the Hub.
I wonder why it looks for the decapoda-research/llama-13b-hf repo.

@tgaddair tgaddair added the bug Something isn't working label Mar 10, 2024
@tgaddair tgaddair changed the title Error when specifying adapter_id Error when specifying adapter_id with invalid base_model_name_or_path Mar 10, 2024
@tgaddair (Contributor) commented

Hey @sleepwalker2017, I think I see the issue here. Looking at the adapter_config.json for this adapter, it lists the base_model_name_or_path as decapoda-research/llama-13b-hf. Because your base model is /data/vicuna-13b/vicuna-13b-v1.5 (different from decapoda-research/llama-13b-hf), LoRAX will attempt an additional check here to see whether the architectures of the adapter and the base model are compatible.

In this case, the architecture compatibility check fails because decapoda-research/llama-13b-hf no longer exists (or was made private).

I think the architecture check should not cause a hard failure if it can't be performed. I'll put together a PR to make this check non-fatal in this case.
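A rough sketch of the intended behavior, not the actual LoRAX internals (the function names here are hypothetical), might look like this: load both configs, compare their `architectures`, and downgrade any lookup failure to a warning.

```python
import warnings

from transformers import AutoConfig


def check_architectures(base_model_id: str, adapter_base_model_id: str) -> None:
    # Compare the `architectures` field of the server's base model and the
    # base model declared in the adapter's adapter_config.json.
    base = AutoConfig.from_pretrained(base_model_id)
    adapter_base = AutoConfig.from_pretrained(adapter_base_model_id)
    if base.architectures != adapter_base.architectures:
        raise ValueError(
            f"Adapter base model {adapter_base_model_id!r} has architectures "
            f"{adapter_base.architectures}, but the server base model has "
            f"{base.architectures}."
        )


def safe_check_architectures(base_model_id: str, adapter_base_model_id: str) -> None:
    # Best-effort version: if the adapter's declared base model repo cannot be
    # resolved (e.g. decapoda-research/llama-13b-hf was removed or made
    # private), warn and continue instead of failing the request.
    try:
        check_architectures(base_model_id, adapter_base_model_id)
    except Exception as exc:
        warnings.warn(f"Skipping architecture compatibility check: {exc}")
```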
