
ValueError: Can't find 'adapter_config.json' #1334

Closed
2 of 4 tasks
lyc728 opened this issue Dec 12, 2023 · 18 comments

Comments

@lyc728

lyc728 commented Dec 12, 2023

System Info


Information

  • Docker
  • The CLI directly

Tasks

  • An officially supported command
  • My own modifications

Reproduction

File "/usr/local/lib/python3.9/dist-packages/peft/utils/config.py", line 117, in from_pretrained
config_file = hf_hub_download(
File "/usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
validate_repo_id(arg_value)
File "/usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_validators.py", line 158, in validate_repo_id
raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/data/liuyuanchao/text-generation-inference-main/tiiuae'. Use repo_type argument if needed.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/data/text-generation-inference-main/1.py", line 9, in
model = AutoPeftModelForCausalLM.from_pretrained(
File "/usr/local/lib/python3.9/dist-packages/peft/auto.py", line 69, in from_pretrained
peft_config = PeftConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/peft/utils/config.py", line 121, in from_pretrained
raise ValueError(f"Can't find '{CONFIG_NAME}' at '{pretrained_model_name_or_path}'")
ValueError: Can't find 'adapter_config.json' at '/data/text-generation-inference-main/tiiuae'
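
For reference, a minimal script of roughly this shape reproduces the error. This is a reconstruction from the traceback above, not the author's actual 1.py; the path is simply a local directory that lacks adapter_config.json:

# Hypothetical minimal reproduction, reconstructed from the traceback:
# pointing PEFT at a local directory without adapter_config.json makes it
# fall back to the Hub, where the absolute path is rejected as a repo id.
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained(
    "/data/text-generation-inference-main/tiiuae"
)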

Expected behavior

File "/usr/local/lib/python3.9/dist-packages/peft/utils/config.py", line 117, in from_pretrained
config_file = hf_hub_download(
File "/usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
validate_repo_id(arg_value)
File "/usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_validators.py", line 158, in validate_repo_id
raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/data/liuyuanchao/text-generation-inference-main/tiiuae'. Use repo_type argument if needed.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/data/text-generation-inference-main/1.py", line 9, in
model = AutoPeftModelForCausalLM.from_pretrained(
File "/usr/local/lib/python3.9/dist-packages/peft/auto.py", line 69, in from_pretrained
peft_config = PeftConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/peft/utils/config.py", line 121, in from_pretrained
raise ValueError(f"Can't find '{CONFIG_NAME}' at '{pretrained_model_name_or_path}'")
ValueError: Can't find 'adapter_config.json' at '/data/text-generation-inference-main/tiiuae'

@lyc728
Author

lyc728 commented Dec 12, 2023

I hit this error while loading a local model.

@Vitaliy-Firebird

I've been experiencing the exact same issue for the past couple of days. I keep digging.

@Vitaliy-Firebird

Here’s a link to the older issue that describes the same problem #1283

@lyc728
Author

lyc728 commented Dec 12, 2023

Here’s a link to the older issue that describes the same problem #1283

Did you solve the problem? I cannot find #1283

@Vitaliy-Firebird

Here’s a link to the older issue that describes the same problem #1283

Did you solve the problem? I cannot find #1283

Nope, still trying to figure it out.

@kmuk1234

I get the same error.
Did you solve the problem?

@lyc728
Author

lyc728 commented Dec 13, 2023

I get the same error. Did you solve the problem?

No, and it's a big problem.

@OlivierDehaene
Member

#1347 should fix this.

@oOraph
Contributor

oOraph commented Dec 15, 2023

#1347 should fix this.

Hmm, no, it won't. In short, here is what triggers the issue:

The launcher execs:

text-generation-server download-weights $MODEL_ID_OR_DIR --extension '.safetensors' --logger-level INFO --json-output

If MODEL_ID_OR_DIR is a local directory, the text-generation-server program first looks for local weights with the specified extension at the root of that directory, as one can see here:

if Path(model_id).exists() and Path(model_id).is_dir():

called here
https://github.com/huggingface/text-generation-inference/blob/main/server/text_generation_server/cli.py#L128

If no files are found, a series of fallbacks in the exception handling leads to the final error, which is misleading and should be disregarded here.
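
A simplified sketch of that lookup (a paraphrase of the behaviour described above, not the actual cli.py source; the function name is a stand-in):

from pathlib import Path

# Hypothetical paraphrase of the local-weights check: only files sitting
# directly at the root of the directory are found; weights nested under
# e.g. snapshots/<revision>/ are invisible to this glob.
def local_weight_files(model_id: str, extension: str = ".safetensors") -> list:
    p = Path(model_id)
    if p.exists() and p.is_dir():
        return list(p.glob(f"*{extension}"))
    return []  # not a local directory: model_id is treated as a Hub repo id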

So, for the call with a local dir to work, two things must hold:

  1. The weights are at the root of the specified directory.
  2. The weights are in safetensors format.

For 1., if one prefetched the model from the Hub, one should not specify the model's cache root directory but the specific revision (snapshot) directory directly.
For example, assuming we've just prefetched gpt2 from the Hub,

text-generation-launcher --model-id /data/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/

will work, but

text-generation-launcher --model-id /data/models--gpt2/

will not, because in the Hub cache layout there is indeed no weight file at the root.
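
For context, the cache layout produced by prefetching from the Hub looks roughly like this (directory names follow the huggingface_hub cache convention; the exact file set varies per model):

models--gpt2/
├── blobs/
├── refs/
│   └── main
└── snapshots/
    └── 11c5a3d5811f50298f278a704980280950aedb10/
        ├── config.json
        └── model.safetensors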

I don't know whether we should change any behaviour in the code or fix anything; I'm just explaining what I see happening here.

@AlexBlack2202

I have the same problem. Has anyone fixed this?

@AlexBlack2202

I know the answer: you must convert the model from .bin to safetensors format.

@qijiaxing

Same issue.

@philschmid
Member

Should be fixed in #1364. Can you please try?

@likejazz

Same here. #1364 didn't work.

@likejazz

I solved this issue by converting the model from .bin to safetensors using convert.py:
https://github.com/huggingface/safetensors/blob/main/bindings/python/convert.py
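
The same conversion can also be sketched directly with the safetensors library. This is a minimal version assuming a single standalone pytorch_model.bin; convert.py additionally handles shared/tied tensors and Hub uploads, which this sketch does not:

import torch
from safetensors.torch import save_file

# Load the .bin checkpoint on CPU and re-save it in safetensors format.
state_dict = torch.load("pytorch_model.bin", map_location="cpu")
# safetensors requires contiguous tensors with no shared storage.
state_dict = {k: v.contiguous() for k, v in state_dict.items()}
save_file(state_dict, "model.safetensors", metadata={"format": "pt"})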

@ZeroYuJie

v1.1.1 works fine, but the latest v1.3.4 is giving the same error.

@OlivierDehaene
Member

Fixed by #1419

@sam-h-bean

sam-h-bean commented Jan 17, 2024

@OlivierDehaene I am seeing this issue with .safetensors, not .bin. Do you expect your change to fix my issue as well?

Also, would it be possible to get these local-loading changes released?
