Reminder
I have read the README and searched the existing issues.
Reproduction
Traceback (most recent call last):
  File "/hy-tmp/llama-factory-1/LLaMA-Factory-main/src/api_demo.py", line 16, in <module>
    main()
  File "/hy-tmp/llama-factory-1/LLaMA-Factory-main/src/api_demo.py", line 9, in main
    chat_model = ChatModel()
  File "/hy-tmp/llama-factory-1/LLaMA-Factory-main/src/llmtuner/chat/chat_model.py", line 21, in __init__
    model_args, data_args, finetuning_args, generating_args = get_infer_args(args)
  File "/hy-tmp/llama-factory-1/LLaMA-Factory-main/src/llmtuner/hparams/parser.py", line 265, in get_infer_args
    raise ValueError("vLLM engine does not support LoRA adapters. Merge them first.")
ValueError: vLLM engine does not support LoRA adapters. Merge them first.
The LoRA adapter is rejected outright during argument parsing.
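As the error message says, the workaround is to merge the LoRA adapter into the base model first and point the vLLM backend at the merged checkpoint. Below is a minimal sketch of such a merge using the peft library's merge_and_unload; the paths base_model_path, adapter_path, and merged_path are placeholders for your own checkpoints, not names from this repository.

```python
# Minimal sketch: fold a LoRA adapter into its base model so the merged
# checkpoint can be served by the vLLM engine, which refuses adapters here.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_path = "path/to/base-model"  # placeholder
adapter_path = "path/to/lora-adapter"   # placeholder
merged_path = "path/to/merged-model"    # placeholder

base_model = AutoModelForCausalLM.from_pretrained(base_model_path)
model = PeftModel.from_pretrained(base_model, adapter_path)

# Merge the LoRA weights into the base weights and drop the adapter wrappers.
model = model.merge_and_unload()

model.save_pretrained(merged_path)
AutoTokenizer.from_pretrained(base_model_path).save_pretrained(merged_path)
```

After this, passing merged_path as the model path (with no adapter argument) should get past the check in get_infer_args.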
Expected behavior
No response
System Info
No response
Others
No response