Can't run LLaVA-NeXT
```bash
export server_port=30002
export CUDA_VISIBLE_DEVICES="2,3"

python -m sglang.launch_server \
  --model-path lmms-lab/llava-next-110b \
  --tokenizer-path lmms-lab/llavanext-qwen-tokenizer \
  --port=$server_port \
  --host="0.0.0.0" \
  --tp-size=2 \
  --random-seed=1234 \
  --context-length=32768
```
```
Traceback (most recent call last):
  File "/home/ubuntu/miniconda3/envs/sglang/lib/python3.10/site-packages/rpyc/core/protocol.py", line 369, in _dispatch_request
    res = self._HANDLERS[handler](self, *args)
  File "/home/ubuntu/miniconda3/envs/sglang/lib/python3.10/site-packages/rpyc/core/protocol.py", line 863, in _handle_call
    return obj(*args, **dict(kwargs))
  File "/home/ubuntu/miniconda3/envs/sglang/lib/python3.10/site-packages/sglang/srt/managers/router/model_rpc.py", line 76, in __init__
    self.model_runner = ModelRunner(
  File "/home/ubuntu/miniconda3/envs/sglang/lib/python3.10/site-packages/sglang/srt/managers/router/model_runner.py", line 285, in __init__
    self.load_model()
  File "/home/ubuntu/miniconda3/envs/sglang/lib/python3.10/site-packages/sglang/srt/managers/router/model_runner.py", line 294, in load_model
    model_class = get_model_cls_by_arch_name(architectures)
  File "/home/ubuntu/miniconda3/envs/sglang/lib/python3.10/site-packages/sglang/srt/managers/router/model_runner.py", line 57, in get_model_cls_by_arch_name
    raise ValueError(
ValueError: Unsupported architectures: LlavaQwenForCausalLM. Supported list: ['CohereForCausalLM', 'DbrxForCausalLM', 'GemmaForCausalLM', 'LlamaForCausalLM', 'LlavaLlamaForCausalLM', 'LlavaVidForCausalLM', 'MistralForCausalLM', 'MixtralForCausalLM', 'QWenLMHeadModel', 'Qwen2ForCausalLM', 'StableLmForCausalLM', 'YiVLForCausalLM']
```
The same error occurs with https://huggingface.co/lmms-lab/llava-next-72b
This happens despite the example at https://github.com/sgl-project/sglang/blob/main/examples/usage/llava/http_qwen_llava_test.py, which suggests the model is supported.
Should I install from main?
Yes, installing from main seems to work.
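For anyone hitting the same error: a minimal sketch of installing from source, assuming the standard sglang repo layout (the `python[all]` extra and the `/generate` request shape below are taken from the project's usual conventions; double-check against the current README):

```bash
# Install sglang from the main branch instead of the PyPI release,
# since the released version predates LlavaQwenForCausalLM support.
git clone https://github.com/sgl-project/sglang.git
cd sglang
pip install -e "python[all]"

# After relaunching the server with the command above, a quick text-only
# sanity check against the /generate endpoint (port 30002 as in the repro):
curl -s http://localhost:30002/generate \
  -H "Content-Type: application/json" \
  -d '{"text": "Hello", "sampling_params": {"max_new_tokens": 16}}'
```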