
[Bug]: Can dbrx not use Vllm? #3723

Closed
bank010 opened this issue Mar 29, 2024 · 4 comments
Labels
bug Something isn't working

Comments

bank010 commented Mar 29, 2024

Your current environment

python -m vllm.entrypoints.openai.api_server --model /data1/LLM/model/AI-ModelScope/dbrx-instruct --trust-remote-code

🐛 Describe the bug

ValueError: Model architectures ['DbrxForCausalLM'] are not supported for now. Supported architectures: ['AquilaModel', 'AquilaForCausalLM', 'BaiChuanForCausalLM', 'BaichuanForCausalLM', 'BloomForCausalLM', 'ChatGLMModel', 'ChatGLMForConditionalGeneration', 'DeciLMForCausalLM', 'DeepseekForCausalLM', 'FalconForCausalLM', 'GemmaForCausalLM', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTJForCausalLM', 'GPTNeoXForCausalLM', 'InternLMForCausalLM', 'InternLM2ForCausalLM', 'LlamaForCausalLM', 'LLaMAForCausalLM', 'MistralForCausalLM', 'MixtralForCausalLM', 'QuantMixtralForCausalLM', 'MptForCausalLM', 'MPTForCausalLM', 'OLMoForCausalLM', 'OPTForCausalLM', 'OrionForCausalLM', 'PhiForCausalLM', 'QWenLMHeadModel', 'Qwen2ForCausalLM', 'RWForCausalLM', 'StableLMEpochForCausalLM', 'StableLmForCausalLM', 'Starcoder2ForCausalLM']

bank010 added the bug label on Mar 29, 2024
ywang96 (Collaborator) commented Mar 29, 2024

Which version of vLLM are you on? Support for dbrx-instruct is currently only merged into the main branch.
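For reference, a quick way to check the installed version (not part of the original thread; either command below assumes a standard pip install of vLLM):

pip show vllm
python -c "import vllm; print(vllm.__version__)"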

bank010 (Author) commented Mar 29, 2024

> Which version of vLLM are you on? Support for dbrx-instruct is currently only merged into the main branch.

Name: vllm
Version: 0.3.3
Summary: A high-throughput and memory-efficient inference and serving engine for LLMs
Home-page: https://github.com/vllm-project/vllm
Author: vLLM Team
Author-email:
License: Apache 2.0

ywang96 (Collaborator) commented Mar 29, 2024

> Version: 0.3.3

You will need to build vLLM from source on the main branch in order to use dbrx-instruct.
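A minimal sketch of such a from-source build (the standard clone-and-install steps for vLLM; a working CUDA toolchain and compatible PyTorch are assumed):

git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -e .  # compiles the CUDA kernels, so this can take a while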

robertgshaw2-neuralmagic (Collaborator) commented

v0.4 should be good to go on this
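In other words, once a v0.4 release is published, a plain upgrade should pick up DBRX support without a source build (assuming the default PyPI index):

pip install --upgrade vllm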
