Describe the feature
Will OpenCompass support vLLM in its inference pipeline?
Although LMDeploy supports many models, I may need to test other models such as BLOOM, GPT BigCode, and Falcon, so I wonder whether OpenCompass will officially support vLLM as a backend in the future.
If I implement it myself, are there any suggestions or guidelines?
Will you implement it? I would like to implement this feature and create a PR!
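For anyone picking this up, a vLLM-backed model wrapper could look roughly like the sketch below. The `VLLMModel` class name, constructor parameters, and `generate` signature are assumptions chosen to resemble OpenCompass's model interface, not its actual API; `LLM` and `SamplingParams` are vLLM's real offline-inference classes.

```python
from typing import List


class VLLMModel:
    """Hypothetical OpenCompass-style wrapper around vLLM's offline LLM API.

    vLLM is imported lazily inside __init__ so this module can be
    loaded and inspected without vLLM (or a GPU) installed.
    """

    def __init__(self, path: str, max_out_len: int = 256,
                 temperature: float = 0.0):
        from vllm import LLM, SamplingParams  # vLLM's public API
        # Loads the model weights (e.g. BLOOM, GPT BigCode, Falcon)
        # onto the available GPU(s).
        self.llm = LLM(model=path)
        self.sampling_params = SamplingParams(
            temperature=temperature, max_tokens=max_out_len)

    def generate(self, inputs: List[str]) -> List[str]:
        # vLLM batches all prompts internally and returns one
        # RequestOutput per prompt, in the same order.
        outputs = self.llm.generate(inputs, self.sampling_params)
        return [out.outputs[0].text for out in outputs]
```

Usage would be along the lines of `VLLMModel("bigscience/bloom-560m").generate(["Hello"])`. Wiring this into OpenCompass would additionally require a model config entry; the existing HuggingFace model wrapper is probably the best template to follow.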