update default model to resolve the vllm/model_executor issue #1985
Conversation
Dependency Review: ✅ No vulnerabilities or license issues found. Scanned files: none.
Pull Request Overview
This pull request updates the default model configuration to resolve the vllm/model_executor issue.
- Added an environment variable for HF_TOKEN in the Docker Compose file
- Assigned HF_TOKEN the same value as HUGGINGFACEHUB_API_TOKEN to meet service requirements
Files not reviewed (1)
- DocSum/docker_compose/intel/set_env.sh: Language not supported
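Since `set_env.sh` was not reviewed, here is a minimal sketch of what the relevant exports might look like; the actual file may differ, and the placeholder token value is an assumption for illustration:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the token exports in set_env.sh; the real script may differ.
# HUGGINGFACEHUB_API_TOKEN should already be set by the user; a placeholder is used otherwise.
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN:-"<your-hf-token>"}
# vLLM's model_executor reads HF_TOKEN, so mirror the existing token into it.
export HF_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
echo "HF_TOKEN=${HF_TOKEN}"
```

Sourcing the script (rather than executing it) makes both variables available to `docker compose`.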
```yaml
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
```
Copilot AI commented on May 21, 2025:
Please add an inline comment clarifying why HF_TOKEN is needed alongside HUGGINGFACEHUB_API_TOKEN, explaining whether they target different service behaviors or provide backward compatibility for the vllm/model_executor.
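The suggested inline comment could look like the following in the compose file's `environment` section; the comment wording is illustrative, not part of the actual change:

```yaml
environment:
  http_proxy: ${http_proxy}
  https_proxy: ${https_proxy}
  HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
  # vLLM's model_executor reads HF_TOKEN; mirror HUGGINGFACEHUB_API_TOKEN for compatibility.
  HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
```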
Signed-off-by: chensuyue <suyue.chen@intel.com>
Gaudi test failed with known issue opea-project/GenAIComps#1725
A workaround indeed. Just changing the default model is not going to fix the issue a user will have when they try to use some other model; i.e., this issue will need more investigation...
…roject#1985) Signed-off-by: chensuyue <suyue.chen@intel.com> Signed-off-by: alexsin368 <alex.sin@intel.com>
Description
Update default model to resolve the vllm/model_executor issue.
Issues
workaround for opea-project/GenAIComps#1719
Type of change
List the types of change below. Please delete options that are not relevant.
Dependencies
List any newly introduced third-party dependencies, if they exist.
Tests
Describe the tests that you ran to verify your changes.