Make sure the correct version of ao is installed in CI #19704
base: main
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a subset of tests runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can add 🚀
Summary of Changes
Hello @drisspg, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request addresses an issue in the CI pipeline where the `torchao` library was being installed with a hardcoded CUDA version. The change updates the installation command to dynamically detect and use the appropriate CUDA version based on the environment, improving the robustness and flexibility of the CI tests across different CUDA configurations.
Highlights
- CI Configuration Update: Modified `.buildkite/test-pipeline.yaml` to dynamically determine the CUDA version for `torchao` installation within the CI pipeline.
- Dynamic CUDA Version Detection: Replaced the hardcoded `cu126` in the `torchao` installation URL with a shell command (`cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.')`) that extracts the major and minor version from the `CUDA_VERSION` environment variable, ensuring the correct PyTorch wheel index is used.
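As an illustrative sketch (not part of the PR itself), the version-to-suffix mapping performed by that shell command can be spot-checked locally:

```shell
#!/bin/sh
# Illustrative only: derive the PyTorch wheel index suffix from a CUDA
# version string, using the same pipeline as the PR
# (keep major.minor via cut, then strip the dot via tr).
for CUDA_VERSION in 12.8 12.6.3 11.8; do
  suffix="cu$(echo "$CUDA_VERSION" | cut -d. -f1,2 | tr -d '.')"
  echo "$CUDA_VERSION -> $suffix"
done
# prints:
# 12.8 -> cu128
# 12.6.3 -> cu126
# 11.8 -> cu118
```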
cc @huydhn, @houseroad
Code Review
The pull request updates the CI pipeline to dynamically determine the CUDA version for installing `torchao`, replacing a hardcoded version. This is a good improvement for flexibility.
My review focuses on the shell command used to parse the `CUDA_VERSION` environment variable. I've identified a potential robustness issue with the current parsing logic and suggested an alternative using `grep` for more reliable extraction of the major and minor version components, along with ensuring the variable is quoted. This should make the URL construction less prone to errors if `CUDA_VERSION` has slight variations in its format.
```diff
@@ -432,7 +432,7 @@ steps:
   commands:
     # temporary install here since we need nightly, will move to requirements/test.in
     # after torchao 0.12 release
-    - pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu126
+    - pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.')
```
The current method of parsing `CUDA_VERSION` using `echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.'` has a couple of potential robustness issues:

- Unquoted Variable: `$CUDA_VERSION` is unquoted. While version strings are typically simple, unquoted variables can lead to unexpected behavior (like word splitting or globbing) if `CUDA_VERSION` were to contain spaces or shell metacharacters.
- Parsing Brittleness: The `cut -d. -f1,2` command assumes a fairly clean `X.Y` or `X.Y.Z` structure. If `CUDA_VERSION` contains additional descriptive text (e.g., `11.7, V11.7.99`) or doesn't strictly adhere to this, the parsing might produce an incorrect suffix for the URL (e.g., `cu117, V11` instead of `cu117`), potentially causing the `pip install` command to fail or fetch an unintended package.

A more robust approach is to explicitly quote the variable and use `grep` with a regular expression to extract the `X.Y` part of the version. This ensures that `CUDA_VERSION` must start with a valid `major.minor` pattern and discards any subsequent patch versions or extraneous text, leading to a more reliable URL construction.
- pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu$(echo "$CUDA_VERSION" | grep -oE '^[0-9]+\.[0-9]+' | tr -d '.')
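A hedged sketch (not from the PR) comparing how the two pipelines behave on the malformed input the reviewer mentions:

```shell
#!/bin/sh
# Illustrative comparison: parsing a messy version string such as
# "11.7, V11.7.99" with each approach.
v="11.7, V11.7.99"

# cut-based pipeline: splits on every '.', so stray text leaks through
# into the second field.
echo "$v" | cut -d. -f1,2 | tr -d '.'                # prints: 117, V11

# grep-based pipeline: anchors on a leading major.minor pattern and
# discards everything after it.
echo "$v" | grep -oE '^[0-9]+\.[0-9]+' | tr -d '.'   # prints: 117
```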
Head branch was pushed to by a user without write access
Force-pushed from 0fc6e80 to aba6c7b
Can you merge from main to see if the CI failures can be resolved?
Signed-off-by: drisspg <drisspguessous@gmail.com>
Force-pushed from aba6c7b to 1066dab
Essential Elements of an Effective PR Description Checklist
Purpose
Fix hardcoded CUDA version in torchao installation to use dynamic CUDA version detection
Test Plan
Run the quantization test pipeline with different CUDA versions to ensure torchao is installed from the correct index URL:
```shell
# The pipeline will now use the appropriate CUDA version
CUDA_VERSION=12.8 .buildkite/test-pipeline.yaml
```
Test Result
The torchao package will be installed from the correct PyTorch wheel index based on the system's CUDA version (e.g., cu128 for CUDA 12.8, cu121 for CUDA 12.1) instead of always using cu126.
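As an illustrative sketch (assumed, not part of the PR), the full index URL that the updated pipeline command would construct can be previewed for a given CUDA version:

```shell
#!/bin/sh
# Illustrative only: build the full nightly index URL for a given
# CUDA version, matching the examples above (cu128 for CUDA 12.8).
CUDA_VERSION=12.8
url="https://download.pytorch.org/whl/nightly/cu$(echo "$CUDA_VERSION" | cut -d. -f1,2 | tr -d '.')"
echo "$url"   # prints: https://download.pytorch.org/whl/nightly/cu128
```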