Introduce CMake workflow #15804
Conversation
The CMake workflow combines the configure and build steps into one command. Instead of doing:
```
cmake --preset llm \
-DEXECUTORCH_BUILD_CUDA=ON \
-DCMAKE_INSTALL_PREFIX=cmake-out \
-DCMAKE_BUILD_TYPE=Release \
-Bcmake-out -S.
cmake --build cmake-out -j$(nproc) --target install --config Release
```
we can simply run `cmake --workflow llm-release-cuda`. This greatly reduces the burden of running these CMake commands.
Next, I'm going to create workflows for the popular runners (llama, whisper, voxtral, etc.) and further simplify the build command.
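For context, this relies on CMake's workflow presets (available since CMake 3.25), which chain configure and build steps defined in `CMakePresets.json`. Below is a minimal sketch of what an `llm-release-cuda` workflow could look like; apart from the existing `llm` preset and the cache variables shown in the manual commands above, every name and field here is illustrative and may differ from what the PR actually adds.
```
{
  "version": 6,
  "configurePresets": [
    {
      "name": "llm-release-cuda",
      "inherits": "llm",
      "binaryDir": "${sourceDir}/cmake-out",
      "cacheVariables": {
        "EXECUTORCH_BUILD_CUDA": "ON",
        "CMAKE_BUILD_TYPE": "Release",
        "CMAKE_INSTALL_PREFIX": "${sourceDir}/cmake-out"
      }
    }
  ],
  "buildPresets": [
    {
      "name": "llm-release-cuda-install",
      "configurePreset": "llm-release-cuda",
      "configuration": "Release",
      "targets": ["install"]
    }
  ],
  "workflowPresets": [
    {
      "name": "llm-release-cuda",
      "steps": [
        { "type": "configure", "name": "llm-release-cuda" },
        { "type": "build", "name": "llm-release-cuda-install" }
      ]
    }
  ]
}
```
With presets like these in place, `cmake --workflow --preset llm-release-cuda` (or the shorthand shown above, depending on the CMake version) performs the configure and build/install steps in one invocation.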
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15804
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure, 8 Pending, 6 Unrelated Failures as of commit 8dd6a3c with merge base 6de1f4e. The unrelated failures were either flaky or already broken on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
- Uses `llm-release` configure preset (sets `CMAKE_BUILD_TYPE=Release`)
- Uses `llm-release-install` build preset (builds the `install` target with parallel jobs)
- Installs artifacts to the `cmake-out/` directory
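For illustration, the two presets named in these bullets might look roughly like the sketch below in `CMakePresets.json`; the `inherits` value and job count are assumptions, not taken from the PR.
```
{
  "version": 6,
  "configurePresets": [
    {
      "name": "llm-release",
      "inherits": "llm",
      "binaryDir": "${sourceDir}/cmake-out",
      "cacheVariables": { "CMAKE_BUILD_TYPE": "Release" }
    }
  ],
  "buildPresets": [
    {
      "name": "llm-release-install",
      "configurePreset": "llm-release",
      "targets": ["install"],
      "jobs": 8
    }
  ]
}
```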
We can add some instructions on how to add a new workflow. Is this correct?
To add a new workflow:
1. Add a configure preset, e.g. `new-workflow`
2. Add a build preset that depends on (1), e.g. `new-workflow-install`
3. You should be able to run `cmake --workflow new-workflow-install` (see the sketch after this list)
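In stock CMake, `cmake --workflow` resolves names from the `workflowPresets` list, so unless this PR wires that up automatically, a workflow preset entry is typically also needed alongside the configure and build presets. A minimal, hypothetical sketch of the three pieces (all names illustrative):
```
{
  "version": 6,
  "configurePresets": [
    { "name": "new-workflow", "binaryDir": "${sourceDir}/cmake-out" }
  ],
  "buildPresets": [
    { "name": "new-workflow-install", "configurePreset": "new-workflow", "targets": ["install"] }
  ],
  "workflowPresets": [
    {
      "name": "new-workflow-install",
      "steps": [
        { "type": "configure", "name": "new-workflow" },
        { "type": "build", "name": "new-workflow-install" }
      ]
    }
  ]
}
```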
Yeah. Let me add your words into README.md
lucylq left a comment:
This is a great change, thank you!
Merged commit 59a72cd into gh/larryliu0820/81/base.
This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #15804 by @larryliu0820 ^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/larryliu0820/81/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/larryliu0820/81/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/main
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/larryliu0820/81/orig
@diff-train-skip-merge
Co-authored-by: Mengwei Liu <larryliu@meta.com>