Add lora test using qwen #16161
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16161
Note: Links to docs will display an error until the docs builds have been completed.
⏳ No Failures, 191 Pending — as of commit 84d40b3 with merge base 56e131b.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
```shell
source "$(dirname "${BASH_SOURCE[0]}")/utils.sh"

cmake_install_executorch_libraries() {
  echo "Installing libexecutorch.a, libextension_module.so, libportable_ops_lib.a"
```
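As a hypothetical, self-contained illustration of the `BASH_SOURCE` pattern the script uses above: resolving the script's own directory lets it source a sibling `utils.sh` no matter where the caller's working directory is. The file names below are made up for the demo and are not part of the PR.

```shell
# Create a throwaway directory holding a helper and a script that sources it.
tmp="$(mktemp -d)"

cat > "$tmp/utils.sh" <<'EOF'
greet() { echo "hello from utils"; }
EOF

cat > "$tmp/main.sh" <<'EOF'
source "$(dirname "${BASH_SOURCE[0]}")/utils.sh"
greet
EOF

# Invoke from an unrelated working directory; sourcing still works because
# dirname "${BASH_SOURCE[0]}" points at the script's own location, not $PWD.
out="$(cd / && bash "$tmp/main.sh")"
echo "$out"
rm -rf "$tmp"
```

A plain `source utils.sh` would instead resolve relative to whatever directory CI happened to invoke the script from, which is why this pattern is common in `.ci/scripts`.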
Remove this? Seems outdated.
.ci/scripts/test_lora.sh (outdated diff)
```shell
echo "Building llama runner"
pushd extension/llm/tokenizers
echo "Updating tokenizers submodule"
git submodule update --init
popd
dir="examples/models/llama"
retry cmake \
  -DBUILD_TESTING=OFF \
  -DCMAKE_INSTALL_PREFIX=cmake-out \
  -DCMAKE_BUILD_TYPE=Release \
  -Bcmake-out/${dir} \
  ${dir}
cmake --build cmake-out/${dir} -j9 --config Release
```
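The `retry` used before `cmake` above comes from the sourced `utils.sh`; its actual implementation is in the repo, but a rough sketch of the semantics such CI helpers typically have (an assumption, not the real code) looks like this:

```shell
# Hypothetical sketch of a `retry` helper: rerun the command until it
# succeeds, up to three attempts, and propagate failure otherwise.
retry() {
  local attempt
  for attempt in 1 2 3; do
    "$@" && return 0
    echo "retry: attempt $attempt failed" >&2
  done
  return 1
}

# Demo: a command that fails twice and then succeeds on the third attempt.
counter="$(mktemp)"
flaky() {
  n="$(cat "$counter")"
  n=$((n + 1))
  echo "$n" > "$counter"
  [ "$n" -ge 3 ]
}
retry flaky && echo "succeeded on attempt $(cat "$counter")"
rm -f "$counter"
```

Wrapping only the configure step (`cmake`) and not the build is a common choice, since configure failures are more often transient (network fetches, flaky toolchain probes) than compile errors are.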
Try `make llama-cpu` instead.
Summary
Use qwen0.6B with unsloth (instead of llama1B with torchtune) for the LoRA test.
TODO: add a quantized test after #15951.