
Conversation

@lucylq (Contributor) commented Dec 9, 2025

Summary

Use Qwen 0.6B with unsloth (instead of Llama 1B with torchtune) for the LoRA test.

  1. Smaller model / quicker test.
  2. A step toward eventually removing the torchtune dependency.
  3. Qwen is not gated on Hugging Face.

TODO: add quantized test after #15951
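For context, a minimal sketch of what the unsloth LoRA setup looks like in Python (the checkpoint name and LoRA hyperparameters below are illustrative assumptions, not the exact configuration this test uses):

    from unsloth import FastLanguageModel

    # Load a small Qwen checkpoint. "unsloth/Qwen3-0.6B" is an assumed name;
    # any ~0.6B Qwen checkpoint that unsloth supports would look the same.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/Qwen3-0.6B",
        max_seq_length=2048,
        load_in_4bit=False,
    )

    # Attach a LoRA adapter to the attention projections.
    # Rank and alpha here are placeholder values, not the test's settings.
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,
        lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    )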

Expected result prefix: 
<|im_start|>user Calculate 15% of 80?<|im_end|><|im_start|>assistant
To calculate 15% of 80, we can multiply 80 by 0.15.
80 * 0.15 = 12
So, 15% of 80 is 12.
#### 12
The answer is: 12<|im_end|>
Actual result: 
<|im_start|>user Calculate 15% of 80?<|im_end|><|im_start|>assistant
To calculate 15% of 80, we can multiply 80 by 0.15.
80 * 0.15 = 12
So, 15% of 80 is 12.
#### 12
The answer is: 12<|im_end|>

PyTorchObserver {"prompt_tokens":15,"generated_tokens":65,"model_load_start_ms":1765320124550,"model_load_end_ms":1765320127516,"inference_start_ms":1765320152867,"inference_end_ms":1765320178119,"prompt_eval_end_ms":1765320153334,"first_token_ms":1765320153334,"aggregate_sampling_time_ms":19,"SCALING_FACTOR_UNITS_PER_SECOND":1000}
Success
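As an aside, the PyTorchObserver line is plain JSON after the "PyTorchObserver " prefix, so the log can be turned into a rough throughput number. A small sketch (the line below is copied verbatim from the log above):

    import json

    # Observer output copied from the CI log above.
    line = ('PyTorchObserver {"prompt_tokens":15,"generated_tokens":65,'
            '"model_load_start_ms":1765320124550,"model_load_end_ms":1765320127516,'
            '"inference_start_ms":1765320152867,"inference_end_ms":1765320178119,'
            '"prompt_eval_end_ms":1765320153334,"first_token_ms":1765320153334,'
            '"aggregate_sampling_time_ms":19,"SCALING_FACTOR_UNITS_PER_SECOND":1000}')

    stats = json.loads(line.split("PyTorchObserver ", 1)[1])
    decode_s = (stats["inference_end_ms"] - stats["prompt_eval_end_ms"]) / 1000
    print(f"{stats['generated_tokens'] / decode_s:.2f} tokens/s")  # ~2.62 tokens/s

That works out to roughly 2.6 generated tokens per second on the CI runner, a useful baseline for the quantized follow-up mentioned in the TODO.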

@pytorch-bot (bot) commented Dec 9, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16161

Note: Links to docs will display an error until the docs builds have been completed.

⏳ No Failures, 191 Pending

As of commit 84d40b3 with merge base 56e131b:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the "CLA Signed" label (managed by the Facebook bot; authors must sign the CLA before a PR can be reviewed) on Dec 9, 2025.
@lucylq force-pushed the lfq.qwen-lora-test branch from 799984a to 7e99437 on December 9, 2025 18:34.
@github-actions (bot) commented Dec 9, 2025

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@lucylq force-pushed the lfq.qwen-lora-test branch 2 times, most recently from b96f71b to 36c008f, on December 9, 2025 19:44.
@lucylq requested a review from larryliu0820 on December 9, 2025 19:53.
@lucylq force-pushed the lfq.qwen-lora-test branch 2 times, most recently from 3efa4c9 to d177062, on December 9, 2025 21:25.
@lucylq marked this pull request as ready for review on December 9, 2025 21:25.
@lucylq requested a review from jackzhxng as a code owner on December 9, 2025 21:25.
@lucylq force-pushed the lfq.qwen-lora-test branch 3 times, most recently from e4096f8 to 3c08987, on December 9, 2025 22:48.
source "$(dirname "${BASH_SOURCE[0]}")/utils.sh"

cmake_install_executorch_libraries() {
echo "Installing libexecutorch.a, libextension_module.so, libportable_ops_lib.a"
Contributor: Remove this? Seems outdated.

Comment on lines 19 to 31
echo "Building llama runner"
pushd extension/llm/tokenizers
echo "Updating tokenizers submodule"
git submodule update --init
popd
dir="examples/models/llama"
retry cmake \
-DBUILD_TESTING=OFF \
-DCMAKE_INSTALL_PREFIX=cmake-out \
-DCMAKE_BUILD_TYPE=Release \
-Bcmake-out/${dir} \
${dir}
cmake --build cmake-out/${dir} -j9 --config Release
Contributor: Try make llama-cpu instead.

@lucylq force-pushed the lfq.qwen-lora-test branch from 3c08987 to 84d40b3 on December 10, 2025 19:47.
@lucylq merged commit c3a53f3 into main on Dec 10, 2025; 300 of 301 checks passed.
@lucylq deleted the lfq.qwen-lora-test branch on December 10, 2025 20:20.