
🦅 Phi2 Fine tune example #1030

Merged: 6 commits from jiapli/phi2_fine_tune into main on Mar 26, 2024
Conversation

@trajepl (Contributor) commented Mar 21, 2024

Describe your changes

  1. Add a Phi-2 fine-tuning example.
  2. Try to optimize the fine-tuned model (see the sketch below this list):
  • Need to set torch_dtype to float32 to avoid mismatched weights (adapters in bf16, but the base model in fp32).
  • Merge the adapters into the base model so the dynamo conversion works.
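A minimal sketch of those two points, assuming the standard transformers/peft APIs and a hypothetical adapter path (not the example's exact code):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model in float32 so the base and adapter weights end up in a single dtype.
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", torch_dtype=torch.float32, trust_remote_code=True
)

# Attach the fine-tuned LoRA adapters, then fold them into the base weights so
# the dynamo-based ONNX conversion sees one plain model instead of a PEFT wrapper.
model = PeftModel.from_pretrained(base, "path/to/finetuned-adapter")  # hypothetical adapter path
model = model.merge_and_unload()
```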

Checklist before requesting a review

  • Add unit tests for this change.
  • Make sure all tests can pass.
  • Update documents if necessary.
  • Lint and apply fixes to your code by running lintrunner -a
  • Is this a user-facing change? If yes, give a description of this change to be included in the release notes.
  • Is this PR including examples changes? If yes, please remember to update example documentation in a follow-up PR.

(Optional) Issue link

@jambayk (Contributor) left a comment:

Please make the adapter merge optional.

@jambayk jambayk self-requested a review March 22, 2024 14:21
@@ -35,7 +53,7 @@ Above commands will generate optimized models with given model_type and save the
Besides, for a better generation experience, this example also lets users use [Optimum](https://huggingface.co/docs/optimum/v1.2.1/en/onnxruntime/modeling_ort) to generate optimized models.
Then users can call `model.generate` easily to run inference with the optimized model.
```bash
# optimum optimization
```
Contributor:

If we don't support it, should we remove this?

Contributor Author (trajepl):

For models that have not been fine-tuned, Optimum works well.
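For illustration, a hedged sketch of that Optimum-based generation flow; the output directory name is hypothetical and not taken from this example:

```python
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer

# "phi2_optimized" is a hypothetical directory holding the optimized ONNX model.
model = ORTModelForCausalLM.from_pretrained("phi2_optimized")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

# Run generation through the ONNX Runtime-backed model just like a regular transformers model.
inputs = tokenizer("Write a haiku about spring.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```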

if args.finetune_method:
    pass_flows[0].append(args.finetune_method)
    template_json["systems"]["local_system"]["config"]["accelerators"][0]["device"] = "gpu"
    # torch fine tuning does not require execution provider, just set it to CUDAExecutionProvider
Contributor:

I thought the EP is no longer mandatory after Mike's PR got merged. Is it still required?

Contributor:

If we don't provide an EP, it would loop over the installed EPs, which is not what we want.
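To illustrate the point, a minimal sketch of pinning a single device and EP in the workflow config so no looping happens; the "execution_providers" key name is an assumption about the config schema, not taken from this PR:

```python
# Sketch only: pin one accelerator and one EP explicitly so the workflow does not
# enumerate every execution provider installed in the environment.
template_json["systems"]["local_system"]["config"]["accelerators"][0] = {
    "device": "gpu",
    "execution_providers": ["CUDAExecutionProvider"],  # assumed key name
}
```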

Contributor:

@jambayk will "loop over the installed EPs" still be executed even if we are not running any pass that needs onnxruntime, like lora/snpe/openvino?

Contributor:

Yes, we haven't made the changes that would skip this for workflows with no ORT-targeting passes.

Mike's PR didn't really change any behavior of the workflows. It only updated the configs to collect the hardware/EP-related options together.

Contributor Author (trajepl):

Yes, but the qlora/snpe/openvino passes are EP-agnostic, so even with "loop over the installed EPs", each pass only runs once.

Contributor:

The cache does take care of the rerun, but the looping behavior and the multiple footprints/outputs for workflows with no ONNX models are still not ideal. That's what would be good to improve.

@@ -59,9 +59,17 @@ def get_args(raw_args):
parser.add_argument(
    "--model_type",
Contributor:

What if model_type and finetune_method are None?

Contributor Author (trajepl):

Fixed by raising an error.
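A minimal sketch of that validation (not necessarily the PR's exact wording):

```python
# Fail fast when neither a model type nor a fine-tuning method is given.
if args.model_type is None and args.finetune_method is None:
    raise ValueError("Either --model_type or --finetune_method must be provided.")
```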

@jambayk jambayk dismissed their stale review March 23, 2024 00:03

Fulfilled

@trajepl trajepl merged commit e4f3eee into main Mar 26, 2024
33 checks passed
@trajepl trajepl deleted the jiapli/phi2_fine_tune branch March 26, 2024 02:55