Update olive configs to match new format #114
Conversation
"accelerators": [ | ||
{ | ||
"device": "gpu", | ||
"execution_providers": ["CPUExecutionProvider"] |
Is this the right place for this?
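For context, here is a hedged sketch of where I believe the newer Olive config format expects this, with `execution_providers` nested under each accelerator entry of the target system (the `local_system` name and `CUDAExecutionProvider` value are illustrative, not taken from this PR):

```json
"systems": {
    "local_system": {
        "type": "LocalSystem",
        "config": {
            "accelerators": [
                {
                    "device": "gpu",
                    "execution_providers": ["CUDAExecutionProvider"]
                }
            ]
        }
    }
}
```

If that reading is right, pairing `"device": "gpu"` with `CPUExecutionProvider` in the snippet above may also be worth a second look.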
"accelerators": ["gpu"] | ||
} | ||
}, | ||
"execution_providers": ["CPUExecutionProvider"], |
I wasn't sure where to add this property; should it stay here in this section?
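For comparison, my understanding of the older layout (a hedged sketch, not verified against every template) is that `execution_providers` sat at the engine level rather than inside the system's accelerator list:

```json
"engine": {
    "execution_providers": ["CPUExecutionProvider"]
}
```

If the new format moves it under the accelerator entries, keeping a copy here as well would be redundant.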
```diff
@@ -2,16 +2,46 @@
 "input_model": {
     "type": "PyTorchModel",
     "config": {
+        "model_script": "finetuning/qlora_user_script.py",
+        "io_config": "get_merged_decoder_with_past_io_config",
+        "dummy_inputs_func": "get_merged_decoder_with_past_dummy_inputs",
```
These 3 lines are the same across all files. The model_script property seems necessary, but are the other 2 (io_config and dummy_inputs_func) needed?
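If it turns out the script hooks aren't required, my understanding (a hedged sketch, field values illustrative) is that `io_config` can also be supplied inline as a dict, which would leave only `model_script` pointing at the shared file:

```json
"input_model": {
    "type": "PyTorchModel",
    "config": {
        "model_script": "finetuning/qlora_user_script.py",
        "io_config": {
            "input_names": ["input_ids", "attention_mask"],
            "output_names": ["logits"],
            "dynamic_axes": {
                "input_ids": { "0": "batch_size", "1": "sequence_length" }
            }
        }
    }
}
```

That said, `dummy_inputs_func` is typically what the ONNX conversion pass uses to trace the model, so it may still be needed wherever a conversion pass runs.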
```diff
@@ -50,7 +50,7 @@
 {
     "providerType": "HuggingFace",
     "modelId": "microsoft/phi-2",
-    "revision": "d3186761bf5c4409f7679359284066c25ab668ee"
+    "revision": "main"
```
Not sure if we still need this change?
I tested these locally by pointing my extension to pull the templates from my branch.
Successfully tested local finetuning for:
✅ phi-2
✅ phi-1-5
✅ phi-3
✅ zephyr-7b-beta
✅ llama-2-7b
Could not test:
❌ mistral-7b
I was not able to test mistral-7b because I don't have access permission to the Llama2 models (I tried signing up on the ONNX sign-up page).