First refactor train #3
Conversation
 @dataclass
-class lora_config:
+class LoraConfig:
Dataclass names should be CamelCase per Python naming conventions. I want to add a black formatter to this repo going forward, and it would fail without this change. To handle name resolution with HF peft, I have renamed the file to peft_config; in code we refer to the class as peft_config.LoraConfig.
If there is a need to further distinguish the two in the future, we can rename ours to CustomLoraConfig.
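For reference, a minimal sketch of the renamed module, using the fields shown later in this PR's diff (treat the exact file contents as illustrative):

# peft_config.py -- sketch; fields mirror the ones in this PR's diff
from dataclasses import dataclass, field
from typing import List

@dataclass
class LoraConfig:
    r: int = 8
    lora_alpha: int = 32
    target_modules: List[str] = field(default_factory=lambda: ["q_proj", "v_proj"])
    bias: str = "none"  # annotated here so it is a real dataclass field

Call sites then use the module-qualified name, e.g. peft_config.LoraConfig(r=16), which keeps it visually distinct from HF peft's own LoraConfig.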
@@ -30,7 +30,6 @@ class DataArguments:

 @dataclass
 class TrainingArguments(transformers.TrainingArguments):
-    peft_method: str = "lora"  # None, pt
I got rid of this. Users can pass the relevant peft config object to train(), and it will be passed directly to the trainer.
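A hypothetical call site illustrating the new flow (the argument names and import path here are assumptions, not the exact API):

from tuning.config import peft_config  # hypothetical import path

# build a tuning config and hand it straight to train();
# model_args / data_args / training_args are the usual argument dataclasses
lora = peft_config.LoraConfig(r=8, lora_alpha=32)
train(model_args, data_args, training_args, peft_config=lora)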
 r: int = 8
 lora_alpha: int = 32
 target_modules: List[str] = field(default_factory=lambda: ["q_proj", "v_proj"])
 bias = "none"
-task_type: str = "CAUSAL_LM"
Since the repo only supports causal LMs, we need not expose task_type to the user as an argument.
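An illustrative sketch of pinning it internally when building the HF config instead (not the exact code from this PR):

from peft import LoraConfig as HFLoraConfig

# task_type is no longer user-facing; hardcoded since only causal LMs are supported
hf_config = HFLoraConfig(r=8, lora_alpha=32, task_type="CAUSAL_LM")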
-    config = prompt_tuning_config()
-    update_config(config, **kwargs)
-    peft_config = PromptTuningConfig(**asdict(config))
+def get_hf_peft_config(task_type, tuning_config):
Since train() now accepts the peft config objects, it makes sense to use those to get the corresponding HF peft config. The earlier functionality of passing all kwargs has been moved to a create_tuning_config utility, which can be combined with get_hf_peft_config if needed.
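A sketch of how the two utilities might compose; the bodies below are assumptions based on this description, and peft_config / update_config refer to the renamed module and helper from this PR (import paths are hypothetical):

from dataclasses import asdict
from peft import LoraConfig as HFLoraConfig, PromptTuningConfig as HFPromptTuningConfig
from tuning.config import peft_config  # hypothetical import path
from tuning.utils.config_utils import update_config  # hypothetical import path

def create_tuning_config(peft_method, **kwargs):
    # replaces the old kwargs path: pick a dataclass, then override its fields
    config = (peft_config.LoraConfig() if peft_method == "lora"
              else peft_config.PromptTuningConfig())
    update_config(config, **kwargs)
    return config

def get_hf_peft_config(task_type, tuning_config):
    # map our dataclass onto the corresponding HF peft config
    if isinstance(tuning_config, peft_config.LoraConfig):
        return HFLoraConfig(task_type=task_type, **asdict(tuning_config))
    return HFPromptTuningConfig(task_type=task_type, **asdict(tuning_config))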
lgtm
As part of the first refactor, I have created a train() function that accepts predefined dataclasses as arguments instead of taking all possible parameters.
This was done for the following reasons:
The main() function reads all arguments from the command line and passes the relevant params to train(), serving as an example.
I will move main() to an example script going forward; we can continue to call it with command-line arguments, or it can serve as a reference for users who want to call train() directly (see the sketch below).
Usage of the script has not changed in this PR. All code changes are structural, and I verified that pt, lora, and ft work the same way as they do on the main branch.
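For reference, a sketch of the direct-call path that main() exemplifies (the exact class names and module layout are assumptions):

import transformers
from tuning import sft_trainer  # hypothetical module containing train()
from tuning.config import configs, peft_config  # hypothetical module layout

# parse command-line flags into the predefined dataclasses...
parser = transformers.HfArgumentParser(
    (configs.ModelArguments, configs.DataArguments, configs.TrainingArguments)
)
model_args, data_args, training_args = parser.parse_args_into_dataclasses()

# ...and forward them, plus an optional tuning config, to train()
sft_trainer.train(model_args, data_args, training_args,
                  peft_config=peft_config.LoraConfig())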