LoRA implementation in finetuning and evaluation #638
base: dev
Conversation
@anwai98 This is the script I used for finetuning on covid_if data. I refer to this one in the data I sent as 'my impl'.
joint_model_params.append(params)

optimizer = torch.optim.Adam(joint_model_params, lr=1e-5)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min", factor=0.9, patience=10)
@anwai98 The patience for the learning rate scheduler was set to 10 here, but to 3 in the 'Resource Efficient Impl' - could that be the reason for the performance difference?
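For comparison, a minimal sketch of the two scheduler settings side by side; the stand-in model is only there to make the snippet self-contained, and the patience=3 value for the 'Resource Efficient Impl' is taken from this comment, not read from that script:

```python
import torch

# Hypothetical stand-in model so the snippet runs on its own; in the actual
# script, joint_model_params collects the parameter groups of the joint model.
model = torch.nn.Linear(8, 8)
joint_model_params = list(model.parameters())

optimizer = torch.optim.Adam(joint_model_params, lr=1e-5)

# Setting used in this script: the learning rate only drops after 10 epochs
# without improvement of the monitored validation metric.
scheduler_here = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.9, patience=10
)

# Setting described for the 'Resource Efficient Impl' (as stated in this
# comment): patience=3 decays the learning rate much earlier, so training
# spends more time at lower learning rates.
scheduler_resource_efficient = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.9, patience=3
)
```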
There are a few more changes. We also had a lower learning rate and a different optimizer (https://github.com/caroteu/micro-sam/blob/dcd6ecdc5ef600e07670db27ccfba54e81f156f7/finetuning/specialists/resource-efficient/covid_if_finetuning.py#L135-L136).
In addition, the 100 epochs in the other experiments probably translate to a bit more than 10k iterations here.
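As a rough illustration of that epoch-to-iteration conversion (the batches-per-epoch value below is a made-up example, not taken from the actual covid_if dataloader):

```python
# Back-of-the-envelope: epochs * batches_per_epoch ~= total optimizer steps.
batches_per_epoch = 100  # hypothetical value; depends on dataset size and batch size
epochs = 100             # training budget of the other experiments
total_iterations = epochs * batches_per_epoch
print(total_iterations)  # 10000 -> in the same ballpark as the ~10k iterations here
```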
Potentially that could lead to a bit of a performance difference (unless it's a very severe drop in performance, in which case that would be a different discussion).
The optimizer is actually the same - I ran this here with Adam and lr=1e-5 too, to make it consistent with the other workflow. The results I sent you both use Adam and lr=1e-5.