
Handling of additional trainable params from Hub utils #48

Closed
sayakpaul opened this issue Jan 30, 2023 · 5 comments · Fixed by #50

Comments

@sayakpaul
Member

@younesbelkada

Now that #39 has been merged, I wanted to focus on #44 and #45.

As you can see here, after wrapping a base model with LoraModel for image classification fine-tuning, I still have to do the following:

# Manually unfreeze the classification head so it is trained alongside the LoRA layers.
for param in lora_model.classifier.parameters():
    param.requires_grad = True

I am aware that if we fixed the internal task types of LoraConfig, we wouldn't need to do this. On the other hand, this goes to show the flexibility of PEFT, doesn't it?

In this case, would the Hub utilities introduced in #39 take care of the additional trainable parameters?
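
For reference, here is a quick sanity check (plain PyTorch, assuming lora_model is the wrapped model from above) to confirm which parameters actually end up trainable:

# Sanity check: count trainable vs. total parameters to verify that both
# the LoRA layers and the manually unfrozen classifier receive gradients.
trainable = sum(p.numel() for p in lora_model.parameters() if p.requires_grad)
total = sum(p.numel() for p in lora_model.parameters())
print(f"trainable params: {trainable} || all params: {total} || trainable%: {100 * trainable / total:.4f}")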

@pacman100
Contributor

Hello Sayak, that case isn't handled yet. It would require changing LoraConfig to accept a modules_to_save param in which users can specify additional trainable layers apart from the LoRA layers. I will raise a PR for this shortly.
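
For illustration, a minimal sketch of what the proposed param could look like (the exact signature is an assumption until the PR lands, and the target_modules names are placeholders for an image-classification backbone):

from peft import LoraConfig, get_peft_model

config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["query", "value"],  # placeholder attention module names
    lora_dropout=0.1,
    modules_to_save=["classifier"],  # proposed: extra layers kept trainable besides the LoRA layers
)
# base_model is assumed to be a pretrained image-classification model.
lora_model = get_peft_model(base_model, config)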

@sayakpaul
Member Author

Yes, let's do that!

@pacman100
Contributor

Hello, the above PR should resolve this issue.

@sayakpaul
Member Author

Let me try to rework my example and see if everything works as expected.

@pacman100
Contributor

I have shared the example notebook with you offline.
