
where is the code of freezing the blocks that you don't want to finetune? #70

Closed

A-zhudong opened this issue Dec 19, 2021 · 6 comments
@A-zhudong

Hello, thanks for your implementation.

I have read the main part of your code, but I didn't find the code that controls partial fine-tuning. Could you please tell me where that part is: in "run_class_finetuning.py", "modeling_finetune.py", or somewhere else?

Waiting for your reply, thank you.

@SUNJIMENG

@A-zhudong I have the same question as you.
Waiting for a reply.

@SoonFa

SoonFa commented Dec 21, 2021

I have the same question as you.
Waiting for a reply.

@pengzhiliang
Owner

Hello, sorry for the late reply!

As mentioned in BEiT and MAE, in the end-to-end fine-tuning procedure, no blocks/layers need to be frozen.
But the LR of each block is different; you can find it here
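For context, this is the usual layer-wise LR decay scheme for ViT fine-tuning. A minimal sketch of how such parameter groups can be built is below; the function name, the `blocks`/`patch_embed` attribute names, and the hyperparameters are illustrative assumptions, not the repository's exact API:

```python
import torch
import torch.nn as nn


def param_groups_lrd(model: nn.Module, num_layers: int, base_lr: float,
                     layer_decay: float = 0.75, weight_decay: float = 0.05):
    """Build optimizer parameter groups where deeper blocks get larger LRs.

    Block i gets base_lr * layer_decay ** (num_layers + 1 - layer_id), so early
    layers are updated more gently than the head. Attribute names are assumed
    to follow a timm-style ViT (patch_embed, pos_embed, cls_token, blocks.k.*).
    """
    groups = {}
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        # Assign a depth id: embeddings -> 0, block k -> k + 1, everything else (head) -> last.
        if name.startswith("blocks."):
            layer_id = int(name.split(".")[1]) + 1
        elif name.startswith(("patch_embed", "pos_embed", "cls_token")):
            layer_id = 0
        else:
            layer_id = num_layers + 1
        scale = layer_decay ** (num_layers + 1 - layer_id)
        key = f"layer_{layer_id}"
        if key not in groups:
            groups[key] = {"params": [], "lr": base_lr * scale, "weight_decay": weight_decay}
        groups[key]["params"].append(param)
    return list(groups.values())


# Usage (illustrative): optimizer = torch.optim.AdamW(param_groups_lrd(model, num_layers=12, base_lr=1e-3))
```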

@A-zhudong
Author

> Hello, sorry for the late reply!
>
> As mentioned in BEiT and MAE, in the end-to-end fine-tuning procedure, no blocks/layers need to be frozen. But the LR of each block is different; you can find it here

Thanks for your reply.

But it seems that they did freeze some blocks and tested the effect, as mentioned in "Masked Autoencoders Are Scalable Vision Learners".

Maybe it would be better if we implemented that part?
[screenshot: partial fine-tuning results from the MAE paper]

@pengzhiliang
Owner

Oh, I am sorry that I forgot this.

You just need to freeze the blocks you want in the init function.
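For anyone else looking for this, a minimal sketch of the pattern, under the assumption that the model exposes timm-style `patch_embed` and `blocks` attributes (adjust the names to the actual model); the helper name `freeze_blocks` is illustrative:

```python
import torch.nn as nn


def freeze_blocks(model: nn.Module, num_frozen: int) -> None:
    """Freeze the patch embedding and the first `num_frozen` transformer blocks.

    The same loop can be placed directly in the model's __init__ after the
    blocks are created. Only the remaining blocks and the head stay trainable.
    """
    for p in model.patch_embed.parameters():
        p.requires_grad = False
    for blk in model.blocks[:num_frozen]:
        for p in blk.parameters():
            p.requires_grad = False


# Afterwards, pass only the trainable parameters to the optimizer, e.g.:
# optimizer = torch.optim.AdamW(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3)
```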

@A-zhudong
Author

OK, thank you.
