Where is the code for freezing the blocks that you don't want to finetune? #70
Comments
@A-zhudong I have the same question as you.
Hello, sorry for the late reply! As mentioned in BEiT and MAE, in the end-to-end fine-tuning procedure, no blocks/layers need to be frozen.
Thanks for your reply. But it seems that they did freeze some blocks and tested the effect, as mentioned in "Masked Autoencoders Are Scalable Vision Learners".
Oh, I am sorry that I forgot this. You just need to freeze the blocks you want to in the init function; see the sketch below.
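A minimal sketch of that idea, assuming a timm-style ViT with `patch_embed`, `pos_embed`, `blocks`, `norm`, and `head` attributes (the exact attribute names in modeling_finetune.py may differ), could look like this:

```python
import torch.nn as nn

def freeze_for_partial_finetuning(model: nn.Module, num_trainable_blocks: int) -> None:
    """Freeze everything except the last `num_trainable_blocks` transformer
    blocks and the classification head (partial fine-tuning as in the MAE paper).

    Assumes a timm-style ViT with `patch_embed`, `pos_embed`, `blocks`, `norm`,
    and `head`; adapt the attribute names to the actual model class.
    """
    # Start by freezing every parameter.
    for p in model.parameters():
        p.requires_grad = False

    # Unfreeze only the last N transformer blocks.
    for blk in model.blocks[len(model.blocks) - num_trainable_blocks:]:
        for p in blk.parameters():
            p.requires_grad = True

    # Keep the final norm and the classification head trainable.
    for module in (model.norm, model.head):
        for p in module.parameters():
            p.requires_grad = True

# Usage: call this right after building the model and before creating the
# optimizer, then pass only the trainable parameters to the optimizer, e.g.:
#   freeze_for_partial_finetuning(model, num_trainable_blocks=4)
#   optimizer = torch.optim.AdamW(
#       [p for p in model.parameters() if p.requires_grad], lr=1e-3)
```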
OK, thank you.
Hello, thanks for your implementation.
I have read the main part of your code, but I didn't find the code that controls the partial fine-tuning. Could you please tell me where that part is, in "run_class_finetuning.py", "modeling_finetune.py", or anywhere else?
Waiting for your reply, thank you.