
Question about "init_checkpoint" and "output_dir" checkpoint #21

Closed
shuxiaobo opened this issue Dec 16, 2019 · 9 comments
@shuxiaobo

Dear Author @guotong1988:

I have a question about the difference between "init_checkpoint" and the model checkpoint saved in "output_dir". When I want to continue training a model that was fine-tuned on the BERT model, I get confused about "init_checkpoint" and "output_dir": I found that the code initializes the model from "init_checkpoint" and then restores the model from "output_dir".
Could you please help me figure out the difference between them?
Thanks very much!

@guotong1988
Owner

guotong1988 commented Dec 16, 2019

It is a good question; you should be careful with them. The one thing to keep in mind is that, when you build on this project, you should make sure the model is loaded correctly.

In fact I do not know the answer exactly. The following may not be entirely right, but it should be helpful:
https://blog.csdn.net/guotong1988/article/details/100539565

@guotong1988
Owner

guotong1988 commented Dec 16, 2019

In summary,
I guess init_checkpoint is for initializing the model at the beginning of training,
and output_dir is for predicting.
You should delete output_dir before training and drop init_checkpoint for predicting, in order to make everything exactly right.
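
To make the interaction concrete, here is a minimal sketch in the TensorFlow 1.x Estimator style that BERT-based projects typically use (create_model is a hypothetical placeholder, not a function from this repo). The key point: init_checkpoint only seeds variable values before training starts, while the Estimator automatically restores from any checkpoint already sitting in its model_dir (the output_dir), and that restore overrides the seeded values.

```python
import tensorflow as tf

def model_fn(features, labels, mode, params):
    # create_model stands in for the project's real network builder.
    loss = create_model(features, labels)

    # init_checkpoint only seeds variable values once, when the graph is built.
    if params.get("init_checkpoint"):
        tvars = tf.trainable_variables()
        assignment_map = {v.op.name: v.op.name for v in tvars}
        tf.train.init_from_checkpoint(params["init_checkpoint"], assignment_map)

    train_op = tf.train.AdamOptimizer(1e-5).minimize(
        loss, global_step=tf.train.get_or_create_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

# model_dir is the "output_dir": if it already contains a checkpoint, the Estimator
# restores from it when the session is created, overriding the values seeded above.
estimator = tf.estimator.Estimator(
    model_fn=model_fn,
    model_dir="output_dir",
    params={"init_checkpoint": "google_bert_model/bert_model.ckpt"})
```

This is why clearing output_dir before a fresh training run matters: a leftover checkpoint there silently wins over init_checkpoint.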

@shuxiaobo
Author

Thx~, I see; that helps me understand how this works.
But is there a proper way to restore an interrupted model and continue training (e.g. changing num_train_steps = 300000 to num_train_steps = 600000)?
When I fill in both parameters, with init_checkpoint = 'google_bert_model.ckpt' and output_dir = 'my_finetuing_bertmodel.ckpt', I find that the loss is not reset but keeps going, yet it climbs higher than before step by step. I am not sure whether this is right. Do you have any suggestions?

@guotong1988
Owner

It seems you need to append something like "-300000" to it.
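
For context, TensorFlow writes checkpoints during training as model.ckpt-<global_step>, which is where a suffix like "-300000" comes from. A small sketch (the "output_dir" path is illustrative):

```python
import tensorflow as tf

# Resolves the full, step-suffixed path of the newest checkpoint,
# e.g. "output_dir/model.ckpt-300000".
latest = tf.train.latest_checkpoint("output_dir")
print(latest)

# One way to resume an interrupted run is to pass this full path as
# init_checkpoint and raise num_train_steps (e.g. 300000 -> 600000), so the
# restored global step is still below the new limit.
```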

@guotong1988
Owner

All I can say is: try it. I remember I ran into this problem too, but I did not dig into it.

@shuxiaobo
Author

Thanks~~, it works.
Following what you said, my overall understanding is:

  1. init_checkpoint is used to determine which variables can be restored from the checkpoint, i.e. the ones that later show INIT_FROM_CKPT in the log.

  2. The model path inside output_dir can also be used as the init_checkpoint parameter. If both parameters are set, the code should first use init_checkpoint to determine which variables can be restored from the checkpoint, and then, during the later restore, the variables marked INIT_FROM_CKPT are restored from the model in output_dir.

I am not sure whether this understanding is correct.
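
For reference, the INIT_FROM_CKPT markers mentioned above come from a logging loop roughly like the one below, as in BERT-style run scripts (a sketch; modeling.get_assignment_map_from_checkpoint is the helper shipped with the BERT code, and the init_checkpoint path is illustrative):

```python
import tensorflow as tf
import modeling  # the BERT-style modeling module bundled with such projects

init_checkpoint = "google_bert_model/bert_model.ckpt"  # or output_dir/model.ckpt-300000
tvars = tf.trainable_variables()
initialized_variable_names = {}
if init_checkpoint:
    # Decide which graph variables have a matching entry in init_checkpoint.
    (assignment_map, initialized_variable_names
     ) = modeling.get_assignment_map_from_checkpoint(tvars, init_checkpoint)
    # Seed those variables' initial values from the checkpoint.
    tf.train.init_from_checkpoint(init_checkpoint, assignment_map)

# Matched variables are logged with the *INIT_FROM_CKPT* marker.
for var in tvars:
    init_string = ""
    if var.name in initialized_variable_names:
        init_string = ", *INIT_FROM_CKPT*"
    tf.logging.info("  name = %s, shape = %s%s", var.name, var.shape, init_string)
```

Note that this loop only reports which variables matched init_checkpoint; whether their final values come from init_checkpoint or from a checkpoint already in output_dir is decided later, when the Estimator restores from output_dir.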

@guotong1988
Owner

I do not know; go by what you see in practice.
"It works"? What exactly works?

@shuxiaobo
Author

OK, it does not actually work.... Thx~
