
Offset mean is larger than 100 in the PCD Align module's DCNv2. Could you give me some advice to minimize it? #16

Open
huihuiustc opened this issue Jun 4, 2019 · 9 comments


@huihuiustc

No description provided.

@DLwbm123

DLwbm123 commented Jun 5, 2019

Same here.

@xinntao
Owner

xinntao commented Jun 5, 2019

Yes, indeed, we also found that training with DCN is unstable.
We will write down the issues we met during the competition in this repo later; unstable training is one of them.
There are still a lot of things that can be improved in EDVR, and we are exploring some of them.

During the competition, we trained the large model from smaller ones and used a smaller learning rate for the DCN. Even with these tricks, we occasionally ran into over-large offsets, and we simply resumed training from a normal checkpoint when that happened.
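
A minimal sketch of the smaller-LR-for-DCN trick, using a toy model where the DCN parameters live under a `pcd_align` name (the real EDVR attribute names may differ); the offset check at the end mirrors the resume-from-a-normal-checkpoint trick:

```python
import torch
import torch.nn as nn

# Toy stand-in for EDVR: a 'pcd_align' part (the DCN) plus the rest.
# In practice, match the real parameter names via named_parameters().
model = nn.ModuleDict({
    'pcd_align':      nn.Conv2d(64, 64, 3, padding=1),
    'reconstruction': nn.Conv2d(64, 64, 3, padding=1),
})

dcn_params, other_params = [], []
for name, param in model.named_parameters():
    (dcn_params if name.startswith('pcd_align') else other_params).append(param)

# Smaller learning rate for the DCN parameters, normal rate for the rest.
optimizer = torch.optim.Adam([
    {'params': dcn_params,   'lr': 1e-5},
    {'params': other_params, 'lr': 4e-4},
])

# Crude stability check for the offsets this issue is about: if the mean
# absolute offset explodes, resume from the last normal checkpoint.
def offset_is_normal(offset: torch.Tensor, threshold: float = 100.0) -> bool:
    return offset.abs().mean().item() < threshold
```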

@huihuiustc
Author

What do you mean by training the large model from smaller ones? Is it this line from your paper: "We initialize deeper networks by parameters from shallower ones for faster convergence"?

For instance: we use kaiming_normal to initialize all parameters, then freeze the TSA and Reconstruction modules and set requires_grad only in the PCD Align and PreDeblur modules.

Thanks for your attention.
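
A minimal sketch of the freezing scheme described above, with hypothetical submodule names standing in for the real EDVR modules:

```python
import torch.nn as nn

# Hypothetical EDVR-like container; the real attribute names will differ.
model = nn.ModuleDict({
    'pre_deblur':     nn.Conv2d(3, 64, 3, padding=1),
    'pcd_align':      nn.Conv2d(64, 64, 3, padding=1),
    'tsa_fusion':     nn.Conv2d(64, 64, 3, padding=1),
    'reconstruction': nn.Conv2d(64, 3, 3, padding=1),
})

# Freeze TSA and Reconstruction; train only PCD Align and PreDeblur.
for name, module in model.items():
    trainable = name in ('pcd_align', 'pre_deblur')
    for p in module.parameters():
        p.requires_grad_(trainable)
```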

@xinntao
Owner

xinntao commented Jun 7, 2019

  1. Yes, we first train shallower ones (a sketch of this hand-off follows below).
  2. We will release some models and also the training code to train from scratch, but their performance is not as good as the competition models'.
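
A minimal sketch of point 1, assuming the shallow and deep models share parameter names for the common blocks, so a non-strict state-dict load copies everything that matches:

```python
import torch
import torch.nn as nn

def make_model(num_back_blocks: int) -> nn.Module:
    # Toy stand-in: back blocks share names across the two depths.
    layers = {'head': nn.Conv2d(3, 64, 3, padding=1)}
    for i in range(num_back_blocks):
        layers[f'back_{i}'] = nn.Conv2d(64, 64, 3, padding=1)
    return nn.ModuleDict(layers)

shallow = make_model(10)   # train this one first
deep = make_model(40)

# strict=False copies every parameter whose name and shape match; the
# 30 extra back blocks in `deep` keep their fresh initialization.
missing, unexpected = deep.load_state_dict(shallow.state_dict(), strict=False)
print('newly initialized:', missing)   # back_10 ... back_39 parameters
```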

@huihuiustc
Author

Thanks for the reply; this is really impressive work and research.

We are trying to first replace the deformable convolutions with normal convolutions and train an initial model, then use that model to train the network.
After that, we freeze some of the model blocks and continue training.

@xinntao
Owner

xinntao commented Jun 10, 2019

Actually, DCN is relatively important, so you can first train a small network with DCN (w/o TSA).
We are running these experiments and will release them as soon as possible.

@splinter21

splinter21 commented Jun 11, 2019

1. "We trained the large model from smaller ones and used a smaller learning rate for dcn."
Do you mean this (for example):
Step 1: 5 front / 10 back blocks with DCN + TSA, lr = 1e-4 (model S(hallow)).
Step 2: 5 front / 40 back blocks with DCN + TSA, lr(DCN) = 5e-5 (e.g.), lr_others = 1e-4, and the parameters of S, except the 30 extra back blocks, are copied to model D(eep).

2. "You can first train a small network with DCN (w/o TSA)."
Do you mean that only the DCN needs to be pretrained, and the parameters after the DCN are not needed (not useful for the deeper model)?
For example, I could train 5 front blocks with DCN, w/o TSA, and with a very shallow SR network after the DCN.
Then, once the DCN is pretrained, the parameters after the DCN can be discarded, and I can put whatever SR network I like after the DCN?

3. This pretrained-DCN trick can't give the final model D a deeper or wider DCN module (I mean, changed feature extraction layers before the DCN) compared with model S, because the DCN parameters need to be copied. Is that right?

4. For the second step, there are two choices for the DCN: a smaller lr for the DCN, or freezing the DCN module. The second choice can save a lot of training time and GPU memory. Is it suitable? (See the sketch below.)
@xinntao
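
On question 4, a minimal sketch of the freezing option (hypothetical names again); handing the optimizer only the trainable parameters is where the memory saving comes from, since no gradients or Adam moment buffers are kept for the frozen DCN weights:

```python
import torch
import torch.nn as nn

model = nn.ModuleDict({
    'pcd_align': nn.Conv2d(64, 64, 3, padding=1),   # pretrained DCN part
    'back_rbs':  nn.Conv2d(64, 64, 3, padding=1),   # rest of the network
})

# Option 2: freeze the DCN module entirely.
for p in model['pcd_align'].parameters():
    p.requires_grad_(False)

# Only trainable parameters go to the optimizer.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
```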

@xinntao
Owner

xinntao commented Jun 14, 2019

We have updated the training code and configs. We provide training scripts for the model with Channel = 128 and Back RB = 10.
The learning rate scheme is different from that in the competition, but it is more effective.

  1. Train with the config train_EDVR_woTSA_M.yml.
  2. Then train with the config train_EDVR_M.yml.

You can try this; a sketch of the stage-1 to stage-2 hand-off follows.
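
A minimal sketch of how stage 2 can pick up the stage-1 weights, assuming the woTSA checkpoint and the full model share parameter names everywhere except the TSA fusion part (the checkpoint path and model instance below are hypothetical):

```python
import torch
import torch.nn as nn

def init_from_stage1(full_model: nn.Module, ckpt_path: str) -> None:
    """Load stage-1 (woTSA) weights; the TSA part stays randomly initialized."""
    state = torch.load(ckpt_path, map_location='cpu')
    missing, unexpected = full_model.load_state_dict(state, strict=False)
    print('randomly initialized (expect TSA only):', missing)

# Usage (hypothetical path and model instance):
# init_from_stage1(edvr_model, 'experiments/EDVR_woTSA_M/models/latest_G.pth')
```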

@tongjuntx

> Thanks for the reply; this is really impressive work and research.
> We are trying to first replace the deformable convolutions with normal convolutions and train an initial model, then use that model to train the network.
> After that, we freeze some of the model blocks and continue training.

Have you succeeded? How well does it work?
