about training #5

Open
newtreeaa opened this issue Sep 27, 2022 · 12 comments

@newtreeaa

Hi, how long does one full training run take? And how much can the number of training epochs be reduced without significantly hurting the results?

@vb000 (Owner) commented Sep 27, 2022

A full training run took about 48 hours on a machine with 4 V100 GPUs. I think you could train for about 1-1.5 days to get good results, but not quite the best.

@newtreeaa (Author)

> A full training run took about 48 hours on a machine with 4 V100 GPUs. I think you could train for about 1-1.5 days to get good results, but not quite the best.

@vb000 I use a machine with 4 RTX 3090 GPUs, but one training epoch takes about 2 hours, so a full training run would take about 200 hours. It runs much more slowly than yours; do you know the reason? In addition, my training set contains 83,876 videos. Are there that many videos in your training set?

@newtreeaa (Author)

@vb000 In your params.json, the batch size is 8. Since I am using a machine with 4 RTX 3090 GPUs, should I change the batch size to 32?

@vb000 (Owner) commented Sep 27, 2022

The training set had somewhere close to 64k videos. The exact link to the dataset is this.

No, we used a batch size of 8 with 4 GPUs. You could try a batch size of 32 for faster training, probably at a small cost in accuracy.
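For reference, if the batch size in params.json is the per-GPU DataLoader batch size (an assumption, not confirmed here), the effective batch size under PyTorch DistributedDataParallel is per-GPU batch × number of processes. A minimal sketch with a placeholder dataset, not this repo's code:

```python
# Hedged sketch: with DDP, the DataLoader batch_size is per process,
# so effective batch size = batch_size * world_size (e.g. 8 * 4 = 32).
# Placeholder dataset; launch with: torchrun --nproc_per_node=4 this_script.py
import torch
import torch.distributed as dist
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def build_loader(per_gpu_batch=8):
    dataset = TensorDataset(torch.randn(1024, 3, 64, 64))  # dummy data
    sampler = DistributedSampler(dataset)                   # shards data across GPUs
    return DataLoader(dataset, batch_size=per_gpu_batch, sampler=sampler,
                      num_workers=4, pin_memory=True)

if __name__ == "__main__":
    dist.init_process_group("nccl")          # uses env vars set by torchrun
    loader = build_loader(per_gpu_batch=8)   # effective batch = 8 * 4 = 32 on 4 GPUs
```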

@newtreeaa (Author)

> The training set had somewhere close to 64k videos. The exact link to the dataset is this.
>
> No, we used a batch size of 8 with 4 GPUs. You could try a batch size of 32 for faster training, probably at a small cost in accuracy.

@vb000 Hi, does your code use mixed precision or single precision?

@vb000 (Owner) commented Sep 27, 2022

It's single precision (float32).
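If training speed is the concern, mixed precision is one option; this is not what the repo does (it trains in float32), just a generic sketch of the standard torch.cuda.amp pattern:

```python
# Standard torch.cuda.amp pattern (generic example, not this repo's training loop).
import torch

scaler = torch.cuda.amp.GradScaler()

def train_step(model, batch, target, criterion, optimizer):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():          # forward pass in mixed precision
        loss = criterion(model(batch), target)
    scaler.scale(loss).backward()            # scaled backward to avoid fp16 underflow
    scaler.step(optimizer)
    scaler.update()
    return loss.item()
```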

@newtreeaa (Author) commented Sep 27, 2022

@vb000 I have some questions:

  1. The septuplet dataset consists of 91,701 7-frame sequences at a fixed resolution of 448 x 256, extracted from 39k selected video clips from Vimeo-90k, and its test set consists of 7,823 7-frame sequences. Why does your training set have somewhere close to 64k videos?
  2. Do you use the Vimeo-90k test set as the validation set during training?
  3. The number of epochs in your paper is set to 80, but in your code it is set to 100. Should I set it to 100 or 80?

Thank you in advance for your answers.

@vb000 (Owner) commented Sep 27, 2022

> 1. The septuplet dataset consists of 91,701 7-frame sequences at a fixed resolution of 448 x 256, extracted from 39k selected video clips from Vimeo-90k, and its test set consists of 7,823 7-frame sequences. Why does your training set have somewhere close to 64k videos?

The Vimeo-90k train list has 64,612 sequences; please use this link to get the dataset we used. (A small sketch for checking that count is at the end of this comment.)

> 2. Do you use the Vimeo-90k test set as the validation set during training?

No, because it only has 7-frame sequences. We used a validation set from the REDS dataset.

> 3. The number of epochs in your paper is set to 80, but in your code it is set to 100. Should I set it to 100 or 80?

100 in the script is the maximum number of epochs; the 80th epoch was the best-performing epoch based on validation metrics.
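As a quick sanity check on the download, one can count the entries in the septuplet train list. A minimal sketch, assuming the standard sep_trainlist.txt file shipped with the vimeo_septuplet release (the filename and path are assumptions, not taken from this repo):

```python
# Hedged sketch: count sequences listed in the Vimeo-90k septuplet train list.
# Assumes the standard "sep_trainlist.txt" (one "<clip>/<sequence>" path per line).
from pathlib import Path

train_list = Path("vimeo_septuplet/sep_trainlist.txt")
sequences = [line for line in train_list.read_text().splitlines() if line.strip()]
print(len(sequences))  # expected to print 64612 for the septuplet release
```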

@newtreeaa (Author)

> The Vimeo-90k train list has 64,612 sequences; please use this link to get the dataset we used.
>
> No, because it only has 7-frame sequences. We used a validation set from the REDS dataset.
>
> 100 in the script is the maximum number of epochs; the 80th epoch was the best-performing epoch based on validation metrics.

@vb000 Does the REDS dataset mean REDS4? REDS4 is a set of 4 1280x720 videos, each containing 100 frames.

@vb000 (Owner) commented Sep 27, 2022

Hi,

Refer to the footnote on page 6 of the following paper for the REDS train and val sets: https://openaccess.thecvf.com/content/CVPR2021/papers/Chan_BasicVSR_The_Search_for_Essential_Components_in_Video_Super-Resolution_and_CVPR_2021_paper.pdf

@newtreeaa (Author) commented Sep 28, 2022

> Refer to the footnote on page 6 of the following paper for the REDS train and val sets: https://openaccess.thecvf.com/content/CVPR2021/papers/Chan_BasicVSR_The_Search_for_Essential_Components_in_Video_Super-Resolution_and_CVPR_2021_paper.pdf

@vb000 Thank you very much for your reply. I still have some questions:

  1. I want to confirm again: is the validation set REDSval4?
  2. In addition, would it be convenient for you to provide a training log? I want to confirm whether my training process is normal.
  3. I want to know whether the input low-resolution video is converted to grayscale in advance, or whether the color low-resolution video is input and then read in grayscale mode in dataset.py?
  4. I also used 4 V100 GPUs for training, and the number of videos in the training set is 64k, but training one epoch takes 2 hours, which is much slower than yours. What is the reason? Do you have any suggestions?
  5. Lastly, would it be convenient for you to check your PCIe link status? Use the command lspci -vv to check LnkSta, as described in this link: https://unix.stackexchange.com/questions/393/how-to-check-how-many-lanes-are-used-by-the-pcie-card

@vb000 (Owner) commented Oct 31, 2022

Hi,

Sorry for the late reply. Responses inline.

> 1. I want to confirm again: is the validation set REDSval4?

Yes.

> 3. I want to know whether the input low-resolution video is converted to grayscale in advance, or whether the color low-resolution video is input and then read in grayscale mode in dataset.py?

Both modes work; we trained it using the latter approach: we convert the color LR frame to the Lab color space and provide only the L-channel to the model.
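A minimal sketch of that kind of conversion using OpenCV; the exact calls in dataset.py may differ, so treat this as an illustration rather than the repo's code:

```python
# Hedged illustration: extract the L-channel of a color LR frame via Lab.
# OpenCV reads images as BGR uint8; with uint8 input, cv2.COLOR_BGR2LAB
# returns L scaled to [0, 255]. Not necessarily how dataset.py does it.
import cv2
import numpy as np

def lr_frame_to_l_channel(path: str) -> np.ndarray:
    bgr = cv2.imread(path)                                 # HxWx3, uint8, BGR order
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)             # convert to Lab color space
    l_channel = lab[:, :, 0].astype(np.float32) / 255.0    # keep only L, scale to [0, 1]
    return l_channel                                       # HxW luminance input for the model
```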

> 2. In addition, would it be convenient for you to provide a training log? I want to confirm whether my training process is normal.
>
> 5. Lastly, would it be convenient for you to check your PCIe link status? Use the command lspci -vv to check LnkSta, as described in this link: https://unix.stackexchange.com/questions/393/how-to-check-how-many-lanes-are-used-by-the-pcie-card

We trained it on a cluster where various machines are used based on availability, so we currently do not have access to this data.

> 4. I also used 4 V100 GPUs for training, and the number of videos in the training set is 64k, but training one epoch takes 2 hours, which is much slower than yours. What is the reason? Do you have any suggestions?

I think that might be normal. I might have misquoted the runtimes, as I may have remembered them wrong. One suggestion: you might want to make sure data loading is not the bottleneck; a quick way to check is sketched below.
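One simple, generic way to check (plain PyTorch, not tied to this repo): time iterating the DataLoader by itself and compare that to the full step time.

```python
# Rough check for a data-loading bottleneck (generic PyTorch sketch).
# If iterating the loader alone takes nearly as long as a full training step loop,
# the GPUs are starved; try more num_workers, pin_memory=True, or faster storage.
import time
from torch.utils.data import DataLoader

def time_loader(dataset, batch_size=8, num_workers=4, n_batches=100):
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True,
                        num_workers=num_workers, pin_memory=True)
    start = time.time()
    for i, batch in enumerate(loader):
        if i >= n_batches:
            break
    return (time.time() - start) / n_batches  # seconds per batch, data loading only
```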
