
some problems on data augmentation #10

Open
jsczzzk opened this issue Jan 3, 2019 · 6 comments


jsczzzk commented Jan 3, 2019

Hi,
Thanks for your great work. While reading the code, I found that the data augmentations it uses are the following:
1. Horizontally flip 50% of images
2. Vertically flip 50% of images
3. Translate 50% of images by a value between -5 and +5 percent of the original size, on the x- and y-axis independently
4. Scale 50% of images by a factor between 95 and 105 percent of the original size
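For flow data the geometric items above have to be applied consistently to both frames and to the flow field itself. A rough sketch of what that could look like (a hypothetical helper with NumPy, not the repository's actual code; the scaling step is only noted in a comment because it needs interpolation):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_pair(img1, img2, flow):
    """Sketch of the four augmentations listed above, each applied with
    probability 0.5. Geometric changes must hit both frames and the flow."""
    h, w = img1.shape[:2]
    if rng.random() < 0.5:                     # 1. horizontal flip
        img1, img2 = img1[:, ::-1], img2[:, ::-1]
        flow = flow[:, ::-1].copy()
        flow[..., 0] *= -1                     # u-component changes sign
    if rng.random() < 0.5:                     # 2. vertical flip
        img1, img2 = img1[::-1], img2[::-1]
        flow = flow[::-1].copy()
        flow[..., 1] *= -1                     # v-component changes sign
    if rng.random() < 0.5:                     # 3. translate by up to ±5%,
        tx = int(rng.uniform(-0.05, 0.05) * w)  #    x and y drawn independently
        ty = int(rng.uniform(-0.05, 0.05) * h)
        img1 = np.roll(img1, (ty, tx), axis=(0, 1))
        img2 = np.roll(img2, (ty, tx), axis=(0, 1))
        flow = np.roll(flow, (ty, tx), axis=(0, 1))
    # 4. scaling by 95-105% would need interpolation (e.g. cv2.resize),
    #    and the flow vectors must be multiplied by the same scale factor.
    return img1, img2, flow
```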
However, in the original FlowNet there are more kinds of augmentation, and they are a little different:
1. Translate all images by a value between -20 and +20 percent of the original size, on the x- and y-axis independently
2. Scale all images by a factor between 90 and 200 percent of the original size
3. No horizontal or vertical flipping is used
4. Add Gaussian noise with a sigma uniformly sampled from [0, 0.04]
5. Apply contrast changes sampled from [−0.8, 0.4]
6. Apply multiplicative color changes to the RGB channels per image, from [0.5, 2]
7. Apply gamma values from [0.7, 1.5] and additive brightness changes using a Gaussian with a sigma of 0.2
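The photometric items 4-7 could be sketched roughly like this (the sampling ranges follow the list above; the order of operations and the exact formulas are my assumptions, not the original Caffe implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def photometric_augment(img):
    """Sketch of FlowNet-style photometric augmentation on a float RGB
    image in [0, 1]. Only the sampling ranges come from the paper."""
    # 4. additive Gaussian noise, sigma ~ U[0, 0.04]
    sigma = rng.uniform(0.0, 0.04)
    img = img + rng.normal(0.0, sigma, img.shape)
    # 6. multiplicative color change per RGB channel, factor ~ U[0.5, 2]
    img = img * rng.uniform(0.5, 2.0, size=(1, 1, 3))
    # 5. contrast change sampled from [-0.8, 0.4] (applied around the mean)
    c = rng.uniform(-0.8, 0.4)
    img = (img - img.mean()) * (1.0 + c) + img.mean()
    # 7. gamma ~ U[0.7, 1.5], then additive brightness ~ N(0, 0.2)
    img = np.clip(img, 0.0, 1.0) ** rng.uniform(0.7, 1.5)
    img = img + rng.normal(0.0, 0.2)
    return np.clip(img, 0.0, 1.0)
```

Note that photometric changes do not alter the flow field, so unlike the geometric transforms they need no matching update to the ground truth.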
Would a network trained with these methods behave differently?
Looking forward to your reply. Thanks in advance!

@jeffbaena

Hi @GuUUSSS, I am also inspecting the data augmentation of PWC-Net. Could you provide the paper reference? I think your point 3 might be mistaken: as far as I know, in "Models Matter..." (https://arxiv.org/pdf/1809.05571.pdf) they do horizontal flipping (but no vertical flipping) when fine-tuning on Sintel.



jsczzzk commented May 25, 2019

PWC-Net uses Caffe to do data augmentation, the same way FlowNet does, so I referred to the FlowNet paper: https://lmb.informatik.uni-freiburg.de/Publications/2015/DFIB15/flownet.pdf
FlowNet does not use horizontal or vertical flipping.
To inspect data augmentation, you might find this paper useful: https://lmb.informatik.uni-freiburg.de/Publications/2018/MIFDB18/paper-1801.06397.pdf
It shows that different data augmentations have different effects on the model.

@jeffbaena

Thanks, you are right, they all use the same augmentation. However, PWC-Net ROB also performs horizontal frame flipping when fine-tuning on Sintel; it is stated in the paper, and I have also verified it empirically.

@hellochick

Hey @ZhongkaiZhou and @jeffbaena,

Thanks for your clarification of the augmentation details of PWC-Net and FlowNet 2.0.

I wonder whether you have successfully reproduced the results on the MPI-Sintel test set.

Recently I have been trying to reproduce PWC-Net myself. The training AEPE and validation AEPE look great when fine-tuning on Sintel; however, the model performs really badly on the test set.

The situation is very similar to the issue lmb-freiburg/flownet2#191 opened by @ZhongkaiZhou.

I am not sure whether the bad result is because I didn't apply the full augmentation described in the paper (i.e., I only use scale/flip/translate, without the incremental changes on the second frames).
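For context on the "incremental changes on second frames": on top of the augmentation shared by both frames, FlowNet-style pipelines apply a small extra transform to the second frame only, and the ground-truth flow has to absorb it. For a pure extra translation this is just a constant offset added to every flow vector. A minimal sketch, assuming a hypothetical helper and pixel range:

```python
import numpy as np

rng = np.random.default_rng(0)

def incremental_translate(img2, flow, max_px=2):
    """Sketch: shift only the second frame by a small random (tx, ty);
    every flow vector then gains the same constant offset. The helper
    name and the +/-2 px range are assumptions, not values from the paper."""
    tx = int(rng.integers(-max_px, max_px + 1))
    ty = int(rng.integers(-max_px, max_px + 1))
    img2 = np.roll(img2, (ty, tx), axis=(0, 1))  # shift second frame only
    flow = flow.copy()
    flow[..., 0] += tx                           # flow now includes the shift
    flow[..., 1] += ty
    return img2, flow
```

Relative scalings or rotations of the second frame work the same way in principle, but the flow update is then spatially varying rather than a constant offset.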


jsczzzk commented Jun 14, 2020

Sorry, I have not managed to reproduce the results on the MPI-Sintel test set. I think the MPI-Sintel training and test sets are quite different, so a low EPE on the training set may not lead to a low EPE on the test set. All we can do is avoid overfitting, and the full augmentation should help with that. By the way, you can draw on this project to implement the full augmentation.


hellochick commented Jun 14, 2020

@ZhongkaiZhou ,

Thanks for your quick and kind reply. Your reference to that project is really helpful to me, as few projects implement flow augmentation in Python (most of them use C or Caffe).

I really appreciate your help. If I have progress someday, I'll let you know!
