
get unstable results when evaluate the model #14

Closed
LiangXu123 opened this issue Apr 11, 2018 · 3 comments

@LiangXu123

LiangXu123 commented Apr 11, 2018

Since we have an nn.Dropout() layer in our model, we should switch the model to evaluation mode with self.model.eval() during testing. However, test.py has no such call, so the Dropout layer is still active at test time, and we get unstable outputs even from the same model and the same input.
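
For example, a minimal sketch of what I mean (a toy network is used here as an assumption, standing in for the GOTURN model):

```python
import torch
import torch.nn as nn

# Toy network with dropout, standing in for the GOTURN regressor (assumption).
model = nn.Sequential(nn.Linear(10, 10), nn.Dropout(p=0.5), nn.Linear(10, 4))
x = torch.randn(1, 10)

# Modules start in train() mode: dropout is active, so two forward passes differ.
print(torch.allclose(model(x), model(x)))  # usually False

# After eval(): dropout is disabled and the output is deterministic.
model.eval()
print(torch.allclose(model(x), model(x)))  # True
```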

But the strangest part is: without self.model.eval() the output is unstable but roughly correct, whereas after setting self.model.eval() the result becomes even worse. Most of the time the tracking is completely wrong after a dozen frames, even though the first frame is initialized with the ground-truth box.

I don't get it. When I checked the original GOTURN in Caffe, I found the same setup: the train and test phases use the same .prototxt file, which is equivalent to keeping self.model.train() even at test time!
Any ideas? Thank you in advance.

@amoudgl
Owner

amoudgl commented Apr 11, 2018

Hi, I am still working on the inference part of the model; test.py may need some revisions. As of now, train.py is ready, and it just needs to be run for enough iterations to get a good working model.

In the original source code, they use the same model but set do_train to false in the test code here. So essentially they are disabling the dropouts at test time, I believe, which is equivalent to PyTorch's model.eval().
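
A quick sketch of that equivalence (toy network, as an assumption, in place of the tracker):

```python
import torch.nn as nn

# Toy network standing in for the tracker (assumption).
model = nn.Sequential(nn.Linear(10, 10), nn.Dropout(p=0.5), nn.Linear(10, 4))

print(model.training)     # True: modules are constructed in train mode
model.eval()              # PyTorch analogue of Caffe's do_train = false
print(model.training)     # False
print(model[1].training)  # False: eval() propagates down to the Dropout layer
```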

@LiangXu123
Author

Sure, calling model.eval() is the standard procedure in most cases, but so far the result is totally wrong in the evaluation phase, which makes no sense, because Dropout handles the scaling problem itself.
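
What I mean by the scale problem: PyTorch's nn.Dropout is an inverted dropout, so the rescaling already happens during training and eval mode should simply be an identity:

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(8)

# Train mode: surviving activations are scaled by 1 / (1 - p) = 2.0,
# so the expected value of the output already matches the input.
drop.train()
print(drop(x))  # a mix of 0.0 and 2.0

# Eval mode: dropout is a plain identity, no extra rescaling is needed.
drop.eval()
print(drop(x))  # all ones
```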

@amoudgl
Owner

amoudgl commented Feb 7, 2019

Hi, I have fixed all these issues now. Please have a look at the updated README.

amoudgl closed this as completed Feb 7, 2019