Since we have an nn.Dropout() layer in our model, during testing we should switch the model to evaluation mode with self.model.eval(). But in your code there is no such call in test.py, so the Dropout layer is still active during testing, and we get unstable output even from the same model and the same input.
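For what it's worth, here is a minimal self-contained sketch of what I mean (a toy module standing in for GoNet, not the repo's actual code):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny stand-in for the tracker network: any module containing nn.Dropout shows the effect.
net = nn.Sequential(nn.Linear(32, 32), nn.Dropout(0.5), nn.Linear(32, 4))
x = torch.randn(1, 32)

net.train()            # what test.py effectively does today
print(net(x))          # differs on every call because a new dropout mask is sampled
print(net(x))

net.eval()             # what I would expect in test.py
with torch.no_grad():
    print(net(x))      # identical on every call: dropout is an identity op in eval mode
    print(net(x))
```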
But the worst part is that when we do not set self.model.eval(), we get unstable but roughly correct output, whereas after setting self.model.eval() the result gets even worse: most of the time the tracking is completely wrong after a dozen frames, even though the first frame is initialized with the ground-truth box.
I don't get it. When I checked the original GOTURN in Caffe, I found the same setting: the train and test phases use the same .prototxt file, which is equivalent to calling self.model.train() even at test time!
Any ideas? Thank you in advance.
Hi, I am still working on the inference part of the model, so test.py may need some revisions. As of now, train.py is ready; it just needs to run for enough iterations to produce a good working model.
In the original source code, they use the same model but set do_train to false in the test code here. So essentially they are disabling the dropouts during testing, I believe, which is equivalent to PyTorch's model.eval().
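In PyTorch terms, a quick sketch of that equivalence (toy module, not the actual tracker):

```python
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 8), nn.Dropout(0.5))

net.eval()                # recursively sets module.training = False on every submodule
print(net[1].training)    # False -> this Dropout is now an identity op, like do_train = false

net.train()               # back to training behaviour
print(net[1].training)    # True  -> dropout masks are sampled again
```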
Sure, setting model.eval() is the standard procedure in most cases. But so far the result is completely wrong in the evaluation phase, which makes no sense, because Dropout handles the scaling problem itself.
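A quick sanity check of that scaling behaviour (a standalone toy example, not the tracker code): nn.Dropout uses inverted dropout, so surviving activations are already scaled by 1 / (1 - p) at training time, and eval mode should change nothing but the noise.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 100000)

drop.train()
y_train = drop(x)               # surviving units are scaled by 1 / (1 - p) = 2.0
print(y_train.mean().item())    # ~1.0 on average, matching the eval output

drop.eval()
y_eval = drop(x)                # identity: no masking, no rescaling
print(y_eval.mean().item())     # exactly 1.0
```

So the expected activations should already match between train and eval mode, which is why the big accuracy drop after calling model.eval() is so confusing.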