Hi, did you finish the whole process of GOTURN? #1
Comments
Hi! Thanks for showing your interest. I got busy with some other urgent research commitments while implementing this, so the project is still incomplete. I will surely update it with a proper README when it's done (hopefully by the end of November).
@amoudgl Cool, looking forward to this project. Thanks for kindly sharing it.
Dear @amoudgl, I tried to reproduce a working GOTURN model from this project. However, I think there is a problem with the data generator, as the loss does not converge beyond a certain value (for me, 6.0): it starts at loss=12.xx and converges to 6.0xx, but I believe it should go lower than that. I think it might be an issue with the dataset generator. For example, in the original C++ implementation, the bounding boxes are squeezed into (0, 1) and then scaled by a factor of 10 to lie in the (0, 10) range, but you don't do that kind of normalization. You do the context scaling (i.e. doubling the size of the box by kContextFactor=2, then re-centering the bounding box), yet the final bounding box is not expressed as a ratio of the original image dimensions. I have been tinkering with your implementation for a couple of days, trying to get it to work, and what really confuses me is the bounding box pre-processing. Did you get it to work on your end? Best,
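For readers following along, a minimal sketch of the kind of preprocessing described above (context padding by kContextFactor, then expressing the box relative to the padded region and scaling to (0, 10)) might look like the following. This is illustrative only, assuming corner-format boxes; the function names and exact crop handling are not taken from the pygoturn code.

```python
# Illustrative sketch of the bounding-box preprocessing discussed above,
# not the exact pygoturn implementation.
kContextFactor = 2.0   # search region is twice the size of the previous box
kScaleFactor = 10.0    # regression targets lie in (0, 10), as in the C++ GOTURN

def crop_search_region(center_x, center_y, w, h):
    """Return (x1, y1, x2, y2) of the context-padded search region."""
    out_w, out_h = kContextFactor * w, kContextFactor * h
    return (center_x - out_w / 2, center_y - out_h / 2,
            center_x + out_w / 2, center_y + out_h / 2)

def normalize_bbox(bbox, region):
    """Express bbox relative to the search region, scaled to (0, kScaleFactor)."""
    rx1, ry1, rx2, ry2 = region
    rw, rh = rx2 - rx1, ry2 - ry1
    x1, y1, x2, y2 = bbox
    return ((x1 - rx1) / rw * kScaleFactor,
            (y1 - ry1) / rh * kScaleFactor,
            (x2 - rx1) / rw * kScaleFactor,
            (y2 - ry1) / rh * kScaleFactor)
```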
Hi @devyhia, I do normalize the values of the bounding boxes in my training code here. However, your observation is correct that the loss doesn't converge beyond a certain value. I believe this happens due to the huge gap between the model capacity and the size of the dataset we are training on: we are using AlexNet to train on the ALOV dataset (just 16k images), which is actually overkill. One could still argue that if the training set is small, the model should overfit and the loss should converge to really low values, which is clearly not happening here. I experimented a little and found that it is caused by the stochastic behaviour of dropout in the model. If you try removing the dropout layers, you should get the desired results (though the model would still be overfitting). Correct me if I am wrong somewhere. Also, I recently updated the code to test GOTURN on OTB-formatted sequences. The basic framework for training and testing is ready; only ImageNet training is left. Let me know if you face any more issues. Thanks for your review.
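Concretely, removing (or zeroing) the dropout layers in an AlexNet-style regression head can be sketched as below. The layer sizes and helper name are hypothetical, in the spirit of GOTURN's fully-connected head, not the repository's actual class definitions.

```python
import torch.nn as nn

# Hypothetical regression head in the style of GOTURN's AlexNet-based network.
# Setting p=0.0 (or deleting the nn.Dropout layers) makes the forward pass
# deterministic, which is what the comment above suggests for making the loss
# keep dropping on a small dataset.
def make_regressor_head(dropout_p=0.0):
    return nn.Sequential(
        nn.Linear(256 * 6 * 6 * 2, 4096),  # pool5 features of both crops, concatenated
        nn.ReLU(inplace=True),
        nn.Dropout(p=dropout_p),
        nn.Linear(4096, 4096),
        nn.ReLU(inplace=True),
        nn.Dropout(p=dropout_p),
        nn.Linear(4096, 4),                # (x1, y1, x2, y2) regression output
    )
```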
@devyhia Thanks for pointing out the scaling issue. Just fixed it.
@amoudgl The observation about dropout is correct as well. I could make the model converge with a dropout of p=0.1 (rather than the previous p=0.5). By the way, I forked this project and added a few nice things for the training.
I got the model to converge to around loss=1.xx, and the visualizations clearly show that it is learning something meaningful. By the way, I also tried to overfit the model with both SGD and the Adam optimizer. Adam converges much faster than SGD in terms of the number of epochs, which is expected, but it is slower per iteration: on a GPU with batch size=1, SGD does 100 iterations in 5 seconds, whereas Adam takes 12 seconds for the same 100 iterations. By the way, if you are trying to reproduce a working version of Held's paper, we could collaborate on it. I already have a fork of your repository; you can check it out here: https://github.com/devyhia/pygoturn/
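For reference, switching between the two optimizers discussed above is a one-line change in PyTorch. The learning rates below are placeholders for illustration, not the values used in either repository.

```python
import torch.optim as optim

# Placeholder learning rates; the actual values used in the forks may differ.
def build_optimizer(model, kind="adam"):
    if kind == "adam":
        return optim.Adam(model.parameters(), lr=1e-5)
    # SGD with momentum, closer to the original Caffe GOTURN training setup.
    return optim.SGD(model.parameters(), lr=1e-6, momentum=0.9)
```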
@devyhia Great effort, indeed. Feel free to send pull requests for these additional functionalities; just make sure your commit messages are neat so that we have a clean commit history for the project. Yes, we can surely collaborate. I have sent you an invite on Slack (using your Gmail address); let's continue the detailed discussion there.
@amoudgl Could I join your team? I also recently reimplemented this project and I'm very interested. My account is . Thank you.
@sydney0zq Have you worked on implementing the motion smoothness model mentioned in the GOTURN paper? If not, could you work on it? Apart from that part, we have already divided up the work and started on it.
@amoudgl I am not familiar with the motion smoothness model, and I am currently doing some research on GOTURN to improve its performance (the results are not good on VOT/OTB), so I may not have time for it. Have you prepared code for testing on benchmarks such as OTB or VOT? If not, maybe I could make some commits.
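For anyone picking this up later: the motion smoothness model in Held's paper amounts to sampling the synthetic shift and scale change between the "previous" and "current" crops from Laplace distributions, so small motions are much more likely than large ones. A rough sketch is below; the distribution parameters and clipping bounds are assumptions and should be checked against the paper and the original C++ code.

```python
import numpy as np

# Rough sketch of motion-smoothness augmentation: translation is proportional
# to box size and drawn from a zero-centred Laplace distribution; scale change
# is drawn from a Laplace centred at 1. Parameter values below are assumptions.
def sample_motion(cx, cy, w, h, b_shift=0.2, b_scale=1 / 15, rng=np.random):
    dx = rng.laplace(0.0, b_shift) * w                       # horizontal shift
    dy = rng.laplace(0.0, b_shift) * h                       # vertical shift
    gamma_w = np.clip(rng.laplace(1.0, b_scale), 0.6, 1.4)   # width scale change
    gamma_h = np.clip(rng.laplace(1.0, b_scale), 0.6, 1.4)   # height scale change
    return cx + dx, cy + dy, w * gamma_w, h * gamma_h
```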
No problem @sydney0zq. We are currently focused on completing the training part first. The testing code works for OTB-formatted sequences, but we don't have a proper pretrained model yet. Once we are done with training, we will let you know, and then you can work with us on benchmarking and integrating our PyTorch pretrained model into VOT and OTB.
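As a rough illustration of what "OTB-formatted sequences" means here: each sequence has an img/ folder of numbered frames and a groundtruth_rect.txt with one "x,y,w,h" box per line. The tracker API in this sketch (init/update) is hypothetical, not the repository's actual interface.

```python
import glob
import os
import cv2  # assumption: OpenCV is used for image I/O

# Minimal sketch of running a tracker over an OTB-formatted sequence:
# <sequence>/img/0001.jpg, 0002.jpg, ... plus groundtruth_rect.txt.
# The tracker object and its init/update methods are hypothetical.
def run_on_otb_sequence(seq_dir, tracker):
    frames = sorted(glob.glob(os.path.join(seq_dir, "img", "*.jpg")))
    with open(os.path.join(seq_dir, "groundtruth_rect.txt")) as f:
        first_line = f.readline().replace("\t", ",")
    init_box = [float(v) for v in first_line.strip().split(",")]  # x, y, w, h

    tracker.init(cv2.imread(frames[0]), init_box)
    results = [init_box]
    for path in frames[1:]:
        results.append(tracker.update(cv2.imread(path)))
    return results
```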
@amoudgl @sydney0zq I have a fairly working model. However, it is not as robust as Held's original model (i.e. the ones trained with Caffe). It works for some VOT videos (like the ball video in VOT 2014). @sydney0zq If you need it, I could send it to you.
@devyhia Yep, could you please share it with me, along with your code? I am still training on just ALOV. I will mention you as soon as I get a reasonable result.
@devyhia Hi, I am now working on a PyTorch project that may need a tracking model, so would you send me a working GOTURN model? Just email me. That would be very helpful, thanks a lot!
@amoudgl Hi! Thank you for your wonderful pygoturn! I wonder whether it is finished now?
Hi! I fixed the training and testing scripts recently. Training is still going on, and I'm running a few experiments to catch any remaining errors. I tested some of the intermediate training checkpoints on OTB sequences and they look fine. I'll try to release the pretrained model as soon as possible; as an estimate, the final trained model may take about a week to release. Thanks for your comment!
@amoudgl Keep it up! 😄
Hi! I finished everything, finally. Please have a look at the updated README. Thank you all!
As the title asks, did you finish the whole process of GOTURN? I want to try this code; I love PyTorch...