Training speed and results #6
Thanks for your attention. I have no idea about the speed; it is similar to my training process. I guess the detector loss causes this phenomenon. The default hyper-parameters may not work well; you have to adjust them several times to get better results. The latest version adds two important variables, positive_dist and negative_dist, to help you fine-tune the model. The parameters lambda_d, lambda_loss, positive_margin and negative_margin are the most decisive ones. You need to adjust them to make positive_dist and negative_dist as small as possible. (The weight I released in the latest version can achieve 0.66 acc. on hpatches-v.)
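To make the tuning advice above concrete, here is a minimal NumPy sketch of a SuperPoint-style descriptor hinge loss, showing how lambda_d, positive_margin and negative_margin interact with the positive_dist/negative_dist quantities mentioned. The function name and exact form are illustrative assumptions, not this repo's actual loss.py code:

```python
import numpy as np

def descriptor_hinge_loss(dot, s, lambda_d=250.0,
                          positive_margin=1.0, negative_margin=0.2):
    """Illustrative sketch of a SuperPoint-style descriptor hinge loss.

    dot : dot-product similarity between descriptor pairs
    s   : 1.0 where the pair is a true correspondence, 0.0 otherwise
    """
    # How far true matches fall short of the positive margin (want this small)
    positive_dist = np.maximum(0.0, positive_margin - dot)
    # How much non-matches exceed the negative margin (want this small too)
    negative_dist = np.maximum(0.0, dot - negative_margin)
    # lambda_d balances the positive term against the negative term
    loss = lambda_d * s * positive_dist + (1.0 - s) * negative_dist
    return loss.mean(), positive_dist, negative_dist
```

Driving both positive_dist and negative_dist toward zero means matched descriptors are pulled above the positive margin and unmatched ones pushed below the negative margin.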
Thanks so much for your reply! I will try fine-tuning these parameters. Is 0.66 acc. on hpatches-v your best result? Also, does that result use the pre-trained model (superpoint_bn.pth)?
I trained the model without any pre-trained model, and it took me several days to achieve this performance. Training the model really needs experience and tricks, and I failed to find hyper-parameters that directly yield a good model. 0.66 may not be the best, but it is the best model I can get right now.
Got it! Thanks again!
My result never reaches 0.60 on hpatches-v after trying several hyper-parameter settings. Can you share the hyper-parameters used with superpoint_bn.pth? When I run inference with that model, the hpatches-v result is 0.66. Also, if you get a better result, please share your experience. I would really appreciate it!
If you want to run with superpoint_bn.pth, remember to set
Actually, I get the 0.66 result when setting eps=1e-3 and momentum=0.1 for inference. But when training without any pre-trained model, how should I set the hyper-parameters to get this result? I tried eps=1e-3, lambda_d=250 and lambda_loss=10 (and other values), a fixed lr... and couldn't reproduce it.
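For readers following along: the eps/momentum values discussed here belong to the model's BatchNorm layers. A small PyTorch sketch (the helper name is my own, not part of this repo) of setting them on every BatchNorm2d before loading superpoint_bn.pth:

```python
import torch.nn as nn

def set_bn_params(model, eps=1e-3, momentum=0.1):
    """Set eps/momentum on every BatchNorm2d layer in the model.

    eps=1e-3, momentum=0.1 are the values discussed in this thread;
    PyTorch's defaults are eps=1e-5, momentum=0.1.
    """
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.eps = eps
            m.momentum = momentum
    return model

# Usage sketch: call before model.load_state_dict(torch.load("superpoint_bn.pth"))
```

Note the values must match between training and inference, which may explain why a model trained with different BN settings fails to reproduce the 0.66 result.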
Thanks! I have a last question about README.md -> Steps. Does that mean the model needs two training stages? The first stage follows step 2 ("Comment the following lines in loss.py") with base_lr=0.01; then the trained model is used for stage two with base_lr=0.001 and fixed hyper-parameters:
If I use your latest version, how many times should I train?
Also, the model converted from rpautrat's SuperPoint doesn't give the same result. Is it possible there is a bug like this: rpautrat/SuperPoint#117?
Set
Thanks for your answer! |
Hi litingsjj, I found that the photometric augmentation parameters differ from rpautrat's SuperPoint. This may affect training performance.
And I'm checking whether there are any other problems.
Great! What are the results after you update the *.yaml? Also, I can't reproduce your result for now.
I'm confused about your training strategy.
a. The MagicPoint trained by this repo is different from rpautrat's. It seems our MagicPoint usually generates more keypoints than rpautrat's. It may be caused by
This may be unnecessary.
Moreover, I cannot achieve similar training performance with these improvements. I'll keep checking...
Hi, I uncommented the following 3 lines, set lr=0.001, and trained SuperPoint on the COCO dataset generated by rpautrat's MagicPoint model. The performance on hpatches-v is 0.698! Much closer to rpautrat's model. I have updated the repo.
However, there may still be problems in training MagicPoint; it usually produces more keypoints than rpautrat's model. This may affect the final results.
If you're worried about that, maybe we can use rpautrat's model to export the COCO dataset for training SuperPoint. Also, the dataset (COCO 2017) is different from the one rpautrat used (COCO 2014). And can I use your latest version to reproduce your result?
The performance on hpatches-v reaches 0.725!
Great! Would you share your training method? This may help a lot of people who follow this repo.
I trained MagicPoint with rpautrat's project to get the COCO ground-truth points, then used your version to train SuperPoint.
OK, thanks
Hello, sorry to bother you. rpautrat's project generates the labels as .npz files while this project generates labels as .npy files. How did you solve this?
Hi, it is easy to convert *.npz to *.npy. And remember to zoom the COCO image to 240x320 by function
Does that mean rpautrat's project's image size is not 240x320, while this project resizes images to 240x320?
Though rpautrat's labels are .npz, it is one .npz file per image, so it seems easy to convert.
Hi, could you email me the COCO gt points you got, at leon_wu6@163.com?
Sorry to bother you, I have two questions about this project. For training, I use multiple GPUs; the speed is about 1.79 it/s in the first two epochs, which takes one day. After that, the speed becomes much faster; I don't know why this happens. As for the results, the detector repeatability is better than rpautrat/SuperPoint's, but for the descriptors, hpatches-i is 0.90 and hpatches-v is 0.55. Compared with rpautrat/SuperPoint, the result is not good.