
loss function #5

Closed
DdChew opened this issue Jan 29, 2021 · 7 comments

Comments


DdChew commented Jan 29, 2021

Hi, Dylan!
I am training this network with my own dataset. In the first few epochs (before Lp is introduced), Lc fluctuates around 0.8 with no obvious drop (maybe this is normal?).
I saw in your paper that the Lc training was conducted for 120 epochs.
Could you tell me the approximate value of Lc after those 120 epochs of training? It would give me a reference. Thanks!!
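For reference, this is roughly how one can track the average Lc per epoch to check whether it is dropping; a minimal sketch where model, val_loader, and correspondence_loss are stand-ins for the repository's actual objects, not its real names:

```python
import torch

# Minimal sketch for tracking the average correspondence loss (Lc) per epoch.
# `model`, `val_loader`, and `correspondence_loss` are placeholders, not the
# repository's real names.
@torch.no_grad()
def average_lc(model, val_loader, correspondence_loss, device="cuda"):
    model.eval()
    total, count = 0.0, 0
    for points2d, points3d, target in val_loader:
        points2d = points2d.to(device)
        points3d = points3d.to(device)
        target = target.to(device)
        pred = model(points2d, points3d)  # predicted correspondence weights
        total += correspondence_loss(pred, target).item()
        count += 1
    return total / max(count, 1)  # should trend downwards across epochs
```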


SergioRAgostinho commented Jan 29, 2021

Your experience is consistent with mine while training on MegaDepth. The (train and validation) correspondence probability value after 120 epochs for that dataset is around 0.76. This is sufficient to recreate the results published in the paper.

dylan-campbell (Owner) commented

Hi @DdChew, there should be a noticeable drop in the first few epochs. For MegaDepth, Lc should drop from about 0.9 to 0.6 over 120 epochs on the validation set (or from 0.99 to 0.5 on the training set). If it's fluctuating on your dataset, it's probably not learning well; perhaps try a larger learning rate. We actually found that we get better performance by increasing the learning rate from what we used in the paper, up to 1e-3 for MegaDepth.
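For anyone wanting to try this, a minimal sketch of raising the learning rate with a standard PyTorch optimizer; the optimizer type and schedule here are assumptions, not necessarily what the repository uses:

```python
import torch.optim as optim

# Sketch only: the repository's actual optimizer and schedule may differ.
optimizer = optim.Adam(model.parameters(), lr=1e-3)  # raised from the paper's setting

# Optionally decay the rate later in training:
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=40, gamma=0.1)
```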

@SergioRAgostinho, was this on the same (random) subset of MegaDepth that we used or a different one? There may be some variation there.

dylan-campbell (Owner) commented

@DdChew: also, if your dataset has a high proportion of outliers, it's likely to be harder to learn the inlier correspondences. In this case it would be helpful to filter out some of the outliers first.
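One way to do that, assuming ground-truth poses and camera intrinsics are available for the training data, is to drop 3D points whose projection lands far from every observed 2D point, since they cannot be part of an inlier correspondence. A hedged sketch; the names and the pixel threshold are illustrative, not from the repository:

```python
import numpy as np

# Illustrative pre-filter: given a ground-truth pose (R, t) and intrinsics K,
# keep only 3D points that project close to at least one observed 2D point.
def filter_outlier_points(points3d, points2d, R, t, K, max_px_error=5.0):
    proj = (K @ (R @ points3d.T + t.reshape(3, 1))).T  # (N, 3) camera projection
    proj = proj[:, :2] / proj[:, 2:3]                  # perspective divide -> pixels
    # Distance from each projected 3D point to its nearest observed 2D point
    d = np.linalg.norm(proj[:, None, :] - points2d[None, :, :], axis=2)
    keep = d.min(axis=1) < max_px_error
    return points3d[keep]
```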


DdChew commented Jan 30, 2021

@dylan-campbell, thank you for sharing the data.
I will adjust hyperparameters like the learning rate and try to achieve this result.

SergioRAgostinho commented

> @SergioRAgostinho, was this on the same (random) subset of MegaDepth that we used or a different one? There may be some variation there.

I was using the supplied preprocessed data. I just ran a training run, keeping everything at the defaults.


DdChew commented Feb 12, 2021

Hi @dylan-campbell! At line 117 of posses.py, maybe this correspondenceMatrix should be correspondenceMatrices?

dylan-campbell (Owner) commented

Thanks @DdChew, well-spotted! Fixed.
