Update LMNN low-rank tests to use all three optimizers. #1476

Merged
rcurtin merged 1 commit into mlpack:master from rcurtin:lmnn-test-lr-fix
Aug 2, 2018
Conversation

@rcurtin
Member

@rcurtin rcurtin commented Jul 27, 2018

@manish7294: take a look at this and tell me what you think.

Basically all I have done is rework the low-rank LMNN tests so that they can be included in the test suite. It's true that low-rank LMNN may sometimes randomly perform poorly (or at least very differently) depending on the initial starting point, so I've made the tests allow up to three trials (or up to five with BBSGD, since it seems to perform worse).

Also actually perform the checking.
Member

@manish7294 manish7294 left a comment

Right, the approach seems reasonable for handling this condition, but I suspect there may still be times where this randomly fails (in case none of the trials succeed), though that should be very rare.

Thanks for taking this up :)


// We keep the tolerance very high. We need to ensure the accuracy drop
// isn't any more than 10%.
success = ((acc1 - acc2) <= 10.0);
Member


Why are we only considering the accuracy drop case? I think accuracy may increase as well, and in that case this will always result in success no matter how large the difference is. So, shouldn't we use abs(acc1 - acc2) here? What do you think?

Member Author


If accuracy increases, even by a lot, I don't see this as an issue at all. Mostly I want to check that accuracy doesn't drop way off, because to me that could indicate a big bug (like if accuracy goes from 50% to 0%, for instance).

Member


Hmm, then it's all right. I think it's good to merge :)

@rcurtin
Copy link
Copy Markdown
Member Author

rcurtin commented Jul 31, 2018

Ok, in this case I'll go ahead and merge this in 3 days.

@rcurtin rcurtin merged commit 2788faa into mlpack:master Aug 2, 2018
@rcurtin rcurtin deleted the lmnn-test-lr-fix branch August 2, 2018 13:32