Update LMNN low-rank tests to use all three optimizers. #1476
rcurtin merged 1 commit into mlpack:master
Conversation
Also actually perform the checking.
manish7294 left a comment:
Right, the approach seems reasonable for handling this condition, but I suspect there may still be times when this randomly fails (in case none of the trials succeed), though that should be very rare.
Thanks for taking this up :)
// We keep the tolerance very high. We need to ensure the accuracy drop
// isn't any more than 10%.
success = ((acc1 - acc2) <= 10.0);
Why are we only considering the accuracy drop case? I think accuracy may increase as well, and in that case this will always count as a success no matter how large the difference is. So, shouldn't we use abs(acc1 - acc2) here? What do you think?
If accuracy increases, even by a lot, I don't see this as an issue at all. Mostly I want to check that accuracy doesn't drop way off, because to me that could indicate a big bug. (Like if accuracy goes from 50% to 0%, for instance.)
Hmm, then it's all right. I think it's good to merge :)
Ok, in this case I'll go ahead and merge this in 3 days.
@manish7294, take a look at this and tell me what you think.
Basically, all I have done is rework the low-rank LMNN tests so that they can be included in the test suite. It's true that sometimes low-rank LMNN may randomly perform poorly (or at least very differently) depending on the initial starting point, so I've made it so that the test can take up to three trials (or up to five with BBSGD, since it seems to perform worse).