LMNN: optimization by bounding slack term #1449
This is an optimization for #1407, focused on the following term, which appears during the computation of the LMNNFunction objective function and gradient:
We're already avoiding the computation of extra slack terms with the
The first step here would be deriving the bound; then we would probably need to find a way to cache
(Actually, it might be interesting to run a quick experiment and see, typically, how many points have
I ran some of the tests you mentioned, and I think it is apparent that there is a lot of potential speedup: comparatively, there are far fewer triplets on which we actually need to compute the slack term.
Great, this looks like it will be worthwhile to implement. Would you like to try to derive the bound, or would you like me to do that? I think it can be done based on the triangle inequality derivations in the lmnn_bounds.pdf file. I'm happy to do it if you'd prefer to work on other parts. :)
I just gave it a quick thought and I think the bound will be somewhat related to
referenced this issue on Jun 28, 2018
I believe that your derivation is correct here, so the problem may be in the implementation; are you sure it is correct? If you want to push your code, I can take a look at it.
I actually think the derivation is a little easier if you start with
Another thing that can make this bound tighter is noting that the reverse triangle inequality is also bounded below by zero, so we can say, e.g.,
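(To spell the truncation-at-zero idea out in my own notation, with d the learned metric itself rather than its square, since the triangle inequality only holds for d:)

```latex
% Reverse triangle inequality, then truncation at zero since d >= 0:
d(x_i, x_l) \;\ge\; \bigl| d(x_i, x_j) - d(x_j, x_l) \bigr|
            \;\ge\; \max\bigl(0,\; d(x_i, x_j) - d(x_j, x_l)\bigr).
% Squaring this nonnegative lower bound gives a lower bound on the squared
% distance that appears in the slack term, without computing d(x_i, x_l):
d(x_i, x_l)^2 \;\ge\; \max\bigl(0,\; d(x_i, x_j) - d(x_j, x_l)\bigr)^2.
```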