Algorithm wish list #13
see issue #13 for the current list
Hello, any tips or further documentation on how to go about doing this? And which one of these would be highest priority?
I don't personally have a priority, but if you want to rank them you could look at their citation counts on Google Scholar. That should give you an idea of what's more popular in the literature. I would start with one that seems easy to implement, though. Each method should have its own Python file in the …
Once you have something that works, add a class to test/metric_learn_test.py which exercises the new algorithm. You should also verify that your implementation matches the reference code, if possible.
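To make that workflow concrete, here is a hypothetical skeleton of what such a file might contain. The class name, the placeholder "metric" (just a regularized inverse covariance), and the `score_pairs` API are illustrative only, not the project's actual interface:

```python
import numpy as np

class MyNewMetricLearner:
    """Illustrative skeleton for a new metric-learn style algorithm.

    Learns a Mahalanobis matrix M defining d(x, y) = (x - y)^T M (x - y).
    As a stand-in for a real method, we use the regularized inverse
    covariance (the classical Mahalanobis distance).
    """

    def fit(self, X):
        X = np.asarray(X, dtype=float)
        # Placeholder "learned" metric: regularized inverse covariance.
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        self.M_ = np.linalg.inv(cov)
        return self

    def score_pairs(self, pairs):
        # Squared Mahalanobis distance for each (x, y) pair.
        diffs = np.asarray([x - y for x, y in pairs])
        return np.einsum('ij,jk,ik->i', diffs, self.M_, diffs)
```

A matching test would then fit on random data and check basic metric properties (symmetry, non-negativity, zero distance for identical points).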
Thanks for all the details! The paper by Xing et al. is, I think, the most popular (and one of the earliest in metric learning) on the list, so I'll give it a shot.
Hey @perimosocordiae, can I take up the MLKR implementation? I was thinking of using the …
@dsquareindia Sure, MLKR is a good choice. The …
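For reference, MLKR (Weinberger & Tesauro, 2007) minimizes the leave-one-out kernel regression error of a linear map A. A rough sketch follows; the optimizer choice, random initialization, and the small epsilon guard are my assumptions (the paper derives an analytic gradient, omitted here for brevity):

```python
import numpy as np
from scipy.optimize import minimize

def mlkr_loss(flat_A, X, y, d_out):
    """Leave-one-out kernel regression error minimized by MLKR."""
    A = flat_A.reshape(d_out, X.shape[1])
    Z = X @ A.T                                   # project the points
    # Pairwise squared distances in the projected space.
    sq = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq)
    np.fill_diagonal(K, 0.0)                      # leave-one-out
    y_hat = (K @ y) / (K.sum(axis=1) + 1e-12)     # Nadaraya-Watson estimate
    return np.sum((y_hat - y) ** 2)

def fit_mlkr(X, y, d_out=2, seed=0):
    """Minimize the loss over A with L-BFGS (finite-difference
    gradients here, for brevity)."""
    rng = np.random.RandomState(seed)
    A0 = rng.randn(d_out, X.shape[1]).ravel()
    res = minimize(mlkr_loss, A0, args=(X, y, d_out), method='L-BFGS-B')
    return res.x.reshape(d_out, X.shape[1]), res.fun
```

A real implementation would use the analytic gradient, which makes the optimization much faster on non-toy data.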
KISSME metric learning would be a nice addition (no optimization). |
@aschuman thanks, I've added it to the list. |
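For context on why no optimization is needed: KISSME (Köstinger et al., 2012) has a closed-form solution, the difference of the inverse covariances of similar-pair and dissimilar-pair difference vectors, followed by a projection onto the PSD cone. A sketch (the regularization constant is my assumption):

```python
import numpy as np

def kissme(similar_pairs, dissimilar_pairs):
    """KISSME-style closed-form metric:
    M = inv(Sigma_S) - inv(Sigma_D), clipped to be PSD."""
    def pair_cov(pairs):
        diffs = np.asarray([a - b for a, b in pairs])
        return diffs.T @ diffs / len(diffs)

    dim = len(similar_pairs[0][0])
    eps = 1e-6 * np.eye(dim)                      # regularization (assumption)
    M = np.linalg.inv(pair_cov(similar_pairs) + eps) \
        - np.linalg.inv(pair_cov(dissimilar_pairs) + eps)
    # Clip negative eigenvalues so M is a valid (PSD) metric.
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.maximum(w, 0.0)) @ V.T
```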
Hi @perimosocordiae …
@anirudt Thanks, added. @RishabGoel Go for it! I look forward to your pull request. |
Hello, I just wanted to ask where the abbreviation PGDM comes from - I can't seem to find it in the linked paper ("Distance metric learning, with application to clustering with side-information" by Xing et al.).
@bhargavvader Probably the wrong link. The metric in that link is the non-probabilistic one; you can find the probabilistic one in the following link:
@bhargavvader, @RishabGoel: yep, that's correct. I pulled the citation from the survey paper without looking too closely at it. I'll fix the link now. |
The method by Xing et al. (2003) is also sometimes referred to as "MMC", e.g., in the LMNN paper by Weinberger et al. I guess, "MM" stands for "Mahalanobis Metric Learning" and "C" for "Clustering". |
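For anyone picking this method up: in the diagonal-metric case, Xing et al. (2003) minimize the sum of squared distances over similar pairs minus the log of the summed distances over dissimilar pairs. The paper solves this diagonal case with Newton-Raphson; the projected gradient descent, learning rate, and iteration count below are simplifying assumptions of mine:

```python
import numpy as np

def mmc_objective(w, sim_diffs, dis_diffs):
    """Diagonal-metric objective of Xing et al. (2003):
    sum_S d_w(pair)^2 - log(sum_D d_w(pair)),
    where d_w(x) = sqrt(sum_k w_k * x_k^2) and the inputs are
    per-pair difference vectors."""
    S2 = np.asarray(sim_diffs) ** 2
    D2 = np.asarray(dis_diffs) ** 2
    return (S2 @ w).sum() - np.log(np.sqrt(D2 @ w).sum())

def mmc_diagonal(sim_diffs, dis_diffs, lr=0.01, n_iter=500):
    """Projected gradient descent on the objective above, keeping the
    diagonal metric w nonnegative."""
    S2 = np.asarray(sim_diffs) ** 2
    D2 = np.asarray(dis_diffs) ** 2
    w = np.ones(S2.shape[1])
    for _ in range(n_iter):
        d_dis = np.sqrt(D2 @ w) + 1e-12          # d_w over dissimilar pairs
        grad = S2.sum(axis=0) \
            - (D2 / (2 * d_dis[:, None])).sum(axis=0) / d_dis.sum()
        w = np.maximum(w - lr * grad, 1e-8)      # project onto w >= 0
    return w
```

The full-matrix version in the paper alternates gradient ascent with projections onto the constraint sets, which is more involved than this sketch.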
@perimosocordiae I would like to take up …
@souravsingh Great! Feel free to open a PR whenever you'd like to get feedback. I just noticed the paper links for those methods are broken, so if you find working links to the papers let me know. |
@perimosocordiae I think these are the links to the papers: DistBoost: http://www.cs.huji.ac.il/~daphna/papers/distboost-icml.pdf
@souravsingh thanks, I've updated the list with your links. |
Another suggestion: the triplet-based method proposed in the paper below (heavily cited, and very useful for information retrieval applications). Given that we currently do not have any triplet-based weakly supervised approach, I think we could make this a priority. A GD or SGD implementation should not be very difficult.
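A generic triplet-based SGD sketch along those lines, not the exact algorithm of any particular paper: a hinge loss with margin 1 over triplets (x, similar, dissimilar), with a PSD projection after each epoch. All hyperparameters here are illustrative assumptions:

```python
import numpy as np

def triplet_sgd(triplets, dim, lr=0.01, n_epochs=30, seed=0):
    """SGD on a triplet hinge loss for a Mahalanobis metric M:
        loss(x, pos, neg) = max(0, 1 + d_M(x, pos) - d_M(x, neg))
    where d_M(a, b) = (a - b)^T M (a - b)."""
    rng = np.random.RandomState(seed)
    M = np.eye(dim)
    for _ in range(n_epochs):
        for i in rng.permutation(len(triplets)):
            x, pos, neg = triplets[i]
            dp, dn = x - pos, x - neg
            if 1 + dp @ M @ dp - dn @ M @ dn > 0:   # hinge is active
                # (Sub)gradient of the active hinge w.r.t. M.
                M -= lr * (np.outer(dp, dp) - np.outer(dn, dn))
        # Project onto the PSD cone by clipping negative eigenvalues.
        w, V = np.linalg.eigh(M)
        M = V @ np.diag(np.maximum(w, 0.0)) @ V.T
    return M
```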
How about adding an extension to the existing LSML algorithm? In the original paper and code there is an extension of the vanilla algorithm called spLSML, which seems to simply add an L1 norm of the learned Mahalanobis matrix to the loss function; with the corresponding gradient update method, it should be quick to implement.
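If it helps, here is a rough proximal-gradient sketch of that idea: an LSML-style comparison loss over quadruplets (a, b, c, d) with the constraint d(a,b) < d(c,d), plus an L1 penalty handled by soft-thresholding. The step size, the omission of vanilla LSML's LogDet term, and the PSD projection are all simplifying assumptions, not the paper's exact method:

```python
import numpy as np

def splsml_sketch(quadruplets, dim, lam=0.05, lr=0.001, n_iter=500):
    """Sparse-LSML-style objective:
        sum max(0, d_M(a,b) - d_M(c,d))^2  +  lam * ||M||_1
    optimized by proximal gradient descent: a gradient step on the
    comparison loss, soft-thresholding for the L1 term, then a PSD
    projection so M stays a valid metric."""
    M = np.eye(dim)
    for _ in range(n_iter):
        G = np.zeros((dim, dim))
        for a, b, c, d in quadruplets:
            u, v = a - b, c - d
            viol = u @ M @ u - v @ M @ v        # constraint wants this < 0
            if viol > 0:
                G += 2.0 * viol * (np.outer(u, u) - np.outer(v, v))
        M = M - lr * G
        # Proximal step for the L1 penalty: soft-threshold the entries.
        M = np.sign(M) * np.maximum(np.abs(M) - lr * lam, 0.0)
        # Keep M a valid metric by clipping negative eigenvalues.
        w, V = np.linalg.eigh(M)
        M = V @ np.diag(np.maximum(w, 0.0)) @ V.T
    return M
```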
Methods that we haven't implemented yet, but would like to. In no particular order: