
Updated K-means clustering for Nystroem #3126

Closed
nateyoder wants to merge 4 commits into scikit-learn:master from nateyoder:kmeans-nystroem

Conversation

nateyoder

Because I wanted to try K-means clustering as the basis for Nystroem approximation, and it appeared as though pull request #2591 might be stalled, I created a slightly modified version. I also tried to address @amueller's comment about the effectiveness of the method by including it in the plot_kernel_approximation example, and @dougalsutherland's comment concerning the possible singularity of the sub-sampled kernel matrix by using the same approach scipy uses in pinv2.

Since it is my first commit to the project (hopefully the first of many) any feedback or suggestions you have would be appreciated.
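
For context, a minimal sketch of the idea described above (not the PR's actual code; the function names, RBF kernel, and tolerance formula are assumptions): k-means centers replace the randomly subsampled basis points, and a pinv2-style eigenvalue cutoff guards against a singular basis kernel.

```python
import numpy as np
from scipy import linalg
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel


def fit_kmeans_nystroem(X, n_components=100, gamma=0.1, random_state=0):
    """Sketch of a k-means-based Nystroem feature map."""
    # Use cluster centers as the basis instead of a random subsample of X.
    km = KMeans(n_clusters=n_components, random_state=random_state).fit(X)
    basis = km.cluster_centers_

    # Kernel evaluated between the basis points only.
    K_basis = rbf_kernel(basis, basis, gamma=gamma)

    # pinv2-style guard against a (near-)singular basis kernel: eigenvalues
    # below a relative tolerance are dropped instead of being inverted.
    w, V = linalg.eigh(K_basis)
    tol = np.abs(w).max() * max(K_basis.shape) * np.finfo(w.dtype).eps
    w = np.where(w > tol, w, np.inf)          # 1/sqrt(inf) == 0
    normalization = (V / np.sqrt(w)) @ V.T    # ~ K_basis^{-1/2}

    return basis, normalization


def transform_kmeans_nystroem(X, basis, normalization, gamma=0.1):
    # Approximate feature map: K(X, basis) @ K(basis, basis)^{-1/2}
    return rbf_kernel(X, basis, gamma=gamma) @ normalization
```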

@coveralls

Coverage Status

Coverage remained the same when pulling 0b139b4 on nateyoder:kmeans-nystroem into 48e2b13 on scikit-learn:master.

@nateyoder nateyoder changed the title Implemented Updated K-means clustering for Nystroem May 2, 2014
@amueller
Member

amueller commented May 3, 2014

Hi @nateyoder.
Thanks for tackling this. Could you maybe post the plot from the example?
Have you experimented with some datasets and seen an improvement?

Cheers,
Andy

@@ -35,9 +35,15 @@ Nystroem Method for Kernel Approximation
The Nystroem method, as implemented in :class:`Nystroem`, is a general method
for low-rank approximations of kernels. It achieves this by essentially subsampling
the data on which the kernel is evaluated.
The subsampling methodology used to generate the approximate kernel is specified by
the parameter ``basis_method``, which can be either ``random`` or ``clustered``.
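
A hypothetical usage sketch of the option described above (`basis_method` was never merged into scikit-learn, so this is illustrative only; the parameter name is taken from the diff):

```python
from sklearn.datasets import load_digits
from sklearn.kernel_approximation import Nystroem

X, y = load_digits(return_X_y=True)

# As proposed in this PR: pick the basis by k-means clustering instead of
# random subsampling (the default).
feature_map = Nystroem(kernel="rbf", gamma=0.2, n_components=100,
                       basis_method="clustered", random_state=0)
X_features = feature_map.fit_transform(X)
```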
Member

I would call it kmeans instead of clustered, to be more specific.

Member

Maybe also basis_sampling or basis_selection?

Author

Great suggestions. They are incorporated in the new version.

@nateyoder
Author

As far as performance goes, it seems to help a bit, but not quite as much as I had hoped. I think the difference would be larger if the random selection method happened to pick an outlier as part of the basis sampling set, but I didn't try different random seeds to make that occur.

[Figure: accuracy and training time for kernel approximation methods]

@coveralls

Coverage Status

Coverage remained the same when pulling 5f313f8 on nateyoder:kmeans-nystroem into 48e2b13 on scikit-learn:master.

@nateyoder nateyoder closed this May 12, 2014
@nateyoder nateyoder deleted the kmeans-nystroem branch May 12, 2014 18:19
@nateyoder nateyoder restored the kmeans-nystroem branch May 12, 2014 18:20
@nateyoder
Author

Sorry, I accidentally deleted the branch, and I think doing so closed the pull request. Sorry!!

@nateyoder nateyoder reopened this May 12, 2014
@amueller
Member

Have you tried it on a different dataset? The plot above is digits, right? Maybe try MNIST? Or is there some other dataset where RBF works well?

@amueller
Member

I think this should help but I also think we should make sure that it actually does ;)

@ogrisel
Member

ogrisel commented May 13, 2014

> Have you tried it on a different dataset? The plot above is digits, right? Maybe try MNIST? Or is there some other dataset where RBF works well?

You could also try on Olivetti faces with RandomizedPCA preprocessing: http://scikit-learn.org/stable/auto_examples/applications/face_recognition.html

To try on a bigger dataset you can use LFW instead of Olivetti.
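
A sketch of the suggested Olivetti setup (RandomizedPCA was the class name at the time; the modern spelling is `PCA(svd_solver="randomized")`, and the component count here is an assumption):

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

faces = fetch_olivetti_faces()
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, random_state=0)

# RandomizedPCA-style preprocessing, with whitening as in the linked
# face recognition example.
pca = PCA(n_components=150, svd_solver="randomized", whiten=True,
          random_state=0).fit(X_train)
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
```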

@nateyoder
Author

Sounds great, guys, thanks for the suggestions. I'll give them a shot this week and post the results.

Also, I noticed my build failed, but it failed because of errors in OrthogonalMatchingPursuitCV. Do you guys know if this is an intermittent test or something I should look into?

@ogrisel
Member

ogrisel commented May 15, 2014

The travis failure is unrelated, you can ignore it.

@nateyoder
Author

Sorry for the long layoff, guys.

Finally got a chance to run amueller's MNIST example with k-means and random. As the graph shows, k-means does give some minor improvement, but nothing big. However, since it seems to almost always be a little better in the examples I tried, it might still be worth adding?

I briefly tried Olivetti, but I think because of the limited number of faces I saw a lot of variance in the output and didn't really get anything useful, other than that k-means definitely isn't a silver bullet. I didn't have time to look into LFW.

[Figure: MNIST example, k-means vs. random Nystroem]

@kastnerkyle
Member

It seems consistent from the little I have seen thus far - I will try to run some tests as well. Looks pretty nice!

@ogrisel
Member

ogrisel commented Jul 18, 2014

Thanks for the bench on MNIST. It would be great to run the same on LFW and covertype.

@djsutherland
Contributor

At first these results seemed to me at odds with the MNIST line in Table 2 of Kumar, Mohri and Talwalkar, Sampling Methods for the Nyström Method, JMLR 2012. But actually, that table is showing the kernel reconstruction "accuracy" || K - K_k ||_F / || K - \tilde{K}_k ||_F * 100, where K_k is the optimal rank-k reconstruction (the truncated SVD) and \tilde{K}_k is the rank-k Nyström approximation. I guess the kernel isn't as well approximated by the uniform reconstruction, but it's still good enough to do classification with. Might be good to make sure that's the case.

Also, it might be better to use kmeans++ initialization rather than random; did you try that?
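
A minimal numpy sketch of that reconstruction metric (the RBF kernel, and using the columns in `basis_idx` to build the Nystroem approximation, are assumptions for illustration):

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel


def reconstruction_accuracy(X, basis_idx, k, gamma=0.1):
    """|| K - K_k ||_F / || K - K_nys ||_F * 100, where K_k is the best
    rank-k approximation and K_nys the Nystroem approximation built from
    the columns indexed by basis_idx."""
    K = rbf_kernel(X, X, gamma=gamma)

    # Optimal rank-k approximation of the (symmetric PSD) kernel matrix.
    w, V = np.linalg.eigh(K)
    top = np.argsort(w)[::-1][:k]
    K_k = (V[:, top] * w[top]) @ V[:, top].T

    # Nystroem approximation: C W^+ C.T with C = K[:, basis], W = K[basis, basis].
    C = K[:, basis_idx]
    W = K[np.ix_(basis_idx, basis_idx)]
    K_nys = C @ np.linalg.pinv(W) @ C.T

    return 100.0 * np.linalg.norm(K - K_k, "fro") / np.linalg.norm(K - K_nys, "fro")
```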

@nateyoder
Author

Brief update. I ran MNIST again to compare "better" clustering with k-means [KMeans++ initialization, max_iter=300, and n_init=10] vs. k-means as suggested in the literature ['random' initialization, max_iter=5, n_init=1] vs. random Nystroem. As shown below, the much more time-intensive clustering has almost no impact on the classification performance while significantly increasing the time needed to train the model.

[Figure: k-means++ vs. cheap k-means vs. random Nystroem on MNIST]
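
For reference, a sketch of the two k-means settings being compared above (parameter values are taken from the comment; the basis size is an assumption):

```python
from sklearn.cluster import KMeans

n_components = 100  # assumed basis size

# "Better" clustering: k-means++ seeding, many iterations, several restarts.
km_full = KMeans(n_clusters=n_components, init="k-means++",
                 max_iter=300, n_init=10)

# Cheap clustering as suggested in the literature: random seeding, few iterations.
km_cheap = KMeans(n_clusters=n_components, init="random",
                  max_iter=5, n_init=1)
```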

I also did the same on LFW and the results are below. In this case k-means appears to give little to no consistent improvement over random selection. If you are interested, I used the parameters found in http://nbviewer.ipython.org/github/jakevdp/sklearn_scipy2013/blob/master/rendered_notebooks/05.1_application_to_face_recognition.ipynb, other than doing my own RBF grid search to find the optimal RBF parameters.

[Figure: LFW, k-means vs. random Nystroem]

I'll try to do the covertype test later this week if I get time and you guys think it is still needed.

@ogrisel
Member

ogrisel commented Jul 29, 2014

Can you please rebase your branch on master and try with MiniBatchKMeans? This might be faster to converge while giving good enough centroids.
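
A minimal sketch of that suggestion (`MiniBatchKMeans` is a real scikit-learn estimator; using its centers as the Nystroem basis here is illustrative, not merged code, and the dataset, basis size, and gamma are assumptions):

```python
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import load_digits
from sklearn.metrics.pairwise import rbf_kernel

X, _ = load_digits(return_X_y=True)
n_components = 100  # assumed basis size

# Much cheaper than a full KMeans fit on large datasets; the resulting
# cluster centers would serve as the Nystroem basis points.
mbk = MiniBatchKMeans(n_clusters=n_components, batch_size=1024,
                      random_state=0).fit(X)
basis = mbk.cluster_centers_

# Kernel between the data and the basis; normalizing by K(basis, basis)^{-1/2}
# would follow as in the standard Nystroem feature map.
K_nb = rbf_kernel(X, basis, gamma=0.02)
```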

@mth4saurabh
Contributor

EDIT - I will post plots and numbers soon.
@amueller @ogrisel, I extracted and used a portion of the MiniBatchKMeans class (on top of the work done in this PR); as expected, we can improve on time, but performance takes a hit for low dimensions.

@amueller
Member

hm... this actually looks good. @mth4saurabh any chance you are still interested in working on this?

@mth4saurabh
Contributor

@amueller : sure, would love to; will start on monday.

@amueller amueller added the Needs Benchmarks A tag for the issues and PRs which require some benchmarks label Aug 5, 2019
Base automatically changed from master to main January 22, 2021 10:48
@haiatn
Contributor

haiatn commented Aug 25, 2023

Since #1568 is marked as completed, and there is no evidence that K-means would be a better way, I think we can close this; if someone finds a better method they could open a new PR to solve issue #4982.

@adrinjalali adrinjalali closed this Mar 6, 2024
Labels: help wanted, Needs Benchmarks

10 participants