
Evaluating dimensionality reduction? #6

Open
DataWaveAnalytics opened this issue Oct 10, 2017 · 13 comments
Labels
Good Reads Issues that discuss important topics regarding UMAP, that provide useful code or nice visualizations

Comments

@DataWaveAnalytics

Hello Leland,

Thank you for sharing this new algorithm.
I have a question regarding evaluation measures of dimensionality reduction methods. I'm aware of trustworthiness and continuity, but I'm looking for measures that can handle large datasets.

I found the paper "Scale-independent quality criteria for dimensionality reduction", which proposes an alternative quality measure, but it still targets small datasets.

How are you evaluating UMAP against other approaches at the moment?
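For reference, scikit-learn exposes trustworthiness directly (continuity is not included). A minimal sketch, using PCA as a stand-in for whichever embedding is being evaluated — note it builds a full pairwise-distance matrix, which is exactly why it doesn't scale to large datasets:

```python
# Scoring an embedding with scikit-learn's built-in trustworthiness.
# PCA here is just a placeholder reducer; swap in t-SNE, UMAP, etc.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness

X, _ = load_digits(return_X_y=True)          # 1797 x 64, small enough for O(n^2)
Z = PCA(n_components=2).fit_transform(X)     # the embedding under evaluation

score = trustworthiness(X, Z, n_neighbors=5) # in [0, 1]; higher is better
print(round(score, 3))
```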

@lmcinnes
Owner

lmcinnes commented Oct 10, 2017 via email

@DataWaveAnalytics
Author

Thanks for the hints, Leland. I will try to implement my own version of the metrics for large datasets or, at least, put together a methodology using what is available.

BTW, there is a new MNIST-like dataset called "Fashion-MNIST" (released Aug. 2017) if you want to test against it. The authors argue we should move away from MNIST for benchmarking new algorithms (see the link for details).

@lmcinnes
Owner

lmcinnes commented Oct 11, 2017 via email

@DataWaveAnalytics
Author

I got these visualizations of Fashion-MNIST with t-SNE and LargeVis (train + test = 70,000 points, using labels only for the colors). LargeVis looks better visually, but when I evaluate both embeddings with a kNN classifier (10 runs per training %, reporting the mean), t-SNE is the better embedding for this task. (Should I trust my eyes or the numbers?)
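The kNN-classifier protocol described above can be sketched as follows (a dependency-light version: digits stands in for Fashion-MNIST, and PCA stands in for t-SNE/LargeVis — any 2D embedding slots in the same way):

```python
# Evaluate an embedding by kNN-classifier accuracy, averaged over several
# random train/test splits (here 10 runs at one training fraction).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
Z = PCA(n_components=2).fit_transform(X)   # placeholder embedding

accs = []
for seed in range(10):                     # 10 runs, mean reported, as above
    Z_tr, Z_te, y_tr, y_te = train_test_split(
        Z, y, train_size=0.8, random_state=seed)
    clf = KNeighborsClassifier(n_neighbors=10).fit(Z_tr, y_tr)
    accs.append(clf.score(Z_te, y_te))

print(round(float(np.mean(accs)), 3))
```

Repeating the loop over several training fractions (and several k values, as in the k50/k100/k500 plots) gives the accuracy curves being compared here.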

Do you have a visualization using umap that you can share?

[Attached images: fashion_mnist_tsne, fashion_mnist_largevis, and kNN-accuracy curves k50_fashion_mnist, k100_fashion_mnist, k500_fashion_mnist]

@lmcinnes
Owner

I can dig up a visualization, I think. It is closer to LargeVis in appearance, but still a little different. As to what to trust: I think you have to trust both to some extent. t-SNE does do something right, and those curves do matter, so despite the clearly better appearance of LargeVis there seems to be something deceptive going on underneath it all.

@lmcinnes
Owner

lmcinnes commented Oct 14, 2017

Here's what UMAP did:

[Attached image: UMAP embedding of Fashion-MNIST]

As I said, it is more similar to LargeVis. It is worth noting that UMAP has kept some of the groups together where LargeVis split them into multiple blobs (for example, the royal blue category in your LargeVis plot, equivalent to the pale purple in the UMAP plot). I wonder if that may affect the kNN-classifier accuracy?

I also find the banding of three classes quite interesting; the fact that all three algorithms reproduced it gives me confidence that it probably isn't an artifact of the reduction but an actual property of the data. If so ... that's quite intriguing.

@lmcinnes
Owner

I tweaked the min_dist parameter (which defines how closely the embedding should pack points together in the embedded space) to compress things less (and hence resemble the t-SNE result more) and got this:

[Attached image: UMAP embedding of Fashion-MNIST with larger min_dist]
Still very similar (up to rotation), but less aggressive in separating clusters, showing a little more of the interconnected structure. I believe this would almost certainly embed a whole lot better in 3 or 4 dimensions.

@DataWaveAnalytics
Author

Thank you for sharing; UMAP is doing great (visually). I definitely need to study the details of your implementation. Are you planning to submit a preprint soon? (Just trying to decide whether I should wait for your document or jump into the implementation instead.)

I believe a less aggressive separation would lead to better k-NN classifier performance, but we should evaluate with trustworthiness and continuity anyway (or other measures, like the scale-independent criteria).

@lmcinnes
Owner

I'm struggling to find time to shore up all the math and get the preprint done (because I really want sound explanations of why things work, which means getting good explanations well hammered out), so it will be a little while yet, unfortunately. The code may be a little hard to follow, but check the numba branch, as the code there is perhaps easier to wrap one's head around. The preprint will probably help rather a lot, though. Thanks for the extra reminder that I really need to get that done.

@lionely

lionely commented Jun 7, 2019

Is it too much to ask for a code example showing an implementation of "trustworthiness" and "continuity"? I'm trying to evaluate the quality of dimensionality reductions obtained from t-SNE.

Any help would be greatly appreciated!
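For reference, scikit-learn ships trustworthiness (`sklearn.manifold.trustworthiness`) but not continuity. Here is a minimal NumPy sketch of both, following the standard Venna & Kaski definitions — unoptimised and O(n²) in memory, so small datasets only:

```python
import numpy as np

def _rank_matrix(D):
    """rank[i, j] = position of j when i's distances are sorted ascending (self = 0)."""
    order = np.argsort(D, axis=1)
    ranks = np.empty_like(order)
    n = D.shape[0]
    ranks[np.arange(n)[:, None], order] = np.arange(n)[None, :]
    return ranks

def trustworthiness_continuity(X, Z, k=5):
    """Trustworthiness penalises points that are k-NN in the embedding Z but
    not in the original space X; continuity penalises the reverse."""
    n = len(X)
    DX = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    DZ = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    rX, rZ = _rank_matrix(DX), _rank_matrix(DZ)
    norm = 2.0 / (n * k * (2 * n - 3 * k - 1))
    t_pen = c_pen = 0.0
    for i in range(n):
        nn_X = set(np.where((rX[i] > 0) & (rX[i] <= k))[0])  # true neighbours
        nn_Z = set(np.where((rZ[i] > 0) & (rZ[i] <= k))[0])  # embedded neighbours
        t_pen += sum(rX[i, j] - k for j in nn_Z - nn_X)      # intruders
        c_pen += sum(rZ[i, j] - k for j in nn_X - nn_Z)      # missing neighbours
    return 1.0 - norm * t_pen, 1.0 - norm * c_pen

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
t, c = trustworthiness_continuity(X, X[:, :2], k=5)  # crude "embedding": first 2 dims
print(round(t, 3), round(c, 3))
```

A perfect embedding (e.g. the identity map) scores 1.0 on both; the vectorised loop-free variant in UMAP's own validation.py is the place to look for a faster version.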

@lmcinnes
Owner

lmcinnes commented Jun 8, 2019

@sleighsoft sleighsoft added the Good Reads Issues that discuss important topics regarding UMAP, that provide useful code or nice visualizations label Sep 15, 2019
@sleighsoft sleighsoft changed the title [Question] Evaluating dimensionality reduction? Evaluating dimensionality reduction? Sep 17, 2019
@hoangthienan95

hoangthienan95 commented Aug 12, 2020

Are "trustworthiness" and "continuity" still the two best measures for evaluating an embedding? In validation.py, I see the parameter max_k in the function trustworthiness_vector. How do I choose this parameter?

Also, somewhat related: you said there is some guidance on how many n_components we should choose. Any update on that? Without the metric above, I also don't know how to optimize n_components and the other parameters. TIA!

@cglopezespina

Is trustworthiness a good method for selecting UMAP parameters? It is mentioned above, but I have not seen it in any other resource.
