
Could u explain how to define the perplexity? #34

Closed
xuanzhaopeng opened this issue Aug 24, 2016 · 1 comment

Comments

@xuanzhaopeng

I found that if I define the perplexity smaller than 0, then K is always 0, because you define
`int K = (float) perplexity * 3;`, so K will always be 0.

If I define the perplexity > 0, then I get a segmentation fault... (because `sizeof(distances) != K`).

I'd like to know: can this source code still be used, or is it no longer maintained?

@lvdmaaten
Owner

How should I set the perplexity in t-SNE?

The performance of t-SNE is fairly robust under different settings of the perplexity. The most appropriate value depends on the density of your data. Loosely speaking, one could say that a larger / denser dataset requires a larger perplexity. Typical values for the perplexity range between 5 and 50.

What is perplexity anyway?

Perplexity is an information-theoretic measure defined as 2 to the power of the Shannon entropy. The perplexity of a fair die with k sides equals k. In t-SNE, the perplexity can be viewed as a knob that sets the number of effective nearest neighbors; it is comparable to the number of nearest neighbors k employed in many manifold learners.
