Variable degree of freedom is not supported for 1D #78
Comments
I finally went ahead and implemented this in my fork. I tried it on a simple example, but not yet on MNIST. Could you please try it out? If it works as expected, I will merge it into the main repo.
Here's a simple test:
Great! I will give it a try next week.
Sorry, I was a bit overwhelmed with stuff. Hoping to try it out this week...
I have finally given it a try. I tried embedding full MNIST in 1D with various values of df. It seems to work as expected, so I guess you can go ahead and merge into master! Great that you found time to implement it. One thing I was surprised to see is that I did not observe any effect until I decreased df well below .5. In 2D, df=.5 already produced a very strong effect. And the same but rescaled horizontally:
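For intuition on why smaller df produces heavier tails (and hence a stronger effect), here is a small sketch using the generic Student-t similarity kernel (1 + d²/df)^(−(df+1)/2). This parameterization is an assumption for illustration; the exact form used inside FIt-SNE's df implementation may differ.

```python
import numpy as np

def t_kernel(d, df):
    # Generic Student-t kernel with df degrees of freedom; smaller df
    # gives heavier tails, i.e. more similarity mass at large distances.
    # Illustrative parameterization only, not taken from the FIt-SNE code.
    return (1.0 + d**2 / df) ** (-(df + 1.0) / 2.0)

d = 5.0  # a moderately large embedding distance
for df in (1.0, 0.5, 0.25):
    print(f"df={df}: kernel value at d={d} is {t_kernel(d, df):.4f}")
```

At d=5 the kernel value increases as df drops from 1 to 0.25, which is the heavy-tail effect that pulls cluster structure apart in the embedding.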
Another thing is that digits don't split like they do in 2D; rather, the gaps between digits increase...
Also, didn't the embedding in 2D typically grow in size with decreasing df? Here the size decreases when df drops below 1...
Hey, what do you think? Are you planning to merge this branch, or do you want to investigate anything first?
Thanks for all the extensive testing, @dkobak. These are interesting differences between 2D and 1D. I don't have an explanation for them, but I don't think they're due to a bug. I went ahead and merged the fork.
Cool. I will add my tests above to the example Python notebook.
Added to the example notebook.
Just realized that our variable degree of freedom is not supported for 1D visualizations. Let me tell you why it could be cool to implement it when we talk later today :-)
By the way, here is the 1D MNIST tSNE that I am getting with my default settings
fast_tsne(X50, learning_rate=X50.shape[0]/12, map_dims=1, initialization=PCAinit[:,0])
where X50 is 70000x50 matrix after PCA:
It's beautiful, but the 4s and 9s (salmon/violet) get tangled up. I played a bit with the parameters but couldn't make them separate fully. It seems 1D optimisation is harder than 2D (which is no surprise).
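For reference, a minimal sketch of how inputs like X50 and PCAinit above might be prepared. This is an assumption, not taken from the thread: it uses a plain SVD-based PCA on a stand-in random matrix instead of the real 70000x784 MNIST data, and the 0.0001 rescaling of the initialization is a common convention rather than anything stated here.

```python
import numpy as np

# Stand-in for the real data matrix (for MNIST this would be 70000 x 784).
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 100))

# PCA via SVD: center, then project onto the first 50 principal axes.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X50 = Xc @ Vt[:50].T

# PCA initialization, rescaled so PC1 has standard deviation 0.0001
# (a common convention for t-SNE initialization; an assumption here).
PCAinit = X50[:, :2] / np.std(X50[:, 0]) * 0.0001

# The call from the thread would then be:
# fast_tsne(X50, learning_rate=X50.shape[0]/12, map_dims=1,
#           initialization=PCAinit[:, 0])
```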