Add option to get PCA from LSA model #12

Open · TommyJones opened this issue Mar 21, 2016 · 5 comments

@TommyJones (Owner) opened this issue Mar 21, 2016:
Add option to get PCA from LSA model

@sjankin commented Mar 21, 2016:

Are you thinking about adding a correspondence analysis (CA) option as well? Arguably, CA could tap into underlying linguistic properties a bit better than PCA.

@TommyJones (Owner, Author) commented:

I hadn't thought about it. From this paper (http://www.aclweb.org/anthology/W08-2007), it seems that this would work on a term co-occurrence matrix, not a document-term matrix, right? I have no problem implementing CA, though it depends on two things. I'll have to wait for text2vec version 0.3 to be released (coming soon) to get the term co-occurrence matrix, and I'd have to look into whether implementations of some of the intermediate methods exist for sparse matrices. (If not, I may be able to make them myself.)
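
(For reference, building the term co-occurrence matrix might look roughly like this; a sketch assuming text2vec's itoken/create_tcm interface, with `txt` as a placeholder corpus.)

```r
library(text2vec)

txt <- c("the quick brown fox jumps over the lazy dog",
         "the cat sat on the mat")

# iterator over tokenized documents
it <- itoken(txt, tokenizer = word_tokenizer)

# vocabulary-based vectorizer
vocab <- create_vocabulary(it)
vectorizer <- vocab_vectorizer(vocab)

# sparse term co-occurrence matrix with a symmetric 5-token window
tcm <- create_tcm(it, vectorizer, skip_grams_window = 5L)
dim(tcm)
```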

If you want, you can open up an issue for me to look into this. I'll do my best.


@sjankin commented Mar 22, 2016:

You can run it on a DFM directly; that's how quanteda's textmodel_ca function implements it.

It calls the ca package. Another option is the vegan package, which is widely used in ecology and has more functionality.

By the way, quanteda is another higher-level framework implementation.
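
For example (a sketch assuming quanteda's dfm() and textmodel_ca() interfaces; in newer quanteda versions textmodel_ca() has moved to the quanteda.textmodels package):

```r
library(quanteda)

# document-feature matrix from quanteda's built-in inaugural-address corpus
dfmat <- dfm(tokens(data_corpus_inaugural))

# correspondence analysis fitted directly on the DFM (wraps the ca package)
ca_fit <- textmodel_ca(dfmat)

head(ca_fit$rowcoord)  # document coordinates on the CA dimensions
```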

@TommyJones (Owner, Author) commented:

I will look at quanteda as well. I'm going to benchmark SVD from irlba, RSpectra, and quanteda, and implement whichever version seems fastest and most scalable. At the end of the day, LSA, PCA, and CA all rely on SVD, so it's just a matter of which implementation works best.
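
To illustrate (a sketch only; `dtm` is a stand-in sparse document-term matrix and 50 dimensions is an arbitrary choice):

```r
library(Matrix)
library(irlba)

# stand-in sparse document-term matrix
dtm <- rsparsematrix(nrow = 1000, ncol = 5000, density = 0.01)

# LSA: truncated SVD of the (optionally tf-idf weighted) DTM
lsa <- irlba(dtm, nv = 50)
doc_coords <- lsa$u %*% diag(lsa$d)   # documents in the latent space

# PCA: the same truncated SVD applied after column-centering
pca <- prcomp_irlba(dtm, n = 50, center = TRUE, scale. = FALSE)

# RSpectra's truncated SVD, as an alternative to benchmark against
svd_rs <- RSpectra::svds(dtm, k = 50)
```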

It seems that all three of textmineR, text2vec, and quanteda use the same data type. I am in the process of reworking textmineR to be a higher-level package built on text2vec. @dselivanov has done an amazing job creating a framework that is faster and more scalable than any other I've seen (in any language), at least on a single machine. Perhaps the quanteda maintainers would want to do the same?

My current plan (not written anywhere on GitHub) is to create wrappers for...

  • CTM and LDA based on EM from the topicmodels library (LDA based on Gibbs sampling is already imported from the lda library)
  • STM from the stm library
  • LSA/PCA/CA (testing the irlba, RSpectra, and quanteda libraries)
  • GloVe from text2vec
  • Document clustering represented as a topic model in which each document contains only a single topic
  • Others as they become available and as I find time to understand them well enough to build wrappers that put them in a similar format

The goal is a library that uses consistent syntax and returns similarly structured objects across a wide range of topic models, so users don't have to hunt them all down. My own PhD research focuses on evaluation metrics for topic models, so textmineR includes that functionality as well.
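
(As a sketch of the evaluation side, assuming textmineR's FitLdaModel()/CalcProbCoherence() interface and a pre-built sparse document-term matrix `dtm`:)

```r
library(textmineR)

# fit a topic model on an existing DTM (e.g. one built with CreateDtm())
model <- FitLdaModel(dtm = dtm, k = 10, iterations = 200)

# probabilistic coherence of each topic, scored over its top 5 terms
coherence <- CalcProbCoherence(phi = model$phi, dtm = dtm, M = 5)
summary(coherence)
```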

@sjankin commented Mar 22, 2016:

I think that sounds really good, and the combination with text2vec is great. Looking forward to seeing the development.
