To Do: #6

Open · 4 of 6 tasks
amritbhanu opened this issue Oct 18, 2016 · 4 comments

amritbhanu commented Oct 18, 2016

For Research

  • Does stability help classification, both tuned and untuned? Multi-goal: F-score and Jaccard score.
  • Important: reproduce the LDA-GA paper (https://dibt.unimol.it/reports/LDA-GA/).
  • Topic matching with LDA, and with brute force over the top 7 and 10 words.
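
A minimal sketch of the stability measure behind the Jaccard-score and topic-matching items above, assuming scikit-learn (>= 1.0 for `get_feature_names_out`): fit LDA twice with different seeds and brute-force match topics by the Jaccard overlap of their top-n (7 or 10) words. This is my own illustration, not the repo's code; `docs` is a placeholder for any raw text corpus.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def top_words(lda, vocab, n=7):
    """Top-n words of every topic, as a list of word sets."""
    return [set(vocab[np.argsort(row)[-n:]]) for row in lda.components_]

def stability(topics_a, topics_b):
    """Mean of the best Jaccard overlap for each topic in run A against run B."""
    return np.mean([max(len(ta & tb) / len(ta | tb) for tb in topics_b)
                    for ta in topics_a])

# usage (docs is any list of raw text strings):
# vec = CountVectorizer(max_features=5000, stop_words="english")
# X = vec.fit_transform(docs)
# vocab = np.array(vec.get_feature_names_out())
# a = LatentDirichletAllocation(n_components=10, random_state=1).fit(X)
# b = LatentDirichletAllocation(n_components=10, random_state=2).fit(X)
# print("stability@7:", stability(top_words(a, vocab), top_words(b, vocab)))
```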

For LN

  • Tune SVM and LDA, just for F-score (kernel, C, degree, learning rate); a tuning sketch follows this list.
  • LDA + word vectors, which will give topic-word features (part of my research as well).
  • Mutate tags by x% to see what error it introduces (heat map with tags and topics for different datasets).
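
A rough sketch of the SVM tuning item above, assuming a scikit-learn TF-IDF pipeline; the parameter grid and the `f1_macro` scorer are illustrative choices, not settings taken from this issue, and `train_docs` / `train_labels` are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

pipe = Pipeline([("tfidf", TfidfVectorizer(stop_words="english")),
                 ("svm", SVC())])

grid = {"svm__kernel": ["linear", "rbf", "poly"],
        "svm__C": [0.1, 1, 10, 100],
        "svm__degree": [2, 3, 4]}        # degree only matters for the poly kernel

search = GridSearchCV(pipe, grid, scoring="f1_macro", cv=5)
# search.fit(train_docs, train_labels)   # placeholders for any labelled text corpus
# print(search.best_params_, search.best_score_)
```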

Conclusion:

  • Feature engineering: LDA features are much better for text mining than tf or tf-idf. (Can be a paper.)
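
A hedged sketch of the comparison behind this conclusion: the same learner fed LDA topic features versus TF-IDF features, scored with macro F1. The LogisticRegression learner, feature sizes, and topic count are placeholder choices, not the repo's setup.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Pipeline A: raw TF-IDF weights straight into the classifier.
tfidf_pipe = Pipeline([("vec", TfidfVectorizer(max_features=5000, stop_words="english")),
                       ("clf", LogisticRegression(max_iter=1000))])

# Pipeline B: term counts -> LDA topic proportions -> classifier.
lda_pipe = Pipeline([("vec", CountVectorizer(max_features=5000, stop_words="english")),
                     ("lda", LatentDirichletAllocation(n_components=20, random_state=1)),
                     ("clf", LogisticRegression(max_iter=1000))])

# docs, labels: any labelled text corpus (placeholders)
# for name, pipe in [("tfidf", tfidf_pipe), ("lda", lda_pipe)]:
#     print(name, cross_val_score(pipe, docs, labels, scoring="f1_macro", cv=5).mean())
```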

amritbhanu commented Oct 18, 2016

  • tf-idf weights + k-means clustering (Raymond Mooney's seminar title: "Learning Scripts for Text Understanding with Recurrent Neural Networks")
  • LDA features + k-means
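
A minimal sketch of the two clustering setups above, assuming scikit-learn; `docs`, the cluster count, and the topic count are placeholders, not values from this issue.

```python
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

def cluster_tfidf(docs, k=10):
    """k-means on tf-idf weights."""
    X = TfidfVectorizer(stop_words="english").fit_transform(docs)
    return KMeans(n_clusters=k, random_state=1).fit_predict(X)

def cluster_lda(docs, k=10, topics=10):
    """k-means on LDA topic proportions."""
    counts = CountVectorizer(stop_words="english").fit_transform(docs)
    theta = LatentDirichletAllocation(n_components=topics,
                                      random_state=1).fit_transform(counts)
    return KMeans(n_clusters=k, random_state=1).fit_predict(theta)

# docs: any list of raw text documents (placeholder)
# labels_tfidf, labels_lda = cluster_tfidf(docs), cluster_lda(docs)
```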

LN:

amritbhanu commented Oct 20, 2016

PAPERS WHICH CAN BE MADE OUT OF IT:

  • LDA + Classification
  • LDA features compared with tf and tf-idf (compared against other text mining results), and they perform better with DE
  • LDADE results compared to LDA-GA
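
A rough sketch of the LDADE idea behind the last two bullets, assuming SciPy's differential evolution: search LDA's k, alpha, beta so that topics stay stable (Jaccard overlap of top-7 word sets) across seeds. The bounds, seeds, and budget here are illustrative, not the settings used in LDADE; `X` and `vocab` are assumed to come from a CountVectorizer as in the earlier stability sketch.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.decomposition import LatentDirichletAllocation

def jaccard_stability(a, b, vocab, n=7):
    """Mean best Jaccard overlap of top-n word sets between two fitted LDA runs."""
    tops = lambda m: [set(vocab[np.argsort(r)[-n:]]) for r in m.components_]
    tb = tops(b)
    return np.mean([max(len(x & y) / len(x | y) for y in tb) for x in tops(a)])

def make_objective(X, vocab):
    def objective(p):                        # p = [k, alpha, beta]
        k, alpha, beta = int(p[0]), p[1], p[2]
        a, b = (LatentDirichletAllocation(n_components=k, doc_topic_prior=alpha,
                                          topic_word_prior=beta, random_state=s).fit(X)
                for s in (1, 2))
        return -jaccard_stability(a, b, vocab)   # DE minimises, so negate stability
    return objective

bounds = [(5, 50), (0.01, 1.0), (0.01, 1.0)]     # k, alpha, beta ranges (illustrative)
# result = differential_evolution(make_objective(X, vocab), bounds, maxiter=5, popsize=10)
```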

amritbhanu modified the milestone: Backlogs and Papers, Oct 20, 2016

timm commented Oct 21, 2016

"LDA features compared with tf , tfidf (Compared with other text mining results) and it performs better with DE" is probably a small section in external validity

Let's see what ICSE says, but I think you have one paper showing that stability is useful for supervised and unsupervised learning.

I am really longing to see your stats tests on F2, F1, and Jaccard before and after tuning.
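
A small sketch of the before/after comparison asked for here, assuming scikit-learn metrics and SciPy's Mann-Whitney U test; the actual work may prefer a different statistical test (e.g. Scott-Knott), so treat this as an illustration, and the score lists are placeholders.

```python
from scipy.stats import mannwhitneyu
from sklearn.metrics import f1_score, fbeta_score, jaccard_score

def scores(y_true, y_pred):
    """F1, F2 and Jaccard for one fold/run."""
    return {"f1": f1_score(y_true, y_pred, average="macro"),
            "f2": fbeta_score(y_true, y_pred, beta=2, average="macro"),
            "jaccard": jaccard_score(y_true, y_pred, average="macro")}

def differs(untuned, tuned, alpha=0.05):
    """True if untuned and tuned score distributions differ (one metric at a time)."""
    _, p = mannwhitneyu(untuned, tuned, alternative="two-sided")
    return p < alpha

# untuned_f2, tuned_f2: per-fold F2 scores before and after tuning (placeholders)
# print(differs(untuned_f2, tuned_f2))
```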

amritbhanu commented Nov 8, 2016

  • Tune the magic parameters. New set of examples: which ones they want surely classified and which not surely classified.
    • Waiting for results.
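
A hedged sketch of the "surely classified / not surely classified" split mentioned above, using predicted class probabilities; the 0.9 threshold and the helper name are assumptions of mine, not values from this issue.

```python
import numpy as np

def split_by_confidence(clf, X, threshold=0.9):
    """Return indices of examples the model is sure about and unsure about."""
    proba = clf.predict_proba(X).max(axis=1)     # confidence of the predicted class
    sure = np.where(proba >= threshold)[0]       # "surely classified"
    unsure = np.where(proba < threshold)[0]      # "not surely classified"
    return sure, unsure

# usage, for any fitted probabilistic classifier `clf` and feature matrix `X`:
# sure_idx, unsure_idx = split_by_confidence(clf, X)
```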
