Added biblio entries

commit ba4e4dc799df685a2de24adeb27734ae14968e50 1 parent 85fa833
@pranjalv123 authored
Showing with 60 additions and 13 deletions.
  1. +11 −12 egpaper_final.tex
  2. +49 −1 fpbib.bib
23 egpaper_final.tex
@@ -6,6 +6,7 @@
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
+\usepackage{url}
% Include other packages here, before hyperref.
@@ -55,9 +56,9 @@ \section{Introduction}
\section{Previous Work}
-We set out to replicate Pang’s work from 2002 on using classical knowledge-free supervised machine learning techniques to perform sentiment classification. They used the machine learning methods (Naive Bayes, maximum entropy classification, and support vector machines), methods commonly used for topic classification, to explore the difference between and sentiment classification in documents. Pang cited a number of related works, but they mostly pertain to classifying documents on criteria weakly tied to sentiment or using knowledge-based sentiment classification methods. We used a similar dataset, as released by the authors, and did our best to use the same libraries and pre-processing techniques.
+We set out to replicate the work of Pang et al.~\cite{Pang} from 2002 on using classical knowledge-free supervised machine learning techniques to perform sentiment classification. They used three machine learning methods (Naive Bayes, maximum entropy classification, and support vector machines), all commonly used for topic classification, to explore the difference between topic classification and sentiment classification in documents. Pang et al.\ cited a number of related works, but those mostly pertain to classifying documents on criteria weakly tied to sentiment or to knowledge-based sentiment classification methods. We used a similar dataset, as released by the authors, and did our best to use the same libraries and pre-processing techniques.
-In addition to replicating Pang’s work as closely as we could, we extended the work by exploring an additional dataset, additional preprocessing techniques, and combining classifiers. We tested how well classifiers trained on Pang’s dataset extended to reviews in another domain. Although Pang limited many of his tests to use only the 16165 most common ngrams, advanced processors have lifted this computational constraint, and so we additionally tested on all ngrams. We use a newer parameter estimation algorithm called Limited-Memory Variable Metric (L-BFGS) for maximum entropy classification. Pang used the Improved Iterative Scaling method. We also implemented and tested the effect of term frequency-inverse document frequency (TF-IDF) on classification results.
+In addition to replicating Pang's work as closely as we could, we extended it by exploring an additional dataset, trying additional preprocessing techniques, and combining classifiers. We tested how well classifiers trained on Pang's dataset extended to reviews in another domain. Although Pang limited many of the tests to the 16,165 most common ngrams, advances in processing power have lifted this computational constraint, so we additionally tested on all ngrams. We use a newer parameter estimation algorithm called Limited-Memory Variable Metric (L-BFGS)~\cite{Liu} for maximum entropy classification, whereas Pang used the Improved Iterative Scaling method. We also implemented and tested the effect of term frequency-inverse document frequency (TF-IDF) weighting on classification results.
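For reference, the TF-IDF weighting takes its textbook form (a sketch of the standard definition; the exact variant used is not shown in this diff):
$$\mathrm{tfidf}(t,d) = \mathrm{tf}(t,d) \cdot \log\frac{N}{\mathrm{df}(t)}$$
where $\mathrm{tf}(t,d)$ is the count of term $t$ in document $d$, $\mathrm{df}(t)$ is the number of documents containing $t$, and $N$ is the total number of documents.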
\section{The User Review Domain}
For our experiments, we worked with movie reviews. Our data source was the dataset released by Pang et al.\ (\url{http://www.cs.cornell.edu/people/pabo/movie-review-data/}) from their 2004 publication. The dataset contains 1000 positive reviews and 1000 negative reviews, each labeled with its true sentiment. The original data source was the Internet Movie Database (IMDb).
@@ -69,7 +70,7 @@ \section{The User Review Domain}
\section{Machine Learning Methods}
\subsection{The Naive Bayes Classifier}
-The Naive Bayes classifier is an extremely simple classifier that relies on Bayesian probability and the assumption that feature probabilities are independent of one another.
+The Naive Bayes classifier\cite{Manning} is an extremely simple classifier that relies on Bayesian probability and the assumption that feature probabilities are independent of one another.
Bayes' Rule gives:
$$
P(C | F_1, F_2, \ldots, F_n)
@@ -100,18 +101,16 @@ \subsection{The Naive Bayes Classifier}
While the Naive Bayes classifier seems very simple, it is observed to have high predictive power; in our tests, it performed competitively with the more sophisticated classifiers we used. The Bayes classifier can also be implemented very efficiently. Its independence assumption means that it does not fall prey to the curse of dimensionality, and its running time is linear in the size of the input.
-[http://nlp.stanford.edu/IR-book/html/htmledition/naive-bayes-text-classification-1.html]
-
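To make the training and scoring concrete, here is a minimal sketch of a multinomial Naive Bayes classifier with add-one smoothing (our own illustration with hypothetical names, not the implementation used in the experiments):

import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """docs: list of token lists; labels: parallel list of class labels."""
    priors = Counter(labels)
    counts = defaultdict(Counter)              # class -> token counts
    for tokens, label in zip(docs, labels):
        counts[label].update(tokens)
    vocab = {t for c in counts.values() for t in c}
    return priors, counts, vocab

def classify_nb(tokens, priors, counts, vocab):
    """Return the class maximizing log P(C) + sum_i log P(F_i|C)."""
    total = sum(priors.values())
    best, best_score = None, float("-inf")
    for c in priors:
        denom = sum(counts[c].values()) + len(vocab)   # add-one smoothing
        score = math.log(priors[c] / total)
        for t in tokens:
            score += math.log((counts[c][t] + 1) / denom)
        if score > best_score:
            best, best_score = c, score
    return best

Training is a single pass over the tokens, which is where the linear running time noted above comes from.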
\subsection{The Maximum Entropy Classifier}
-Maximum Entropy is a general-purpose machine learning technique that provides the least biased estimate possible based on the given information. In other words, it is maximally noncommittal with regards to missing information” [src]. Importantly, it makes no conditional independence assumption between features, as the Naive Bayes classifier does.
+Maximum Entropy is a general-purpose machine learning technique that provides the least biased estimate possible based on the given information; in other words, it is ``maximally noncommittal with regard to missing information'' \cite{Jaynes}. Importantly, unlike the Naive Bayes classifier, it makes no conditional independence assumption between features.
Maximum entropy’s estimate of $P(c|d)$ takes the following exponential form:
$$P(c|d) = \frac{1}{Z(d)} \exp(\sum_i(\lambda_{i,c} F_{i,c}(d,c)))$$
-The $\lambda_{i,c}$s are feature-weigh parameters, where a large $\lambda_{i,c}$ means that $f_i$ is considered a strong indicator for class $c$. We use 30 iterations of the Limited-Memory Variable Metric (L-BFGS) parameter estimation. Pang used the Improved Iterative Scaling (IIS) method, but L-BFGS, a method that was invented after their paper was published, was found to out-perform both IIS and generalized iterative scaling (GIS), yet another parameter estimation method.
+The $\lambda_{i,c}$'s are feature-weight parameters, where a large $\lambda_{i,c}$ means that $f_i$ is considered a strong indicator for class $c$. We use 30 iterations of Limited-Memory Variable Metric (L-BFGS) parameter estimation. Pang used the Improved Iterative Scaling (IIS) method, but L-BFGS, a method invented after their paper was published, has been found to outperform both IIS and generalized iterative scaling (GIS), yet another parameter estimation method.
-We used Zhang Les (2004) Package Maximum Entropy Modeling Toolkit for Python and C++ [link] [src], with no special configuration.
+We used Zhang Le's Maximum Entropy Modeling Toolkit for Python and C++ \cite{Le}, with no special configuration.
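As an illustration of the exponential form above (a hedged sketch with hypothetical names, not the toolkit's API), the posterior for a document follows directly from a set of fitted weights:

import math

def maxent_posterior(doc_features, classes, weights):
    """Compute P(c|d) = exp(sum_i lambda_{i,c} * F_{i,c}(d,c)) / Z(d).

    doc_features: dict mapping feature name -> value in document d
    weights: dict mapping (feature name, class) -> lambda_{i,c}
    """
    scores = {c: math.exp(sum(weights.get((f, c), 0.0) * v
                              for f, v in doc_features.items()))
              for c in classes}
    z = sum(scores.values())          # the normalizer Z(d)
    return {c: s / z for c, s in scores.items()}

In our runs the weights themselves were estimated by the toolkit's L-BFGS routine; the snippet only shows how the resulting model scores a document.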
\subsection{The Support Vector Machine Classifier}
@@ -123,7 +122,7 @@ \subsection{The Support Vector Machine Classifier}
$$\forall i, \zeta_i \ge 0$$
$$\forall i, y_i (\vec{x}_i^T \cdot \vec{B} + B0) \ge 1 - \zeta_i $$
-For this paper, we use the PyML implementation of SVMs, which uses the liblinear optimizer to actually find the separating hyperplane. Of the three classifiers, this was the slowest to train, as it suffers from the curse of dimensionalit
+For this paper, we use the PyML implementation of SVMs \cite{PyML}, which uses the liblinear optimizer to find the separating hyperplane. Of the three classifiers, this was the slowest to train, as it suffers from the curse of dimensionality.
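We do not reproduce the PyML calls here; as a rough analogue (illustrative names and parameters, not our actual configuration), a liblinear-backed linear SVM over presence features can be trained as follows:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC           # wraps the liblinear optimizer

# Toy stand-ins for the review texts and their sentiment labels.
docs = ["a great movie , loved it", "a terrible movie , hated it"]
labels = [1, -1]

vectorizer = CountVectorizer(binary=True)   # presence (0/1) features
X = vectorizer.fit_transform(docs)
clf = LinearSVC(C=1.0)
clf.fit(X, labels)
print(clf.predict(X))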
\section{Experimental Setup}
We used documents from the movie review dataset and ran 3-fold cross validation in a number of test configurations. We ignored case and treated punctuation marks as separate lexical items.
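A sketch of that preprocessing step (the regular expression is our own approximation of the rule, not the exact code used):

import re

def tokenize(text):
    """Lowercase the text and treat each punctuation mark as its own token."""
    return re.findall(r"[a-z0-9']+|[^\sa-z0-9']", text.lower())

print(tokenize("Great movie, but the ending?!"))
# ['great', 'movie', ',', 'but', 'the', 'ending', '?', '!']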
@@ -140,7 +139,7 @@ \subsection{Feature Counting Method}
\subsection{Conditional Independence Assumption}
-The Bayes classifier depends on a conditional independence assumption, meaning that the model it predicts assumes that the probability of a given word is independent of the other words. Clearly, this assumption does not hold. Nevertheless, the Bayes classifier functions well, in part because the positive and negative correlations between features tend to cancel each other out [Zhang].
+The Bayes classifier depends on a conditional independence assumption, meaning that the model it learns assumes the probability of a given word is independent of the other words in the document. Clearly, this assumption does not hold. Nevertheless, the Bayes classifier functions well, in part because the positive and negative correlations between features tend to cancel each other out \cite{Zhang}.
We found a huge difference between the results of Naive Bayes and Maximum Entropy for positive testing accuracy versus negative testing accuracy. Maximum Entropy, which makes no unfounded assumptions about the data, gave very similar results for positive tests and negative tests, with a 0.2\% difference on average. On the other hand, positive and negative results from Naive Bayes, which assumes conditional independence, varied by 27.5\% on average, with the worst cases, on test configurations using frequency counts, averaging a 40\% difference. These disparities suggest that the movie dataset does not satisfy the conditional independence assumption.
@@ -168,7 +167,7 @@ \subsection{Position Tagging}
Position tagging was not helpful. For bigrams, it harmed performance by around 5\% in most cases, and for unigrams, it provided no benefit. If reviews do not actually follow the model specified, or if the model has no bearing on where the relevant data is, position tagging will be harmful because it increases the dimensionality of the input without increasing the information content.
\subsection{Part of Speech Tagging}
-We appended POS tags to every word using Oliver Mason’s Qtag program [src]. This serves as a rough way to disambiguate words that may hold different meanings in different contexts. For example, it would distinguish the different uses of “love” in ``I love this movie'' versus ``This is a love story.'' However, it turns out that word disambiguation is a much more complicated problem, as POS says nothing to distinguish between the meaning of cold in ``I was a bit cold during the movie'' and ``The cold murderer chilled my heart.''
+We appended POS tags to every word using Oliver Mason's Qtag program \cite{qtag}. This serves as a rough way to disambiguate words that may hold different meanings in different contexts. For example, it would distinguish the different uses of ``love'' in ``I love this movie'' versus ``This is a love story.'' However, it turns out that word disambiguation is a much more complicated problem, as POS says nothing to distinguish between the meaning of ``cold'' in ``I was a bit cold during the movie'' and ``The cold murderer chilled my heart.''
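Qtag is a Java tagger; as a hedged analogue of the same feature transformation (not what was actually run), NLTK's tagger can produce the equivalent word_TAG features:

import nltk   # assumes the 'averaged_perceptron_tagger' model is installed

def append_pos(tokens):
    """Turn ['I', 'love', 'this', 'movie'] into ['I_PRP', 'love_VBP', ...]."""
    return [f"{word}_{tag}" for word, tag in nltk.pos_tag(tokens)]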
Part of speech tagging was not very helpful for unigram results; in fact, the NB classifier did slightly worse with parts of speech tagged when using unigrams. However, when using bigrams, the MaxEnt and SVM classifiers did significantly better, achieving 3-4\% better accuracy with part of speech tagging when measuring frequency and presence information.
@@ -184,7 +183,7 @@ \subsection{Majority Voting}
Majority voting in some cases provided a small but significant improvement: combining the Bayes, MaxEnt, and SVM classifiers over the same data gave a three to four percent boost over the best of the individual classifiers alone.
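The combination rule itself is just a per-document majority vote; a minimal sketch (hypothetical helper, not the exact harness used):

from collections import Counter

def majority_vote(*predictions):
    """predictions: one label list per classifier, all of equal length."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*predictions)]

print(majority_vote([1, -1, 1], [1, 1, -1], [-1, 1, 1]))   # [1, 1, 1]

With three classifiers and two classes there is always a strict majority, so no tie-breaking is needed.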
\subsection{Neighboring Domain Data}
-Mostly out of curiosity, we wanted to see how our test configurations will perform when training on the movie dataset and testing on the Yelp dataset, an external out-of-domain dataset. We preprocessed the Yelp dataset such that it matched the format of the movie dataset and selected 1000 of each of the 1-5 star rating reviews. For evaluation purposes, we scored the accuracy on only 1-star and 5-star reviews, giving our testbed only high-confidence negative and positive reviews, respectively. The score was simply the average of the two accuracies.
+Mostly out of curiosity, we wanted to see how our test configurations would perform when training on the movie dataset and testing on the Yelp dataset~\cite{yelp}, an external out-of-domain dataset. We preprocessed the Yelp dataset so that it matched the format of the movie dataset and selected 1000 reviews from each of the 1-5 star ratings. For evaluation purposes, we scored the accuracy on only 1-star and 5-star reviews, giving our testbed only high-confidence negative and positive reviews, respectively. The score was simply the average of the two accuracies.
Across the board, the classifiers had a harder time with the Yelp dataset than with the movie dataset, performing between 56.0\% and 75.2\%. The respective lowest- and highest-performing configurations scored 67.0\% and 84.0\% on the movie dataset.
50 fpbib.bib
@@ -1,4 +1,4 @@
-@InProceedings{Pang+Lee+Vaithyanathan:02a,
+@InProceedings{Pang,
author = {Bo Pang and Lillian Lee and Shivakumar Vaithyanathan},
title = {Thumbs up? {Sentiment} Classification using Machine Learning Techniques},
booktitle = "Proceedings of the 2002 Conference on Empirical Methods in Natural
@@ -14,3 +14,51 @@ @inproceedings{Zhang
year = 2004
}
+@article{Liu,
+author = {Dong C. Liu and Jorge Nocedal},
+title = {On the Limited Memory {BFGS} Method for Large Scale Optimization},
+journal = {Mathematical Programming},
+volume = 45,
+pages = {503--528},
+year = 1989
+}
+
+@book{Manning,
+author = {Christopher D. Manning and Prabhakar Raghavan and Hinrich Schütze},
+title = {Introduction to Information Retrieval},
+publisher = {Cambridge University Press},
+year = 2008
+}
+
+@article{Jaynes,
+author = {E. T. Jaynes},
+title = {Information Theory and Statistical Mechanics},
+journal = {Physical Review},
+volume = 106,
+year = 1957
+}
+
+@misc{Le,
+author={Zhang Le},
+title ={Maximum Entropy Modeling Toolkit for Python and C++},
+year=2011,
+howpublished = "\url{http://homepages.inf.ed.ac.uk/lzhang10/maxent_toolkit.html}"
+}
+
+@misc{PyML,
+author={Asa Ben-Hur},
+title={PyML - Machine Learning in Python},
+year = 2011,
+howpublished = "\url{http://pyml.sourceforge.net/}"
+}
+
+@misc{qtag,
+author={Oliver Mason},
+title={QTag},
+howpublished = "\url{http://phrasys.net/uob/om/software}"
+}
+
+@misc{yelp,
+author={Yelp},
+title = {Yelp Academic Dataset},
+howpublished = "\url{http://www.yelp.com/academic_dataset}"
+}