Evaluation
davidpicard edited this page Jun 4, 2012
Evaluation metrics are encapsulated in the `Evaluator` interface:
```java
public interface Evaluator<T> {
    /**
     * Sets the classifier to use for evaluation
     * @param cls the classifier
     */
    public void setClassifier(Classifier<T> cls);

    /**
     * Sets the list of training samples on which to train the classifier
     * @param trainlist the training set
     */
    public void setTrainingSet(List<TrainingSample<T>> trainlist);

    /**
     * Sets the list of testing samples on which to evaluate the classifier
     * @param testlist the testing set
     */
    public void setTestingSet(List<TrainingSample<T>> testlist);

    /**
     * Runs the training procedure and computes the score.
     */
    public void evaluate();

    /**
     * Returns the score resulting from the evaluation
     * @return the score
     */
    public double getScore();
}
```
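To make the contract concrete, here is a minimal sketch of an implementor computing accuracy. The `Classifier` and `TrainingSample` stand-ins below are simplified placeholders for the library's types, not their exact definitions:

```java
import java.util.List;

// Simplified stand-ins for the library types (assumptions, for illustration only)
interface Classifier<T> {
    void train(List<TrainingSample<T>> list);
    double valueOf(T sample); // signed decision value
}

class TrainingSample<T> {
    final T sample;
    final int label; // +1 or -1
    TrainingSample(T sample, int label) { this.sample = sample; this.label = label; }
}

// A minimal accuracy evaluator following the Evaluator contract
class SimpleAccuracyEvaluator<T> {
    private Classifier<T> cls;
    private List<TrainingSample<T>> train;
    private List<TrainingSample<T>> test;
    private double score;

    public void setClassifier(Classifier<T> c) { this.cls = c; }
    public void setTrainingSet(List<TrainingSample<T>> l) { this.train = l; }
    public void setTestingSet(List<TrainingSample<T>> l) { this.test = l; }

    // trains the classifier, then measures the fraction of correct predictions
    public void evaluate() {
        cls.train(train);
        int correct = 0;
        for (TrainingSample<T> s : test) {
            int predicted = cls.valueOf(s.sample) >= 0 ? +1 : -1;
            if (predicted == s.label) correct++;
        }
        score = (double) correct / test.size();
    }

    public double getScore() { return score; }
}
```

Note that `evaluate()` both trains the classifier and scores it on the testing set, which is exactly the contract the interface documents.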
Currently, two metrics are available: accuracy and average precision. Both are agnostic to the data type.
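As a reminder of what the second metric measures, the following sketch computes average precision from decision values and ±1 labels. It only illustrates the metric itself and is not the library's implementation:

```java
import java.util.Arrays;

// Sketch of average precision: rank samples by decreasing score, then average
// the precision obtained at each rank where a positive sample appears.
class AveragePrecision {
    // scores: decision values; labels: +1/-1, paired with scores by index
    static double compute(double[] scores, int[] labels) {
        Integer[] idx = new Integer[scores.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // rank samples by decreasing decision value
        Arrays.sort(idx, (a, b) -> Double.compare(scores[b], scores[a]));

        int positives = 0;
        for (int l : labels) if (l > 0) positives++;

        double ap = 0;
        int tp = 0; // true positives seen so far in the ranking
        for (int rank = 0; rank < idx.length; rank++) {
            if (labels[idx[rank]] > 0) {
                tp++;
                ap += (double) tp / (rank + 1); // precision at this recall point
            }
        }
        return positives > 0 ? ap / positives : 0.0;
    }
}
```

A perfect ranking (all positives ahead of all negatives) yields an average precision of 1.0, regardless of the number of samples.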
To use the `Evaluator` interface, you need an untrained `Classifier<T> c`, a training list `List<TrainingSample<T>> train`, and a testing list `List<TrainingSample<T>> test`. Set them using the setter methods of the `Evaluator`:
```java
eval.setClassifier(c);
eval.setTrainingSet(train);
eval.setTestingSet(test);
```
Evaluation is performed by calling the `evaluate()` method. Training is handled by the `Evaluator` itself:
```java
eval.evaluate();
```
The score can then be obtained by calling the `getScore()` method:

```java
System.out.println("Accuracy: " + eval.getScore());
```