
Problem with learning and saving classifier without performing any testing #1

Closed
niedakh opened this Issue Feb 28, 2016 · 3 comments


niedakh commented Feb 28, 2016

I am trying to learn a classifier and store it for later use, without performing any testing now. I am using the following command to make sure no test split is done (without it, Meka reported a test instance count larger than 0):

/usr/bin/java -cp "/home/niedakh/scikit/meka/meka-release-1.9.0/lib/*" meka.classifiers.multilabel.LC -W weka.classifiers.bayes.NaiveBayes -threshold 0 -verbosity 5 -split-percentage 100 -t ~/engine/scikit-multilearn/meka/data/scene-train.arff -d classifier.dump

I am receiving an error:

java.lang.ArrayIndexOutOfBoundsException: 0
    at meka.core.MLEvalUtils.getMLStats(MLEvalUtils.java:57)
    at meka.core.Result.getStats(Result.java:289)
    at meka.classifiers.multilabel.Evaluation.evaluateModel(Evaluation.java:263)
    at meka.classifiers.multilabel.Evaluation.runExperiment(Evaluation.java:187)
    at meka.classifiers.multilabel.ProblemTransformationMethod.runClassifier(ProblemTransformationMethod.java:172)
    at meka.classifiers.multilabel.ProblemTransformationMethod.evaluation(ProblemTransformationMethod.java:152)
    at meka.classifiers.multilabel.LC.main(LC.java:148)


joergwicker commented Feb 29, 2016

Meka tried to do an evaluation in any case, and it ended up with an empty test set and empty results. As a quick solution, I added a check: if the train or test set is empty, Meka does not evaluate and just trains on the full set. I pushed it to the 1.9.1-SNAPSHOT. But we might change the options and add a flag for evaluation-only or training-only.


jmread (Contributor) commented Feb 29, 2016

That's a good point actually. Nice to have that fix, Joerg. Another quick work-around for 1.9.0 is to simply specify the dataset again with the -T flag (for the test set), for example: -t dataset.arff -d classifier.dump -T dataset.arff. Meka will train on the full dataset.arff and then dump the classifier to disk. You can ignore the evaluation, and use a different test set when you load the classifier from disk again. But better to have the fix :-)
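
For instance, adapting the reporter's original invocation (same classpath and data paths), this workaround would look roughly like:

/usr/bin/java -cp "/home/niedakh/scikit/meka/meka-release-1.9.0/lib/*" meka.classifiers.multilabel.LC -W weka.classifiers.bayes.NaiveBayes -t ~/engine/scikit-multilearn/meka/data/scene-train.arff -T ~/engine/scikit-multilearn/meka/data/scene-train.arff -d classifier.dump

Meka trains on the full training file, runs a (redundant) evaluation on the same data, and writes the model to classifier.dump; the evaluation output can simply be ignored.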



fracpete (Member) commented Mar 29, 2018

Closing this issue; note that you can use the -no-eval flag to suppress evaluation.
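
As an illustration (a sketch based on this thread, not a verified run), with a Meka version that includes -no-eval, the reporter's original command simplifies to:

/usr/bin/java -cp "/home/niedakh/scikit/meka/meka-release-1.9.0/lib/*" meka.classifiers.multilabel.LC -W weka.classifiers.bayes.NaiveBayes -t ~/engine/scikit-multilearn/meka/data/scene-train.arff -d classifier.dump -no-eval

The dumped model can then be evaluated later on a separate test set, as jmread describes above. Assuming Meka's standard -l option for loading a saved model (not confirmed in this thread) and a hypothetical test file scene-test.arff, that later step would look roughly like:

/usr/bin/java -cp "/home/niedakh/scikit/meka/meka-release-1.9.0/lib/*" meka.classifiers.multilabel.LC -l classifier.dump -T ~/engine/scikit-multilearn/meka/data/scene-test.arff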

@fracpete fracpete closed this Mar 29, 2018
