
GH-2118: Return predictions during evaluation #2162

Conversation

MLDLMFZ
Contributor

@MLDLMFZ MLDLMFZ commented Mar 17, 2021

In the evaluate method of models, add an option to return the predictions obtained on the eval data.

MLDLMFZ and others added 7 commits March 8, 2021 17:37
…evaluation

# Conflicts:
#	flair/models/text_classification_model.py
…ns-during-evaluation' into flairNLPGH-2118-return-predictions-during-evaluation

# Conflicts:
#	flair/models/text_classification_model.py
Problem with the new way of returning predictions: the predictions are only contained in the sentences if memory_mode="full" was chosen when the ClassificationDataset was created as part of the ClassificationCorpus. If the ClassificationCorpus uses memory mode "partial", the predicted labels are never retained in the sentences, so the optional removal has no effect; in that case the predictions are not accessible outside the eval routine regardless of whether return_predictions is True or False. TODO: fix this.
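To illustrate the issue described above, here is a minimal, stdlib-only sketch (no flair dependency) of why labels attached to sentences during evaluation survive only when the dataset keeps its Sentence objects in memory. The class names and the simplified evaluate loop are illustrative assumptions modeled on the description, not the actual flair implementation.

```python
# Illustrative sketch of the "full" vs. "partial" memory-mode behavior.
# All names here are hypothetical stand-ins, not the real flair classes.

class Sentence:
    def __init__(self, text):
        self.text = text
        self.labels = []  # predicted labels get appended here during eval


class FullMemoryDataset:
    """memory_mode='full' analogue: Sentence objects persist in memory,
    so labels added during evaluation remain visible afterwards."""
    def __init__(self, texts):
        self._sentences = [Sentence(t) for t in texts]

    def __getitem__(self, i):
        return self._sentences[i]


class PartialMemoryDataset:
    """memory_mode='partial' analogue: only raw text is stored and a fresh
    Sentence is built on every access, so labels attached to a returned
    Sentence are lost as soon as it goes out of scope."""
    def __init__(self, texts):
        self._texts = texts

    def __getitem__(self, i):
        return Sentence(self._texts[i])


def evaluate(dataset, n, return_predictions=False):
    # Simulated eval loop: attach a predicted label to each sentence.
    for i in range(n):
        dataset[i].labels.append("PREDICTED")
    if not return_predictions:
        # Optional removal of predictions -- only effective if the
        # same Sentence objects persist between accesses.
        for i in range(n):
            dataset[i].labels.clear()


full = FullMemoryDataset(["a", "b"])
evaluate(full, 2, return_predictions=True)
print(full[0].labels)     # ['PREDICTED'] -- predictions persist

partial = PartialMemoryDataset(["a", "b"])
evaluate(partial, 2, return_predictions=True)
print(partial[0].labels)  # [] -- each access rebuilds the Sentence
```

In the "partial" case the labels are written to throwaway objects, which is why return_predictions=True has no observable effect there.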
@alanakbik
Collaborator

@MLDLMFZ thanks for adding this!

@alanakbik alanakbik merged commit f1db12b into flairNLP:master Apr 7, 2021