Displaying metrics from results on CoNLL #14
It seems that you canceled the model's prediction. This command will evaluate the model on the test set.
The code gets stuck in the
Hi again @wangxinyu0922, so I managed to find what the issue was: I had to add the folder. Now I have the same problem as before, where the program keeps terminating after 4 mins and 53 seconds of execution; you can see the output below.
It's strange. I have never used Colab before, so I do not know the reason. From your log, the program terminated while reading the model, so I suspect there is a CPU memory limit on Colab and it automatically kills the program when reading the model.
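If a memory limit is the suspect, one quick sanity check (not part of this repository) is to log available RAM just before loading the model; a minimal stdlib-only sketch, assuming a Linux environment such as Colab:

```python
import os

# Query physical memory via sysconf (available on Linux, including Colab).
page_size = os.sysconf("SC_PAGE_SIZE")
total = os.sysconf("SC_PHYS_PAGES") * page_size
available = os.sysconf("SC_AVPHYS_PAGES") * page_size

# If this number is close to the model's checkpoint size, the process
# may be killed by the OS while deserializing the model.
print(f"Available RAM: {available / 1024**3:.2f} GiB of {total / 1024**3:.2f} GiB")
```

If the process dies without a Python traceback, the kernel's OOM killer is a likely cause and leaves a trace in `dmesg` rather than in the program's own log.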
Thanks for your reply. This seems highly unlikely, as Colab wouldn't just terminate execution due to insufficient resources without an error message informing you that there are not enough resources to complete the process. Anyway, I will attempt to run the code on my machine to see if I get the same results, and if I do I'll write back for further assistance. Thank you very much for helping me this far.
@Dimiftb How is your progress on running locally?
Closing because of no response from the OP.
Hi there,
So I believe I successfully managed to run your best model on CoNLL; however, I was wondering how I can go about getting the actual evaluation metrics, e.g. Precision, Recall and F1?
The current output that I have when running

    python train.py --config config/conll_03_english.yaml --test

can be seen below:
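If the test run only prints predictions, the metrics in question can be derived from the predicted and gold entity spans. The following is a minimal sketch (not this repository's own evaluation code) of span-level Precision, Recall and F1 under the usual CoNLL convention, where a prediction counts only on an exact boundary-and-label match:

```python
def span_f1(gold_spans, pred_spans):
    """Compute span-level precision, recall and F1.

    Each argument is a collection of (start, end, label) tuples;
    only exact boundary-and-label matches count as true positives.
    """
    gold, pred = set(gold_spans), set(pred_spans)
    tp = len(gold & pred)  # exact matches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical spans for illustration: one match, one label mismatch.
gold = [(0, 2, "PER"), (5, 6, "LOC")]
pred = [(0, 2, "PER"), (5, 6, "ORG")]
p, r, f = span_f1(gold, pred)
print(f"P={p:.2f} R={r:.2f} F1={f:.2f}")  # P=0.50 R=0.50 F1=0.50
```

In practice a library such as seqeval computes the same entity-level scores directly from BIO tag sequences, which saves converting tags to spans by hand.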