
Tuning with respect to F1? #68

Open
danyaljj opened this issue May 27, 2016 · 1 comment

@danyaljj
Member

Is there a systematic way to tune a classifier's parameters (say, the output threshold) to maximize its F1?

@cowchipkid
Contributor

The NerBenchmark thingy I wrote was actually intended for this sort of purpose. Doing parameter sweeps of this nature is something I have seen people do before to improve results. However, for the NER benchmark you had to create a configuration file for each experiment you wanted to run, and then go back and compare the results once they all completed. I think it would be really cool to be able to specify a parameter, a range of values, and an increment, and have the system just go and run them all.
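
For reference, here is a minimal sketch of the kind of sweep described above, written in plain Java and independent of NerBenchmark's actual configuration format: step a decision threshold through a range with a fixed increment, compute F1 on a development set at each step, and keep the best value. The score and gold-label arrays are hypothetical stand-ins for the classifier's confidence outputs.

```java
// Sketch only: sweep a decision threshold and pick the one maximizing F1.
public class ThresholdSweep {

    /** F1 at a given threshold, given gold labels and predicted scores. */
    static double f1At(double threshold, double[] scores, boolean[] gold) {
        int tp = 0, fp = 0, fn = 0;
        for (int i = 0; i < scores.length; i++) {
            boolean predicted = scores[i] >= threshold;
            if (predicted && gold[i]) tp++;
            else if (predicted && !gold[i]) fp++;
            else if (!predicted && gold[i]) fn++;
        }
        double precision = (tp + fp == 0) ? 0.0 : (double) tp / (tp + fp);
        double recall    = (tp + fn == 0) ? 0.0 : (double) tp / (tp + fn);
        return (precision + recall == 0) ? 0.0
                : 2 * precision * recall / (precision + recall);
    }

    public static void main(String[] args) {
        // Hypothetical dev-set scores and gold labels; in practice these would
        // come from the classifier being tuned.
        double[] scores = {0.95, 0.80, 0.65, 0.40, 0.30, 0.10};
        boolean[] gold  = {true, true, false, true, false, false};

        // Sweep the threshold from 0 to 1 in increments of 0.05.
        double bestThreshold = 0.0, bestF1 = -1.0;
        for (int step = 0; step <= 20; step++) {
            double t = step * 0.05;
            double f1 = f1At(t, scores, gold);
            if (f1 > bestF1) {
                bestF1 = f1;
                bestThreshold = t;
            }
        }
        System.out.printf("best threshold = %.2f, F1 = %.3f%n", bestThreshold, bestF1);
    }
}
```

The same loop generalizes to any single hyperparameter: replace the threshold with the parameter being swept, retrain or re-evaluate at each value, and compare the resulting F1 scores.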

