Is there a systematic way to tune a classifier's parameters (say, the output threshold, etc.) to maximize its F1?
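For the output-threshold case in particular, here is a minimal sketch of the usual approach: sweep candidate thresholds over scores on a held-out dev set and keep the one with the best F1. scikit-learn and the variable names are just for illustration, not part of NerBenchmark:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def best_f1_threshold(y_true, y_scores):
    """Pick the decision threshold with the highest F1 on held-out data."""
    precision, recall, thresholds = precision_recall_curve(y_true, y_scores)
    # precision/recall have one more entry than thresholds, so drop the last
    # point; the small epsilon guards against 0/0 at extreme thresholds.
    f1 = 2 * precision[:-1] * recall[:-1] / (precision[:-1] + recall[:-1] + 1e-12)
    best = int(np.argmax(f1))
    return thresholds[best], f1[best]

# Dummy data standing in for dev-set labels and classifier scores.
y_true = np.array([0, 0, 1, 1, 1, 0, 1])
y_scores = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.2, 0.9])
threshold, f1 = best_f1_threshold(y_true, y_scores)
print(f"best threshold = {threshold:.2f}, F1 = {f1:.3f}")
```

The important caveat is to tune the threshold on a dev set and report F1 on a separate test set, otherwise part of the gain is just overfitting to the evaluation data.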
The NerBenchmark thingy I wrote was actually intended for exactly this purpose. Parameter sweeps of this kind are something I have seen people do before to improve results. However, for the NER benchmark you had to create a configuration file for each experiment you wanted to run, and then go back and compare the results once they had all completed. I think it would be really cool to be able to specify a parameter, a range of values, and an increment, and have the system just go and run them all; see the sketch below.
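Roughly what I have in mind, as a sketch: `run_experiment` and the config dict are hypothetical stand-ins, and the real hook would depend on how NerBenchmark is actually invoked:

```python
# run_experiment() is a hypothetical stand-in: a real version would write out
# the NerBenchmark configuration file, launch the run, and parse the F1 score
# from its output.
def run_experiment(config: dict) -> float:
    raise NotImplementedError

def sweep(base_config: dict, param: str, start: float, stop: float, step: float):
    """Run one experiment per value of `param`; return (value, F1) pairs, best first."""
    count = round((stop - start) / step)
    values = [start + i * step for i in range(count + 1)]  # inclusive of stop
    results = [(v, run_experiment({**base_config, param: v})) for v in values]
    return sorted(results, key=lambda r: r[1], reverse=True)

# e.g. sweep(base_config, "threshold", 0.1, 0.9, 0.05)
```

That way one command replaces writing a dozen near-identical configuration files and comparing the results by hand.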