Autotune convergence strategy #891
I was checking the Autotune implementation and I'm trying to figure out the strategy used by fastText for the search.
For each parameter, the Autotuner has an updater (method
Each parameter has a specific range for the
Updates for each coefficient can be
After each validation run (each using a different combination of parameters), one score (f1-score only) is stored, and the best-scoring combination of parameters is then used to train the full model.
Is that correct, or am I missing something?
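For what it's worth, the loop described above can be sketched roughly like this. This is an illustrative Python stand-in, not fastText's actual implementation (which is C++); the parameter ranges and the `validate()` function are made up for the example.

```python
import random

# Hypothetical search space; the real ranges live in fastText's source.
SEARCH_SPACE = {
    "lr": (0.01, 1.0),
    "epoch": (1, 100),
    "wordNgrams": (1, 3),
}

def sample_params(rng):
    """Draw one candidate combination from the ranges above."""
    return {
        "lr": rng.uniform(*SEARCH_SPACE["lr"]),
        "epoch": rng.randint(*SEARCH_SPACE["epoch"]),
        "wordNgrams": rng.randint(*SEARCH_SPACE["wordNgrams"]),
    }

def validate(params):
    """Stand-in for training on the train split and scoring f1 on the
    validation split; here just a deterministic dummy score."""
    return 1.0 / (1.0 + abs(params["lr"] - 0.1)
                      + abs(params["epoch"] - 25) / 100.0)

def autotune(n_trials=50, seed=0):
    rng = random.Random(seed)
    best_score, best_params = -1.0, None
    for _ in range(n_trials):
        params = sample_params(rng)
        score = validate(params)      # f1 on the validation set
        if score > best_score:        # keep only the best combination
            best_score, best_params = score, params
    # best_params would then be used to train the final model
    return best_score, best_params
```

The key point, as described in the question: only the single best-scoring combination survives, and it is used for the final full training run.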
Hi @Allenlaobai7 ,
For the moment, I suggest you modify the source code, which is pretty straightforward.
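If the goal is to keep autotune from trying very large epoch values, one possible modification is to clamp whatever the updater samples. The sketch below is a hedged Python stand-in for that idea only; fastText's real updater is in its C++ source, and `EPOCH_CAP` is a hypothetical user-chosen bound, not an existing option.

```python
import random

EPOCH_CAP = 30  # hypothetical upper bound chosen by the user

def sample_epoch(rng, lo=1, hi=100):
    """Stand-in for an epoch updater: sample, then clamp to the cap."""
    return min(rng.randint(lo, hi), EPOCH_CAP)
```

With a clamp like this applied at the point where the candidate value is produced, no trial can exceed the cap regardless of what the updater draws.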
@Celebio Thank you for the prompt response! I will give it a try. I asked because a single trial with 87 epochs took me 1.5 hours, which means autotune needs a very long time to run.
Update: I changed the code but am still getting trials with epoch=87. Any ideas?