High variance in solution quality #76
As can be seen from the submission benchmark, there's a lot of variance in HGS. I didn't benchmark the static problem properly (using only a single seed), which led me to believe that our algorithm was outperforming the original baseline. It kind of is, but also not. …
For the curious, here are three plots showing the variance in objective value over 32 runs, each with a different seed. Runs stop after 10K iterations without improvement (so disregarding max runtime and max iterations). Gaps are w.r.t. the best known solutions, which I have collected over time. Ignore the outliers with 3+% mean gap; those were a result of …
Briefly comparing the averages of my last benchmark run (ten different seeds): the average cost was …
In #33 I also briefly commented on the standard deviations of the dynamic runs. Those are much more variable than the static ones. |
Note to self: make a pull request of the notebook that I used to analyze the variance in quality. |
I don't have time to do this anymore. |
When running experiments with the same settings, I notice that there is high variance in solution quality across different runs. See e.g. the plots in #75, where restarting can sometimes be very good and sometimes be very bad.
I think it's worthwhile to let the algorithm run many times (without restarting) and analyze the solutions.
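The gap analysis described above (objective gap per seed w.r.t. the best known solution, then mean and spread per instance) could be sketched roughly as follows. This is only an illustration: the instance names, costs, and best-known values are made up, and in practice they would come from the benchmark logs.

```python
# Sketch of a per-seed variance analysis.
# All instance names, costs, and BKS values below are hypothetical;
# real values would be read from the benchmark output.
from statistics import mean, stdev

def gap_pct(cost: float, bks: float) -> float:
    """Optimality gap in percent w.r.t. the best known solution (BKS)."""
    return 100.0 * (cost - bks) / bks

# costs[instance] = list of objective values, one per seed
costs = {
    "instance_a": [10150.0, 10230.0, 10080.0, 10410.0],
    "instance_b": [20480.0, 20310.0, 20890.0, 20350.0],
}
bks = {"instance_a": 10000.0, "instance_b": 20000.0}

for name, runs in costs.items():
    gaps = [gap_pct(c, bks[name]) for c in runs]
    print(f"{name}: mean gap {mean(gaps):.2f}%, std {stdev(gaps):.2f}%")
```

Collecting the per-seed gaps like this makes it easy to see whether the variance comes from a few outlier seeds or is spread across all runs.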