Evaluation Engine on main server stuck for 2 days? #400
Since the DB update we are recalculating all runs. You don't see it on the front page (the results are still in the Elasticsearch cache), but on the backend everything is being recalculated.

If I go into the office today (big if), I will make sure that recent runs get evaluated with priority.
On 25 Mar 2017 12:25, "giuseppec" wrote:

E.g. the run https://www.openml.org/r/1853500 does not have evaluations. This seems to be the case for runs since the middle of last week. Evaluation Engine possibly crashed?

Do you have an automatic notification mechanism that notifies you when this is happening? Or do we still have to write issues when we see that this happens?
Assigning higher priority to recent runs would be great. You can close this issue when this is done. Thanks!
... And running. Let's close this when the eval engine is done recalculating all evals. Maybe there are other people who need some calculations done with priority.
Could you please manually have the eval engine process the runs from https://www.openml.org/s/30? Currently, evaluation/list does not show the results of these runs and I really need them.
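For anyone else waiting on this: a quick way to poll whether a run's evaluations have landed is to query evaluation/list directly. A minimal sketch, assuming the OpenML v1 JSON API layout (the has_evaluations helper and the exact response keys are my assumptions, not a confirmed contract):

```python
import requests

def has_evaluations(run_id):
    # Query the v1 JSON API, filtering evaluation/list by run id.
    url = f"https://www.openml.org/api/v1/json/evaluation/list/run/{run_id}"
    resp = requests.get(url)
    if resp.status_code != 200:
        # Assumption: the API answers with an error document (non-200)
        # when a run has no evaluations yet.
        return False
    return "evaluations" in resp.json()  # assumed top-level key on success

# Run mentioned in this thread; extend with the run ids from the study.
for run_id in (1853500,):
    state = "evaluated" if has_evaluations(run_id) else "still pending"
    print(run_id, state)
```

Note this only tells you whether anything is listed for a run; it doesn't distinguish "still queued" from "failed".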
Got some nodes running. Depending on the number of runs, it should go fast.
Apparently, it's already done. Could you check?