use Jobs when scheduling work #12
Is there a mechanism to stop a job after a few tries? As we do not have full control over the parameters supplied, can we mark a job that cannot succeed as failed - e.g. the given image cannot be pulled?
That's not our job; the platform should take that work off of us ;)
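Kubernetes (and therefore OpenShift) Jobs already cover this: `spec.backoffLimit` caps the number of retries before the Job is marked `Failed`, and `spec.activeDeadlineSeconds` puts a hard wall-clock limit on the whole Job. A minimal sketch (the names and image are placeholders, not the actual workload):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-analysis        # placeholder name
spec:
  backoffLimit: 3               # give up after 3 failed attempts
  activeDeadlineSeconds: 3600   # fail the Job outright after one hour
  template:
    spec:
      restartPolicy: Never      # let the Job controller handle retries
      containers:
        - name: analyzer
          image: example/analyzer:latest   # placeholder image
```

Note that an unpullable image keeps the pod in `ImagePullBackOff` rather than counting as a failed attempt, so `activeDeadlineSeconds` is the safety net for that case.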
@fridex can you list the conditions that must be met to delete an old job? In general the job strives to succeed; if the job cannot pull an image, that might be an application-level error, but not a job-level error.
The current implementation waits 7 days. After that the given pod object is deleted from OpenShift.
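On newer Kubernetes/OpenShift versions, that 7-day cleanup could be delegated to the platform itself via `spec.ttlSecondsAfterFinished` (a sketch, assuming the TTL-after-finished feature is available in the cluster; the name is a placeholder):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-analysis         # placeholder name
spec:
  ttlSecondsAfterFinished: 604800   # 7 days; the Job and its pods are then deleted automatically
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: analyzer
          image: example/analyzer:latest   # placeholder image
```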
Ack, we can definitely do that. We will need to slightly redesign how we report status on the API endpoint. Something like: 1.) check if the given analysis has an entry in the graph database. In this case we could report more detailed information about the analysis status:
This way we can move to using purely Jobs. As of now, we just report the OpenShift pod status. Just a detail - checks 1. and 2. can be done in parallel.
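The parallel-check idea above can be sketched with `concurrent.futures`; the two lookup functions here are hypothetical stubs standing in for the real graph-database and OpenShift queries:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins: the real service would query the graph database
# and the OpenShift API, respectively.
def has_graph_entry(analysis_id):
    return analysis_id in {"analysis-1"}           # stubbed graph-database lookup

def get_job_status(analysis_id):
    return {"analysis-2": "Running"}.get(analysis_id)  # stubbed OpenShift Job lookup

def analysis_status(analysis_id):
    """Run both checks in parallel and derive a user-facing status."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        in_graph = pool.submit(has_graph_entry, analysis_id)
        job_status = pool.submit(get_job_status, analysis_id)
        if in_graph.result():
            return "finished"           # results already synced to the graph database
        if job_status.result() is not None:
            return job_status.result()  # still scheduled/running in OpenShift
        return "not found"

print(analysis_status("analysis-1"))  # finished
print(analysis_status("analysis-2"))  # Running
print(analysis_status("analysis-3"))  # not found
```

Both futures are submitted before either result is awaited, so the two lookups overlap rather than run sequentially.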
This is already done, let's close this issue.
If we turn all the workloads that some components schedule into Jobs rather than plain Pods, OpenShift will take care of cleaning them up.
The cleanup-job could be removed...