k8s: fix timeout problem
* Leaves `watch` operation open without any timeout.

* Uses the `Background` propagation policy in the Kubernetes
  `DeleteOptions` object so that orphan deletion happens asynchronously.
  `DeleteOptions` is the recommended way of handling deletions of
  Kubernetes objects; see
  https://v1-9.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.9/#deleteoptions-v1-meta.
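The difference between the two propagation policies can be sketched with a toy simulation (the job/pod names mirror the Kubernetes objects, but the ordering logic below is purely illustrative, not the garbage collector's actual implementation):

```python
def delete_job(job, pods, policy):
    """Return the order in which objects are removed under each policy.

    Foreground: dependents (pods) are deleted first, then the owner (job),
    so the delete call effectively blocks on its dependents.
    Background: the owner is deleted immediately and the garbage collector
    removes the dependents afterwards, asynchronously.
    """
    if policy == "Foreground":
        return pods + [job]
    elif policy == "Background":
        return [job] + pods
    raise ValueError("unknown propagation policy: " + policy)


# Background returns control as soon as the job itself is gone.
print(delete_job("job-1", ["pod-a", "pod-b"], "Background"))
# → ['job-1', 'pod-a', 'pod-b']
```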
Diego Rodriguez committed Jun 29, 2018
1 parent 347f218 commit db72aed
Showing 1 changed file with 2 additions and 3 deletions.
5 changes: 2 additions & 3 deletions reana_job_controller/k8s.py
@@ -174,8 +174,7 @@ def watch_jobs(job_db):
     try:
         w = watch.Watch()
         for event in w.stream(
-                batchv1_api_client.list_job_for_all_namespaces,
-                _request_timeout=60):
+                batchv1_api_client.list_job_for_all_namespaces):
             logging.info(
                 'New Job event received: {0}'.format(event['type']))
             job = event['object']
@@ -233,7 +232,7 @@ def watch_jobs(job_db):
                     job.metadata.name))
                 # Delete all depending pods.
                 delete_options = V1DeleteOptions(
-                    propagation_policy='Foreground')
+                    propagation_policy='Background')
                 batchv1_api_client.delete_namespaced_job(
                     job.metadata.name, job.metadata.namespace, delete_options)
                 job_db[job.metadata.name]['deleted'] = True
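Why dropping `_request_timeout=60` matters can be illustrated with a small simulation of a long-lived watch stream (a hypothetical helper, not part of the kubernetes client): with a request timeout, any quiet period longer than the timeout tears down the connection and later events are lost; without one, the stream stays open.

```python
def consume_stream(event_times, request_timeout=None):
    """Simulate reading a watch stream.

    event_times: seconds (since connect) at which events arrive.
    request_timeout: if set, the stream is aborted once the gap between
    consecutive events exceeds the timeout, mimicking _request_timeout.
    Returns the list of event times actually observed.
    """
    seen = []
    last = 0
    for t in event_times:
        if request_timeout is not None and t - last > request_timeout:
            break  # connection dropped; later events are lost
        seen.append(t)
        last = t
    return seen


# A 150-second quiet period kills a 60-second-timeout stream...
print(consume_stream([10, 50, 200], request_timeout=60))  # → [10, 50]
# ...but a stream with no timeout sees every event.
print(consume_stream([10, 50, 200]))                      # → [10, 50, 200]
```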
