
fix(kuma-dp) remove the dataplane of any completed Pod #1576

Merged
merged 1 commit into master from fix/remove_complete_job_dataplane on Feb 18, 2021

Conversation

nickolaev (Contributor)

Summary

Now that we support Jobs, we also need to make sure there are no offline dataplane proxies left behind by Jobs that have completed while their Pods still exist.
Delete the relevant Dataplane resource and make sure it does not reappear once the containers in the Pod have terminated.
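
For illustration only, here is a minimal sketch of the idea in controller-runtime-style Go. This is not the actual diff: the container-status check, the "kuma-sidecar" container name, and the helper names are assumptions made for this sketch.

```go
// Hypothetical sketch, not the code from this PR: delete the Dataplane that
// mirrors a Pod once all of its application containers (everything except
// the kuma-sidecar) have terminated, e.g. a finished Job Pod.
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	kube_client "sigs.k8s.io/controller-runtime/pkg/client"
)

// podCompleted reports whether every non-sidecar container in the Pod has
// terminated. The sidecar container name "kuma-sidecar" is an assumption.
func podCompleted(pod *corev1.Pod) bool {
	terminated := 0
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.Name == "kuma-sidecar" {
			continue
		}
		if cs.State.Terminated == nil {
			return false
		}
		terminated++
	}
	return terminated > 0
}

// deleteDataplaneOfCompletedPod removes the Dataplane resource that belongs
// to a completed Pod so it does not linger as an offline proxy; `dataplane`
// stands in for Kuma's Dataplane CRD object for that Pod.
func deleteDataplaneOfCompletedPod(ctx context.Context, c kube_client.Client, pod *corev1.Pod, dataplane kube_client.Object) error {
	if !podCompleted(pod) {
		return nil
	}
	return kube_client.IgnoreNotFound(c.Delete(ctx, dataplane))
}
```

The real controller also has to make sure the Dataplane is not recreated on later reconciliations of the same Pod, as noted above.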

Documentation

  • Not needed

Signed-off-by: Nikolay Nikolaev <nikolay.nikolaev@konghq.com>
@jakubdyszkiewicz (Contributor) left a comment


it's fix(kuma-cp)

@nickolaev changed the title from fix(*) remove the dataplane of any completed Pod to fix(kuma-dp) remove the dataplane of any completed Pod on Feb 18, 2021
@lobkovilya (Contributor)

To be honest, I don't think it's Kuma's duty to clean up Dataplanes for completed Pods. The Dataplane already has an owner reference to the Pod; if the Pod exists, then it's fine that the Dataplane exists. If the Pod terminated, then the Dataplane should reflect it with health: ready: false, and I think that's what happens right now. When you create a Job or CronJob, there are mechanisms like successfulJobsHistoryLimit or ttlSecondsAfterFinished to clean up finished Pods.
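
For reference, the TTL-based cleanup mentioned above is configured on the Job spec itself. A minimal sketch using the Kubernetes Go API types (the Job name, image, and TTL value are just examples):

```go
// Illustrative only: a Job whose Pods (and the Job itself) are
// garbage-collected 60 seconds after completion via ttlSecondsAfterFinished.
package example

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func exampleJob() *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "one-off-task"},
		Spec: batchv1.JobSpec{
			// Delete the finished Job and its Pods 60s after completion.
			TTLSecondsAfterFinished: int32Ptr(60),
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{
						{Name: "task", Image: "busybox", Command: []string{"true"}},
					},
				},
			},
		},
	}
}
```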

@nickolaev (Contributor, Author)

> To be honest, I don't think it's Kuma's duty to clean up Dataplanes for completed Pods. The Dataplane already has an owner reference to the Pod; if the Pod exists, then it's fine that the Dataplane exists. If the Pod terminated, then the Dataplane should reflect it with health: ready: false, and I think that's what happens right now. When you create a Job or CronJob, there are mechanisms like successfulJobsHistoryLimit or ttlSecondsAfterFinished to clean up finished Pods.

You are absolutely right about everything you enumerated. However, there is one small detail that is the actual reason for this PR: Jobs are short-lived workloads and are expected to complete relatively quickly. Yes, there are mechanisms to clean up Jobs, yet if they are not set, or even before they kick in, one will get unhealthy dataplanes for no reason other than the fact that the Job is meant to just end.

@nickolaev merged commit 485dd70 into master on Feb 18, 2021
@nickolaev deleted the fix/remove_complete_job_dataplane branch on February 18, 2021 16:56
mergify bot pushed a commit that referenced this pull request Feb 18, 2021
Signed-off-by: Nikolay Nikolaev <nikolay.nikolaev@konghq.com>
(cherry picked from commit 485dd70)

# Conflicts:
#	go.sum
#	pkg/plugins/resources/k8s/native/go.sum
nickolaev pushed a commit that referenced this pull request Feb 19, 2021
* fix(*) remove the dataplane of any completed Pod (#1576)

Signed-off-by: Nikolay Nikolaev <nikolay.nikolaev@konghq.com>
(cherry picked from commit 485dd70)

# Conflicts:
#	go.sum
#	pkg/plugins/resources/k8s/native/go.sum

* fix(*) conflicts

Signed-off-by: Nikolay Nikolaev <nikolay.nikolaev@konghq.com>

Co-authored-by: Nikolay Nikolaev <nikolay.nikolaev@konghq.com>