
Fix Pod Number Exception in the sync mode if reattach_on_restart parameter is False #39329

Open · wants to merge 1 commit into main
Conversation

@e-galan (Contributor) commented Apr 30, 2024

Prevent KubernetesPodOperator from raising an exception in a rare scenario: the task runs in sync mode with reattach_on_restart=False, and the first task attempt fails because the task process is killed externally by the Kubernetes cluster or another process.

An external kill breaks the execution flow (including any try/except blocks) and immediately exits the task. As a result, the pod created for the first attempt is never properly deleted or relabeled, which triggers the pod number exception; the exception then repeats on every subsequent attempt until the DAG fails completely.

Behavior before the fix:

  1. KubernetesPodOperator starts a new task.
  2. A k8s pod is created to process the task.
  3. For some reason the task in the pod is killed externally and exits with some code (-9, for example).
  4. Since the reattach_on_restart parameter is set to False, the operator does not reattach to the pod from the previous attempt; instead it tries to create a new pod while the original one still exists with the same labels.
  5. The new pod is created.
  6. Before continuing the task, KubernetesPodOperator tries to find the pod by the labels stored in the task context.
  7. Two pods with those labels are found, resulting in the exception ("More than one pod running with labels"); see the sketch after this list.
  8. The exception continues to be raised on every subsequent attempt.
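
To make steps 6-7 concrete, here is a minimal standalone sketch of the label-based lookup that trips the exception (the label values are placeholders; the real operator builds the selector from the task context inside find_pod):

    from kubernetes import client, config

    config.load_kube_config()
    api = client.CoreV1Api()
    # Placeholder selector; the operator derives these values from the task context.
    label_selector = "dag_id=my_dag,task_id=my_task,run_id=manual__2024-04-30"
    pods = api.list_namespaced_pod("default", label_selector=label_selector).items
    if len(pods) > 1:
        # Before the fix, reaching this branch raised
        # IdenticalLabelPodError(f"More than one pod running with labels {label_selector}").
        print(f"More than one pod running with labels {label_selector}")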

Behavior after the fix:
  1-6. Same as before.
  7. Two pods with those labels are found.
  8. Since reattach_on_restart is False, the operator loops through the pods and picks the most recently created one to use for the next attempt (see the sketch after this list).
  9. The labels of the previous pod are updated and, depending on the value of the on_finish_action parameter, the pod is either kept or deleted.
  10. The task continues without the exception.
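
To illustrate steps 8-9, here is a rough standalone sketch of the resolution step. It approximates what the PR does inside process_duplicate_label_pods; the function name and the already_checked relabeling below are assumptions for illustration, not the PR's exact code:

    from kubernetes.client import CoreV1Api, V1Pod

    def resolve_duplicate_label_pods(api: CoreV1Api, pods: list[V1Pod]) -> V1Pod:
        # The most recently created pod belongs to the current attempt.
        newest = max(pods, key=lambda p: p.metadata.creation_timestamp)
        for pod in pods:
            if pod is newest:
                continue
            # Relabel the leftover pod so the label-selector lookup no longer
            # matches it; on_finish_action then decides whether it is kept or deleted.
            api.patch_namespaced_pod(
                name=pod.metadata.name,
                namespace=pod.metadata.namespace,
                body={"metadata": {"labels": {"already_checked": "True"}}},
            )
        return newest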



@dirrao (Collaborator) left a comment


Incorrect use of find_pod is causing this problem. Use the V1Pod response object returned by create_pod for further operations instead of calling find_pod, and clean up the existing pods before starting the new one.

@jedcunningham, @hussein-awala WDYT
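
In other words, something along these lines (a loose sketch of the suggested flow, not the PR's implementation; build_pod_request_obj and get_or_create_pod are existing KubernetesPodOperator methods, while cleanup_leftover_pods is a hypothetical helper):

    def execute_sync(self, context):
        pod_request_obj = self.build_pod_request_obj(context)
        # Hypothetical pre-step: delete or relabel pods left over from
        # earlier attempts before creating the new one.
        self.cleanup_leftover_pods(context)  # hypothetical helper
        # Keep the V1Pod returned here and use it directly, rather than
        # re-querying by label later with find_pod.
        self.pod = self.get_or_create_pod(pod_request_obj, context)
        # ... await container start/completion using self.pod ...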

@e-galan force-pushed the fix-kubernetes-pod-operator-reattach-on-restart-parameter branch from 89f92c1 to 0519403 on May 6, 2024
@e-galan (Contributor, Author) commented May 6, 2024

> Incorrect use of find_pod is causing this problem. Use the V1Pod response object returned by create_pod for further operations instead of calling find_pod, and clean up the existing pods before starting the new one.
>
> @jedcunningham, @hussein-awala WDYT

@dirrao It does not seem to me that cleanup() was designed to run at the beginning of the execute_sync() method, given the many checks it contains. Should I refactor it, or just create another method and call it at the start of execute_sync()?

Also, in the reattach_on_restart=False case we still need to run find_pod() to discover that there is an extra pod left over from a previous task attempt, and then update its labels before calling cleanup() (or something with similar functionality). Otherwise it won't be called at all.

@e-galan e-galan requested review from Taragolis and dirrao May 7, 2024 09:20
@e-galan (Contributor, Author) commented May 8, 2024

@jedcunningham @hussein-awala could you take a look please?


    Return the newly created pod to be used for the next run attempt.
    """
    new_pod = pod_list.pop(self._get_most_recent_pod_index(pod_list))
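
The helper referenced here is presumably something along these lines (a guess from its name and call site; the authoritative version is in the PR diff):

    from kubernetes.client import V1Pod

    def _get_most_recent_pod_index(pod_list: list[V1Pod]) -> int:
        # Index of the pod with the latest creation timestamp.
        timestamps = [pod.metadata.creation_timestamp for pod in pod_list]
        return timestamps.index(max(timestamps))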
@romsharon98 (Collaborator) commented

@e-galan just for understanding, can the last created pod already be running another task?

@e-galan (Contributor, Author) replied

> @e-galan just for understanding, can the last created pod already be running another task?

@romsharon98 The last created pod is supposed to run the next attempt of the same task, but it can't because of the exception that forbids having more than one pod with the same labels.

The exception is raised in the scenario described in the method docstring and in the PR description.

@VladaZakharova (Contributor) commented May 16, 2024

Hi @potiuk @hussein-awala!
Can you please check the changes one more time? Thanks!

@eladkal, this fix is kinda urgent. If it gets merged soon, can we please also include it in the google-provider release that you said we will have this weekend with the changes for AutoML? Thank you : )

@shahar1 (Contributor) left a comment


LGTM, small comment

Comment on lines +546 to +552
    elif num_pods > 1 and not self.reattach_on_restart:
        self.log.warning("Found more than one pod running with labels %s, resolving ...", label_selector)
        pod = self.process_duplicate_label_pods(pod_list)
        self.log_matching_pod(pod=pod, context=context)
    elif num_pods > 1:
        raise IdenticalLabelPodError(f"More than one pod running with labels {label_selector}")

@shahar1 (Contributor) commented:

For slightly better readability, maybe you could rephrase the elif as:

    elif num_pods > 1:
        if not self.reattach_on_restart:
            self.log.warning("Found more than one pod running with labels %s, resolving ...", label_selector)
            pod = self.process_duplicate_label_pods(pod_list)
            self.log_matching_pod(pod=pod, context=context)
        else:
            raise IdenticalLabelPodError(f"More than one pod running with labels {label_selector}")

Labels: area:providers, provider:cncf-kubernetes