Use information about last checkpoint on preemption #477

Open
alculquicondor opened this issue Dec 14, 2022 · 9 comments
Labels
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@alculquicondor
Contributor

alculquicondor commented Dec 14, 2022

This is known as cooperative preemption

If the workload does checkpointing, then we can assume it is able to communicate the latest checkpoint via a status condition. We can take that into account when selecting victims and prioritize the ones that checkpointed recently.

We can update the existing design doc for preemption to include this.

Originally posted by @ahg-g in #83 (comment)
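
A minimal sketch of how victim ordering could use this, assuming a hypothetical `lastCheckpoint` timestamp surfaced through the workload status (all type and field names below are illustrative only, not a proposed API):

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// candidate is an illustrative stand-in for a preemption candidate; in a real
// implementation the last checkpoint would come from a status condition on the
// workload.
type candidate struct {
	name           string
	priority       int32
	lastCheckpoint *time.Time // nil if the workload never reported a checkpoint
}

// orderVictims sorts candidates so that lower-priority workloads are preempted
// first and, within the same priority, workloads that checkpointed more
// recently are preferred as victims, since preempting them discards less work.
func orderVictims(cands []candidate) {
	sort.SliceStable(cands, func(i, j int) bool {
		if cands[i].priority != cands[j].priority {
			return cands[i].priority < cands[j].priority
		}
		ti, tj := cands[i].lastCheckpoint, cands[j].lastCheckpoint
		if ti == nil || tj == nil {
			// Workloads without a reported checkpoint sort last: preempting
			// them is assumed to discard the most work.
			return ti != nil && tj == nil
		}
		return ti.After(*tj) // most recent checkpoint is preempted first
	})
}

func main() {
	now := time.Now()
	recent := now.Add(-5 * time.Minute)
	old := now.Add(-3 * time.Hour)
	cands := []candidate{
		{name: "job-a", priority: 10, lastCheckpoint: &old},
		{name: "job-b", priority: 10, lastCheckpoint: &recent},
		{name: "job-c", priority: 10}, // never reported a checkpoint
	}
	orderVictims(cands)
	fmt.Println(cands[0].name, cands[1].name, cands[2].name) // job-b job-a job-c
}
```

Note that under this ordering a workload that reports checkpoints sorts ahead of one that does not, which is the incentive concern raised below.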

@mwielgus
Contributor

mwielgus commented Jan 3, 2023

That may create a bad incentive for low-priority jobs not to publish their checkpoints, since a job that publishes them has a higher chance of being preempted (vs. those that don't).

@alculquicondor
Contributor Author

alculquicondor commented Jan 3, 2023

We could devise a policy to provide an incentive to set the checkpoint.

For example: the assumed checkpoint of a workload that doesn't define one is the maximum of its startTime and the median startTime of the workloads that do define one.
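
A rough sketch of that fallback rule, with illustrative names only:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// assumedCheckpoint sketches the fallback policy above: a workload that never
// reported a checkpoint is assumed to have checkpointed at the later of its
// own start time and the median start time of the workloads that did report one.
func assumedCheckpoint(startTime time.Time, startTimesWithCheckpoint []time.Time) time.Time {
	if len(startTimesWithCheckpoint) == 0 {
		return startTime
	}
	sorted := append([]time.Time(nil), startTimesWithCheckpoint...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].Before(sorted[j]) })
	median := sorted[len(sorted)/2] // upper median for even-length input
	if startTime.After(median) {
		return startTime
	}
	return median
}

func main() {
	now := time.Now()
	withCheckpoints := []time.Time{
		now.Add(-6 * time.Hour),
		now.Add(-4 * time.Hour),
		now.Add(-1 * time.Hour),
	}
	started := now.Add(-5 * time.Hour)
	// The median start time (4h ago) is later than this workload's own start
	// time (5h ago), so that is its assumed checkpoint.
	fmt.Println(assumedCheckpoint(started, withCheckpoints))
}
```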

@alculquicondor
Contributor Author

Although that might give an incentive to publish one checkpoint and never do it again. But any system where there is cooperative preemption has the same issue. I suppose it is called cooperative for a reason :)

@ahg-g
Contributor

ahg-g commented Jan 3, 2023

Right, cooperative preemption by design assumes that the jobs play nicely. This is not uncommon in environments where researchers share a cluster and use common libraries in their jobs that have builtin support for checkpointing.

@mwielgus
Contributor

mwielgus commented Jan 3, 2023

I'm wondering how much cooperativeness should be assumed in the system. In the extreme, exaggerated case we wouldn't need any quotas or queues if everyone tried to play nicely.
People are nice up to the point when they learn that their goodwill is being exploited to their disadvantage. And here, publishing the status works against them, unless there is some other benefit that can offset the higher chance of being preempted first.

@ahg-g
Contributor

ahg-g commented Jan 4, 2023

> In the extreme, exaggerated case we wouldn't need any quotas or queues if everyone tried to play nicely.

You will still need quotas and queues to automate "playing nice".

> People are nice up to the point when they learn that their goodwill is being exploited to their disadvantage. And here, publishing the status works against them, unless there is some other benefit that can offset the higher chance of being preempted first.

Users have a strong incentive to checkpoint if their jobs run for a long time.

As for setting the status, a common setup is that users deploy their workloads through SDKs; those SDKs are generally controlled by the batch admin / platform team and probably use common libraries for checkpointing that would enforce setting this value.

Having said that, I think we want to distinguish between having the status and the incentives for setting it; the latter can be improved as a follow-up if needed, based on user feedback.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 4, 2023
@kerthcet
Contributor

kerthcet commented Apr 5, 2023

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 5, 2023
@alculquicondor
Contributor Author

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Apr 5, 2023