
implement job allocation completions index enhancement #1072

Closed
wants to merge 4 commits

Conversation

fudali113

No description provided.

@k8s-ci-robot
Contributor

Welcome @fudali113!

It looks like this is your first PR to kubernetes/enhancements 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/enhancements has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels May 27, 2019
@k8s-ci-robot
Contributor

Hi @fudali113. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory sig/apps Categorizes an issue or PR as relevant to SIG Apps. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels May 27, 2019
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: fudali113
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: mattfarina

If they are not already assigned, you can assign the PR to them by writing /assign @mattfarina in a comment when ready.

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@fudali113 fudali113 changed the title [WIP] Add job enhancement [WIP] implement job allocation completions index enhancement May 27, 2019
@fudali113 fudali113 changed the title [WIP] implement job allocation completions index enhancement implement job allocation completions index enhancement May 27, 2019
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label May 27, 2019
@fudali113
Author

/assign @mattfarina

## Summary

[Job](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/)
is a built-in Kubernetes workload resource for running tasks. A Job lets a user describe a task, or a set of parallel tasks, through its `podTemplate` and `completions` fields. However, when I use parallel tasks I must make sure the work is not repeated across pods inside my own application, which is difficult. If Job supported shards, I could use the shard index to avoid repeated work easily.
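
To make the current behaviour concrete, here is a minimal sketch of the kind of parallel Job described above, written against the batch/v1 Go types (the Job name and worker image are placeholders). Every pod created for such a Job receives an identical template, so today the application itself has to work out which slice of the work belongs to which pod:

```go
package example

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// exampleJob is a fixed-completion-count parallel Job: ten completions,
// at most three pods running at a time, each pod using the same template.
func exampleJob() *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "sharded-work"},
		Spec: batchv1.JobSpec{
			Completions: int32Ptr(10),
			Parallelism: int32Ptr(3),
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:  "worker",
						Image: "example.com/worker:latest", // placeholder image
					}},
				},
			},
		},
	}
}
```
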
Member

@k82cn k82cn Jun 2, 2019

Can you give an example of what kind of workload is running in the Pod, e.g. MPI or Spark? If it's a purely custom application, why not a CRD + operator?

Member

btw, we implemented a similar feature via CRD + Operator at https://github.com/volcano-sh/volcano/blob/master/docs/design/job-api.md#plugins-for-job which may be helpful for your case.

Author

For example, we have millions of users, each with different results, and I need to push an article to every user that matches a certain result.

We want to run these tasks on one platform that is easy to use and widely adopted, so we chose Kubernetes; we don't want to introduce more platforms, because we need to keep maintenance costs low.

There are other solutions, but I want the one that requires the least maintenance on my side, and the documentation lists this feature as not yet implemented. I think this feature can help more people use Kubernetes easily. (Our company uses Argo to implement our case, but most of Argo's features go unused.)

Member

Are there others who have a similar case to yours?

Author

Our company has this case, and I think it is a common need, because this implementation makes Job-based applications easy to scale.

However, not many people have commented on my issue in support of this implementation.

@k82cn

Member

@k82cn k82cn Sep 16, 2019

@fudali113, IMO that's a specific case, and I'd prefer to handle it with a CRD; anyway, the approvers will make the final decision :)


## Motivation

I have a use case that needs sharded Jobs, and I want to implement it with Kubernetes. I read the [documentation](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#parallel-jobs); this function is not yet implemented, so I want to implement it.
Member

hm... it's better to have a general motivation, e.g. resolving a problem raised by several devs/users.

Author

OK, I will add that.

My motivation is that I think a shard index is a general feature of any job platform; I want Jobs to be easy to use when users have this kind of requirement, while keeping platform maintenance low.


### Implementation Details/Notes/Constraints

In the current implementation, the Job controller does not allocate an index when `.completions != nil`. In my implementation, when `.completions != nil` I derive the available indexes from the completions value and the succeeded pods, record the index on the pods the Job creates, and set it as an env variable on the pod (so the user application can read the index easily). Next, I allocate an index in the pod-creation function and lock this step to ensure that no index is assigned more than once; if an index is ever duplicated, I kill one of the duplicate pods the next time this Job's created pods change.
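
A minimal sketch of the allocation step described above, assuming the index is persisted in a pod label (the `job-completions-index` name comes from the discussion further down); the helper name and signature are illustrative, not the actual controller change in this PR:

```go
package example

import "strconv"

const indexLabel = "job-completions-index" // label name proposed later in this thread

// availableIndexes returns the indexes in [0, completions) that are not yet
// held by a succeeded pod, so the controller can assign them to new pods.
func availableIndexes(completions int, succeededPodLabels []map[string]string) []int {
	taken := make(map[int]bool, len(succeededPodLabels))
	for _, labels := range succeededPodLabels {
		if v, ok := labels[indexLabel]; ok {
			if i, err := strconv.Atoi(v); err == nil {
				taken[i] = true
			}
		}
	}
	var free []int
	for i := 0; i < completions; i++ {
		if !taken[i] {
			free = append(free, i)
		}
	}
	return free
}
```
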
Member

How do we persist the index, e.g. as an annotation or a new field on the Pod? If it is not persisted, how do we recover it when the kube-controller-manager fails over or changes leader?

BTW, please also highlight the interface in the design doc for review, e.g. the env name?

Author

OK, I will fix that.

I persist the index in a label; I chose a label because it is easy to use with `kubectl get po -l`. The label name is `job-completions-index`, and I also set a `JOB_COMPLETIONS_INDEX` env variable in the pod (so the user application can read the index more easily).
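
A hedged sketch of how an application container might consume the `JOB_COMPLETIONS_INDEX` environment variable; the helper names and the modulo-based partitioning are illustrative assumptions, not part of the proposal itself:

```go
package example

import (
	"fmt"
	"os"
	"strconv"
)

// shardIndexFromEnv reads the index the Job controller would inject into the pod.
func shardIndexFromEnv() (int, error) {
	raw := os.Getenv("JOB_COMPLETIONS_INDEX")
	index, err := strconv.Atoi(raw)
	if err != nil {
		return 0, fmt.Errorf("invalid JOB_COMPLETIONS_INDEX %q: %v", raw, err)
	}
	return index, nil
}

// ownsUser shows one possible partitioning scheme: a pod handles the users
// whose ID is congruent to its shard index modulo the completions count.
func ownsUser(userID int64, shardIndex, completions int) bool {
	return int(userID%int64(completions)) == shardIndex
}
```

On the kubectl side, the label keeps a shard's pod easy to find, e.g. `kubectl get po -l job-completions-index=3`.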


### Risks and Mitigations

The major risk with this change is the additional load on the apiserver, since we need to inspect each pod frequently to get the succeeded and failed indexes.
Member

@k82cn k82cn Jun 2, 2019

How much additional load is there? A ballpark figure should be OK :)

Author

  1. group activePods by index
  2. remove activePods that carry a duplicate index
  3. index allocation needs a lock

The above three points are the additional operations, and I think that additional load is acceptable (a rough sketch of point 2 is given below). I don't know the best way to measure how much additional load there is; can you help me?

This is my first PR in Kubernetes, so maybe there's something wrong with it. I look forward to your comments @k82cn
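
For reference, a rough sketch of point 2 above, assuming the index lives in the `job-completions-index` label; the function name and the policy of deleting the later duplicates are illustrative only:

```go
package example

// duplicateIndexPods keeps the first active pod seen for each index and
// returns the positions of any extra pods holding an index that is already
// taken, as candidates for deletion on the next sync of the Job's pods.
func duplicateIndexPods(activePodLabels []map[string]string) []int {
	seen := make(map[string]bool)
	var extras []int
	for i, labels := range activePodLabels {
		idx, ok := labels["job-completions-index"]
		if !ok {
			continue
		}
		if seen[idx] {
			extras = append(extras, i)
			continue
		}
		seen[idx] = true
	}
	return extras
}
```

If the grouping and deduplication run only over the pods the controller already lists for the Job, the extra cost should be mostly in-memory bookkeeping rather than additional apiserver calls.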

@fudali113
Author

fudali113 commented Jul 23, 2019

@k82cn @prydonius @kow3ns PTAL, thanks

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 15, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 14, 2020
@palnabarun
Member

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jan 14, 2020
@fengzixu

fengzixu commented Jan 16, 2020

Is this enhancement still being worked on? It seems there has been no response from the author for several months. If possible, I can re-create an enhancement like this and implement it.

@k82cn @palnabarun @kow3ns @janetkuo @fudali113

@mattfarina
Contributor

/unassign

@mattfarina
Contributor

@fudali113 adding this to the agenda for the 2/24 SIG Apps meeting to discuss, if you can make it.

@fudali113
Author

fudali113 commented Mar 19, 2020

@fudali113 adding this to the agenda for the 2/24 SIG Apps meeting to discuss, if you can make it.

Sorry, I only just saw this comment now. What can I do to continue this PR? @mattfarina

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 17, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 17, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closed this PR.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
