
Topology Aware Volume Scheduling #490

Closed
msau42 opened this issue Oct 20, 2017 · 28 comments
Assignees
Labels
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • sig/scheduling: Categorizes an issue or PR as relevant to SIG Scheduling.
  • sig/storage: Categorizes an issue or PR as relevant to SIG Storage.
  • stage/stable: Denotes an issue tracking an enhancement targeted for Stable/GA status
  • tracked/no: Denotes an enhancement issue is NOT actively being tracked by the Release Team
Milestone

Comments

@msau42
Member

msau42 commented Oct 20, 2017

Feature Description

  • One-line feature description (can be used as a release note): Make the scheduler aware of a Pod's volume's topology constraints, such as zone or node.
  • Primary contact (assignee): @msau42
  • Responsible SIGs: @kubernetes/sig-storage-feature-requests @kubernetes/sig-scheduling-feature-requests
  • Design proposal link (community repo): Volume topology aware scheduling design (community#1054), Add more details to volume scheduling design (community#1168)
  • Reviewer(s) - (for LGTM) recommend having 2+ reviewers (at least one from code-area OWNERS file) agreed to review. Reviewers from multiple companies preferred:
  • Approver (likely from SIG/area to which feature belongs):
  • Feature target (which target corresponds to which milestone):
    • Alpha release target (x.y): 1.9
    • Beta release target (x.y): 1.10
    • Stable release target (x.y): 1.13
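
The mechanism this feature introduces is the `volumeBindingMode` field on StorageClass: setting it to `WaitForFirstConsumer` delays PVC binding until a pod using the claim is scheduled, so the scheduler can account for the volume's topology. A minimal sketch (the class name is illustrative; `kubernetes.io/no-provisioner` is the standard value for statically provisioned volumes):

```yaml
# Sketch of a topology-aware StorageClass; the name is illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware-local
provisioner: kubernetes.io/no-provisioner  # static provisioning only
volumeBindingMode: WaitForFirstConsumer    # bind after pod scheduling
```

PVCs referencing such a class stay Pending until a pod that uses them is scheduled, at which point binding happens with the chosen node's topology taken into account.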

/assign

@k8s-ci-robot k8s-ci-robot added sig/storage Categorizes an issue or PR as relevant to SIG Storage. kind/feature Categorizes issue or PR as related to a new feature. sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. labels Oct 20, 2017
@saad-ali saad-ali added this to the 1.9 milestone Oct 20, 2017
@idvoretskyi idvoretskyi added the stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status label Oct 24, 2017
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 22, 2018
@msau42
Member Author

msau42 commented Jan 22, 2018

/remove-lifecycle stale
/lifecycle frozen

I'm hoping to target beta in 1.10.

@k8s-ci-robot k8s-ci-robot added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Jan 22, 2018
@saad-ali saad-ali modified the milestones: v1.9, v1.10 Jan 23, 2018
@idvoretskyi idvoretskyi added tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team stage/beta Denotes an issue tracking an enhancement targeted for Beta status and removed lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status labels Jan 29, 2018
@idvoretskyi
Member

@msau42 1.10 - beta, right?

@msau42
Member Author

msau42 commented Jan 29, 2018

Yes, at least I will try to.

@msau42
Member Author

msau42 commented Feb 27, 2018

1.10 Beta PR has been merged: kubernetes/kubernetes#59391

1.11 design for dynamic provisioning is here: kubernetes/community#1857
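
For context, the dynamic provisioning design adds topology awareness at provision time as well. As a hedged sketch (class name, provisioner, and zone values are illustrative), `allowedTopologies` restricts where volumes may be provisioned, while `WaitForFirstConsumer` defers the decision to the scheduler:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zoned-ssd                # illustrative name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone  # zone label of this era
    values:
    - us-central1-a
    - us-central1-b
```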

@ku-s-h

ku-s-h commented Apr 16, 2018

@msau42 In my use case, the StatefulSet does not automatically schedule pods on the same node as the PV (local storage). To work around this I have written a custom scheduler that schedules the pod on the node that has its PV. This also works in the case where the pod goes down and needs to be brought up on the exact node that has its PV.
Will the changes planned for 1.11 eliminate the need for a custom scheduler, or is there a better way to do this?

@msau42
Member Author

msau42 commented Apr 16, 2018

@mightwork The current feature should already be scheduling your pod to the correct node. Can you open an issue with the details of your PV and ping me?
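
For readers with the same local-storage use case: the scheduler keys off the `nodeAffinity` stanza on the PV itself, so a local PV needs to declare which node it lives on. A sketch (path, capacity, class, and node name are all illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example            # illustrative
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage   # a class using WaitForFirstConsumer
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:                     # pins the PV, and pods bound to it,
    required:                       # to a specific node
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1
```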

@justaugustus
Member

@msau42
Any plans for this in 1.11?

If so, can you please ensure the feature is up-to-date with the appropriate:

  • Description
  • Milestone
  • Assignee(s)
  • Labels:
    • stage/{alpha,beta,stable}
    • sig/*
    • kind/feature

cc @idvoretskyi

@msau42
Member Author

msau42 commented Apr 17, 2018

Plan is to remain beta in 1.11.

#561 is tracking the dynamic provisioning work

@msau42
Member Author

msau42 commented Oct 8, 2018

The plan for 1.13 is to go GA.

@kacole2 kacole2 added tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team and removed tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team labels Oct 8, 2018
@kacole2
Contributor

kacole2 commented Oct 8, 2018

/milestone v1.13

@k8s-ci-robot k8s-ci-robot added this to the v1.13 milestone Oct 8, 2018
@AishSundar

@msau42 what work is left in k/k for this feature to go GA in 1.13? Is there a list of pending PRs (code, tests, and docs) that we can track on our end?

/stage stable

@k8s-ci-robot k8s-ci-robot added stage/stable Denotes an issue tracking an enhancement targeted for Stable/GA status and removed stage/beta Denotes an issue tracking an enhancement targeted for Beta status labels Oct 17, 2018
@AishSundar

@kacole2 I updated the stage to stable as per Michelle's comment here

@msau42
Member Author

msau42 commented Oct 17, 2018

@npentrel

Hi @msau42, I'm Naomi, and I'm working on docs for the 1.13 release. Could you open a docs PR against the dev-1.13 branch as a placeholder for the needed docs updates?

@msau42
Member Author

msau42 commented Oct 25, 2018

Docs pr here: kubernetes/website#10736

@guineveresaenger

Hi @msau42 - I'm the Enhancements Shadow for 1.13. Could you please update the release team on the likelihood of this enhancement making the 1.13 release? If we don't see any updates in the next few days, we would appreciate a status update.

Code slush begins on 11/9 and code freeze is 11/15.

Thank you!

@msau42
Member Author

msau42 commented Nov 5, 2018

Yes, this is on track to meet code freeze.

e2e PR is close to being merged:
kubernetes/kubernetes#70362

I will open up the feature gate PR this week.
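
For context, this work was guarded by the `VolumeScheduling` feature gate, and going GA means the gate defaults to on everywhere before eventually being removed. On clusters where it needed to be toggled explicitly, the standard Kubernetes feature-gate flag syntax applies (a sketch; kube-scheduler is one of the components that reads this gate):

```
kube-scheduler --feature-gates=VolumeScheduling=true
```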

@AishSundar

AishSundar commented Nov 12, 2018

@msau42 is the e2e test PR the only pending PR for this enhancement? I added the milestone and priority to the PR so it can merge during Slush.

Is there a feature gate PR open?

@msau42
Member Author

msau42 commented Nov 12, 2018

I updated the checklist above. Only the docs remain.

@claurence

@msau42 Hi, I'm the enhancements lead for 1.14. It looks like this enhancement graduated to stable; is this issue still open? Is there work needed for the 1.14 release?

@msau42
Member Author

msau42 commented Jan 15, 2019

I think we are done with feature tracking. There are some cleanup items that I have already opened issues for.
/close

@k8s-ci-robot
Contributor

@msau42: Closing this issue.

In response to this:

I think we are done with feature tracking. There are some cleanup items that I have already opened issues for.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@kacole2 kacole2 added tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team and removed tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team labels Jul 15, 2019
@abdennour

Actually, this is also a pain point with StatefulSets running on top of an Amazon EKS cluster.
Hopefully the fix comes from Kubernetes.
