[FEATURE REQUEST] need a PodNodeAffinity admission control just like PodNodeSelector #58198

Closed
Dieken opened this Issue Jan 12, 2018 · 12 comments


Dieken commented Jan 12, 2018

According to https://kubernetes.io/docs/concepts/configuration/assign-pod-node/, nodeAffinity is more powerful than nodeSelector and will eventually replace it, so could you provide another admission control plugin, "PodNodeAffinity"?

I need PodNodeSelector, or better PodNodeAffinity, to dedicate some worker nodes to a namespace, so I can divide one huge physical K8s cluster into several small virtual clusters.
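
For reference, the node affinity form of a simple nodeSelector would look roughly like this in a pod spec (the label ns=ns1 is only an example):

# Pod spec sketch: a required node affinity rule expressing the same
# constraint as nodeSelector "ns: ns1".
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: ns
            operator: In
            values:
            - ns1
  containers:
  - name: app
    image: nginx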

/kind feature
/sig api-machinery

k8s-ci-robot commented Jan 15, 2018

@wackxu: Reiterating the mentions to trigger a notification:
@kubernetes/sig-scheduling-feature-requests

In response to this:

@kubernetes/sig-scheduling-feature-requests

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

wackxu commented Jan 15, 2018

Is it needed since we have taints and tolerations? What is the difference between PodNodeSelector and a taint? If it is necessary, I can help fix it.

k82cn commented Jan 15, 2018

@kubernetes/sig-scheduling-feature-requests

What is the difference between PodNodeSelector and a taint?

  1. A taint marks which nodes a pod can NOT use; a selector marks which nodes a pod can ONLY use.
  2. The PodNodeSelector admission plugin is a kind of hard restriction; a toleration is a kind of soft restriction, e.g. any pod can add a toleration to use those nodes.
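
A rough sketch of the contrast, with node and namespace names only as examples:

# Taint: keeps pods OFF node1 unless they tolerate the taint;
# any pod can opt in by adding a matching toleration to its own spec.
kubectl taint nodes node1 dedicated=ns1:NoSchedule

# Label + PodNodeSelector admission: pods in namespace ns1 are forced ONTO
# the labeled nodes and cannot opt out at admission time.
kubectl label nodes node1 ns=ns1
kubectl annotate ns ns1 scheduler.alpha.kubernetes.io/node-selector=ns=ns1
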
resouer commented Jan 18, 2018

One thing that is still not clear: what is this admission control plugin expected to do?

embano1 commented Jan 21, 2018

@resouer The PodNodeSelector admission controller has been useful in scenarios where a cluster admin wants to enforce certain placement logic and rules (whitelisting) at admission time.

If there is no real use case for PodNodeSelector, there is none for PodNodeAffinity. It depends on how many people are using PodNodeSelector, for which reasons, and whether there is conceptually a better approach that does not require a new admission controller plugin.

Dieken commented Jan 21, 2018

@resouer @embano1 @k82cn @wackxu

Currently I use PodNodeSelector to divide a physical K8s cluster into multiple sub-clusters by namespace. Suppose I have four nodes node{1,2,3,4} and two namespaces ns{1,2}:

# pods in namespace ns1 can only use node1 and node2
kubectl label nodes node1 ns=ns1
kubectl label nodes node2 ns=ns1
kubectl annotate ns ns1 scheduler.alpha.kubernetes.io/node-selector=ns=ns1

# pods in namespace ns2 can only use node3 and node4
kubectl label nodes node3 ns=ns2
kubectl label nodes node4 ns=ns2
kubectl annotate ns ns2 scheduler.alpha.kubernetes.io/node-selector=ns=ns2

This way I don't have to hardcode nodeSelector rules into pod YAML files; it's very easy and flexible to manage at the cluster level.

But https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity says:

nodeSelector continues to work as usual, but will eventually be deprecated, 
as node affinity can express everything that nodeSelector can express.

So I would like there to be an admission plugin "PodNodeAffinity".
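
For illustration, such a plugin could mirror the PodNodeSelector annotation. The annotation name below is hypothetical and does not exist today; only the injected affinity field is a real API:

# Hypothetical annotation, for illustration only -- not an existing feature.
kubectl annotate ns ns1 scheduler.alpha.kubernetes.io/node-affinity='ns In (ns1)'

# The plugin would then inject the equivalent required node affinity
# into every pod created in namespace ns1:
#   affinity:
#     nodeAffinity:
#       requiredDuringSchedulingIgnoredDuringExecution:
#         nodeSelectorTerms:
#         - matchExpressions:
#           - key: ns
#             operator: In
#             values: ["ns1"]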

fejta-bot commented Apr 21, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot commented May 21, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

fejta-bot commented Jun 20, 2018

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
