No nodes are available that match all of the following predicates:: PodFitsHostPorts (1), PodToleratesNodeTaints (1). #49440

Closed
huangjiasingle opened this Issue Jul 22, 2017 · 20 comments


huangjiasingle commented Jul 22, 2017

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug
/kind feature

What happened:
When I run an operation like this: kubectl scale --replicas=2 deploy/demo -n oliver
I find the new pod is always Pending, so I describe the pod. The message is: No nodes are available that match all of the following predicates:: PodFitsHostPorts (1), PodToleratesNodeTaints (1). On the old version (v1.6.x) it was OK.

The cluster has one master and one node.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):
kubectl scale --replicas=2 deploy/deployName -n namespaceName

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): v1.7.0
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release): CentOS 7.2
  • Kernel (e.g. uname -a): 3.10.0-514.16.1.el7.x86_64
  • Install tools: kubeadm
  • Others:
Contributor

k8s-merge-robot commented Jul 22, 2017

@huangjiasingle
There are no sig labels on this issue. Please add a sig label by:

  1. mentioning a sig: @kubernetes/sig-<group-name>-<group-suffix>
    e.g., @kubernetes/contributor-experience-<group-suffix> to notify the contributor experience sig, OR

  2. specifying the label manually: /sig <label>
    e.g., /sig scalability to apply the sig/scalability label

Note: Method 1 will trigger an email to the group. You can find the group list here and label list here.
The <group-suffix> in method 1 has to be replaced with one of these: bugs, feature-requests, pr-reviews, test-failures, proposals

Member

xiangpengzhao commented Jul 22, 2017

/sig scheduling

Member

xiangpengzhao commented Jul 22, 2017

@huangjiasingle could you please provide more info, such as the pod manifest and the output of kubectl describe nodes?

huangjiasingle commented Jul 22, 2017

@xiangpengzhao The node info is as follows:

Name:     slave1
Role:
Labels:     beta.kubernetes.io/arch=amd64
      beta.kubernetes.io/os=linux
      kubernetes.io/hostname=slave1
Annotations:    flannel.alpha.coreos.com/backend-data={"VtepMAC":"66:c7:aa:ec:65:6d"}
      flannel.alpha.coreos.com/backend-type=vxlan
      flannel.alpha.coreos.com/kube-subnet-manager=true
      flannel.alpha.coreos.com/public-ip=192.168.99.138
      node.alpha.kubernetes.io/ttl=0
      volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:     <none>
CreationTimestamp:  Sat, 22 Jul 2017 14:36:55 +0800
Conditions:
  Type      Status  LastHeartbeatTime     LastTransitionTime      Reason        Message
  ----      ------  -----------------     ------------------      ------        -------
  OutOfDisk     False   Sat, 22 Jul 2017 22:56:19 +0800   Sat, 22 Jul 2017 17:56:02 +0800   KubeletHasSufficientDisk  kubelet has sufficient disk space available
  MemoryPressure  False   Sat, 22 Jul 2017 22:56:19 +0800   Sat, 22 Jul 2017 17:56:02 +0800   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure    False   Sat, 22 Jul 2017 22:56:19 +0800   Sat, 22 Jul 2017 17:56:02 +0800   KubeletHasNoDiskPressure  kubelet has no disk pressure
  Ready     True  Sat, 22 Jul 2017 22:56:19 +0800   Sat, 22 Jul 2017 17:56:02 +0800   KubeletReady      kubelet is posting ready status
Addresses:
  InternalIP: 192.168.99.138
  Hostname: slave1
Capacity:
 cpu:   1
 memory:  3882124Ki
 pods:    110
Allocatable:
 cpu:   1
 memory:  3779724Ki
 pods:    110
System Info:
 Machine ID:      9b7c327a42ae46e7b5048701303560bd
 System UUID:     9B7C327A-42AE-46E7-B504-8701303560BD
 Boot ID:     02957bc6-f04f-4842-891b-967043aea251
 Kernel Version:    3.10.0-514.16.1.el7.x86_64
 OS Image:      CentOS Linux 7 (Core)
 Operating System:    linux
 Architecture:      amd64
 Container Runtime Version: docker://1.12.6
 Kubelet Version:   v1.7.0
 Kube-Proxy Version:    v1.7.0
PodCIDR:      10.244.2.0/24
ExternalID:     slave1
Non-terminated Pods:    (4 in total)
  Namespace     Name          CPU Requests  CPU Limits  Memory Requests Memory Limits
  ---------     ----          ------------  ----------  --------------- -------------
  kube-system     kube-dns-2425271678-76vhf   260m (26%)  0 (0%)    110Mi (2%)  170Mi (4%)
  kube-system     kube-flannel-ds-n95fw     0 (0%)    0 (0%)    0 (0%)    0 (0%)
  kube-system     kube-proxy-qp7rp      0 (0%)    0 (0%)    0 (0%)    0 (0%)
  oliver      demo-2023766708-8f9rw     200m (20%)  200m (20%)  512Mi (13%) 512Mi (13%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests Memory Limits
  ------------  ----------  --------------- -------------
  460m (46%)  200m (20%)  622Mi (16%) 682Mi (18%)
Events:
  FirstSeen LastSeen  Count From    SubObjectPath Type    Reason      Message
  --------- --------  ----- ----    ------------- --------  ------      -------
  4m    4m    1 kubelet, slave1     Normal    Starting    Starting kubelet.
  4m    4m    1 kubelet, slave1     Normal    NodeAllocatableEnforced Updated Node Allocatable limit across pods
  4m    4m    10  kubelet, slave1     Normal    NodeHasSufficientDisk Node slave1 status is now: NodeHasSufficientDisk
  4m    4m    10  kubelet, slave1     Normal    NodeHasSufficientMemory Node slave1 status is now: NodeHasSufficientMemory
  4m    4m    10  kubelet, slave1     Normal    NodeHasNoDiskPressure Node slave1 status is now: NodeHasNoDiskPressure
  4m    4m    1 kubelet, slave1     Warning   Rebooted    Node slave1 has been rebooted, boot id: 02957bc6-f04f-4842-891b-967043aea251

The pod info is as follows:

NAME                    READY     STATUS    RESTARTS   AGE
demo-2023766708-8f9rw   1/1       Running   1          4h
demo-2023766708-ppm7v   0/1       Pending   0          3h

kubectl describe po demo-2023766708-ppm7v -n oliver

Name:   demo-2023766708-ppm7v
Namespace:  oliver
Node:   <none>
Labels:   name=demo
    pod-template-hash=2023766708
Annotations:  kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"oliver","name":"demo-2023766708","uid":"e1f11cac-6ec4-11e7-aa13-0800278ff542","ap...
    name=demo
Status:   Pending
IP:
Created By: ReplicaSet/demo-2023766708
Controlled By:  ReplicaSet/demo-2023766708
Containers:
  demo:
    Image:  hub.mini-paas.io/test:v1
    Port: 8080/TCP
    Limits:
      cpu:  200m
      memory: 512Mi
    Requests:
      cpu:    200m
      memory:   512Mi
    Environment:  <none>
    Mounts:
      /opt/name from web (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dhsfm (ro)
Conditions:
  Type    Status
  PodScheduled  False
Volumes:
  web:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: web
    Optional: false
  default-token-dhsfm:
    Type: Secret (a volume populated by a Secret)
    SecretName: default-token-dhsfm
    Optional: false
QoS Class:  Guaranteed
Node-Selectors: <none>
Tolerations:  node.alpha.kubernetes.io/notReady:NoExecute for 300s
    node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  FirstSeen LastSeen  Count From      SubObjectPath Type    Reason      Message
  --------- --------  ----- ----      ------------- --------  ------      -------
  3h    3h    127 default-scheduler     Warning   FailedScheduling  No nodes are available that match all of the following predicates:: PodFitsHostPorts (1), PodToleratesNodeTaints (1).
  8m    8m    1 default-scheduler     Warning   FailedScheduling  No nodes are available that match all of the following predicates:: PodFitsHostPorts (1), PodToleratesNodeTaints (1).
  7m    36s   28  default-scheduler     Warning   FailedScheduling  No nodes are available that match all of the following predicates:: PodFitsHostPorts (1), PodToleratesNodeTaints (1).

The deploy YAML is as follows:

kubectl get deploy demo -n oliver -oyaml

apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "3"
      name: demo
    creationTimestamp: 2017-07-22T10:02:34Z
    generation: 12
    labels:
      name: demo
    name: demo
    namespace: oliver
    resourceVersion: "57700"
    selfLink: /apis/extensions/v1beta1/namespaces/oliver/deployments/demo
    uid: e1df583c-6ec4-11e7-aa13-0800278ff542
  spec:
    replicas: 2
    selector:
      matchLabels:
        name: demo
    strategy:
      rollingUpdate:
        maxSurge: 120%
        maxUnavailable: 20%
      type: RollingUpdate
    template:
      metadata:
        annotations:
          name: demo
        creationTimestamp: null
        labels:
          name: demo
        name: demo
        namespace: oliver
      spec:
        containers:
        - image: hub.mini-paas.io/test:v1
          imagePullPolicy: IfNotPresent
          name: demo
          ports:
          - containerPort: 8080
            hostPort: 8080
            protocol: TCP
          resources:
            limits:
              cpu: 200m
              memory: 512Mi
            requests:
              cpu: 200m
              memory: 512Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /opt/name
            name: web
            subPath: name
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        terminationGracePeriodSeconds: 30
        volumes:
        - configMap:
            defaultMode: 420
            items:
            - key: name
              path: name
            name: web
          name: web
  status:
    availableReplicas: 1
    conditions:
    - lastTransitionTime: 2017-07-22T11:03:01Z
      lastUpdateTime: 2017-07-22T11:03:01Z
      message: Deployment does not have minimum availability.
      reason: MinimumReplicasUnavailable
      status: "False"
      type: Available
    observedGeneration: 12
    readyReplicas: 1
    replicas: 2
    unavailableReplicas: 1
    updatedReplicas: 2
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Member

xiangpengzhao commented Jul 22, 2017

Note that you have only one node, and its host port 8080 is already in use by your first running pod (container). The second pod also requests this host port, so it fails to be scheduled.

/cc @davidopp @k82cn
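
For illustration, a minimal sketch of one way around the conflict (not taken from the reporter's manifests): remove hostPort: 8080 from the Deployment's container ports and expose the replicas through a Service instead, so several pods can share the same node. The nodePort value below is an arbitrary example; omit it to have one auto-assigned.

apiVersion: v1
kind: Service
metadata:
  name: demo
  namespace: oliver
spec:
  type: NodePort
  selector:
    name: demo            # matches the name=demo label on the demo pods
  ports:
  - port: 8080            # cluster-internal service port
    targetPort: 8080      # the demo container's containerPort
    nodePort: 30080       # example only; omit to let Kubernetes pick one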

gogeof commented Jul 23, 2017

I also have this problem, but my confusion is: the node was created, so why can't the user's pods use it?

  • kubectl get nodes:
[root@localhost ~]# kubectl get nodes
NAME                    STATUS    AGE       VERSION
localhost.localdomain   Ready     25m       1.7.0
[root@localhost ~]#
  • kubectl describe pod
  • key message:
No nodes are available that match all of the following predicates:: PodToleratesNodeTaints (1).
[root@localhost ~]# kubectl describe pod calico-policy-controller-3912429210-2vhss --namespace=kube-system
Name:		calico-policy-controller-3912429210-2vhss
Namespace:	kube-system
Node:		<none>
Labels:		k8s-app=calico-policy-controller
		pod-template-hash=3912429210
Annotations:	kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"calico-policy-controller-3912429210","uid":"30e5bf06-6f69-11...
		scheduler.alpha.kubernetes.io/critical-pod=
		scheduler.alpha.kubernetes.io/tolerations=[{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
 {"key":"CriticalAddonsOnly", "operator":"Exists"}]

Status:		Pending
IP:		
Created By:	ReplicaSet/calico-policy-controller-3912429210
Controlled By:	ReplicaSet/calico-policy-controller-3912429210
Containers:
  calico-policy-controller:
    Image:	calico/kube-policy-controller:v0.5.2
    Port:	<none>
    Environment:
      ETCD_ENDPOINTS:		<set to the key 'etcd_endpoints' of config map 'calico-config'>	Optional: false
      K8S_API:			https://kubernetes.default:443
      CONFIGURE_ETC_HOSTS:	true
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5w43k (ro)
Conditions:
  Type		Status
  PodScheduled 	False 
Volumes:
  default-token-5w43k:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-5w43k
    Optional:	false
QoS Class:	BestEffort
Node-Selectors:	<none>
Tolerations:	node.alpha.kubernetes.io/notReady:NoExecute for 300s
		node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  FirstSeen	LastSeen	Count	From			SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----			-------------	--------	------			-------
  8m		11s		35	default-scheduler			Warning		FailedScheduling	No nodes are available that match all of the following predicates:: PodToleratesNodeTaints (1).
Member

k82cn commented Jul 24, 2017

huangjiasingle commented Jul 24, 2017

@xiangpengzhao Thank you! I know the reason. I will close it.

huangjiasingle commented Jul 24, 2017

@xiangpengzhao @k82cn I understand the PodFitsHostPorts error, but I can't understand the PodToleratesNodeTaints error. I read the code of the PodToleratesNodeTaints func; it looks like this:

// PodToleratesNodeTaints checks if a pod tolertaions can tolerate the node taints
func PodToleratesNodeTaints(pod *v1.Pod, meta interface{}, nodeInfo *schedulercache.NodeInfo) (bool, []algorithm.PredicateFailureReason, error) {
	return podToleratesNodeTaints(pod, nodeInfo, func(t *v1.Taint) bool {
		// PodToleratesNodeTaints is only interested in NoSchedule and NoExecute taints.
		return t.Effect == v1.TaintEffectNoSchedule || t.Effect == v1.TaintEffectNoExecute
	})
}


func podToleratesNodeTaints(pod *v1.Pod, nodeInfo *schedulercache.NodeInfo, filter func(t *v1.Taint) bool) (bool, []algorithm.PredicateFailureReason, error) {
	taints, err := nodeInfo.Taints()
	if err != nil {
		return false, nil, err
	}

	if v1helper.TolerationsTolerateTaintsWithFilter(pod.Spec.Tolerations, taints, filter) {
		return true, nil, nil
	}
	return false, []algorithm.PredicateFailureReason{ErrTaintsTolerationsNotMatch}, nil
}

// TolerationsTolerateTaintsWithFilter checks if given tolerations tolerates
// all the taints that apply to the filter in given taint list.
func TolerationsTolerateTaintsWithFilter(tolerations []v1.Toleration, taints []v1.Taint, applyFilter taintsFilterFunc) bool {
	if len(taints) == 0 {
		return true
	}

	for i := range taints {
		if applyFilter != nil && !applyFilter(&taints[i]) {
			continue
		}

		if !TolerationsTolerateTaint(tolerations, &taints[i]) {
			return false
		}
	}

	return true
}

Because the node's taints are <none>, this code:

if v1helper.TolerationsTolerateTaintsWithFilter(pod.Spec.Tolerations, taints, filter) {
		return true, nil, nil
	}

will be executed; it doesn't execute:

return false, []algorithm.PredicateFailureReason{ErrTaintsTolerationsNotMatch}

So the PodToleratesNodeTaints error shouldn't be returned.
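
For what it's worth, a quick way to see which node contributes that PodToleratesNodeTaints count is to list every node's taints, the master included. A sketch (the example output is hypothetical, but in a one-master/one-node kubeadm cluster the master typically carries the node-role.kubernetes.io/master:NoSchedule taint, which is what gogeof runs into below):

kubectl describe nodes | grep -E '^Name:|^Taints:'
# Name:    master
# Taints:  node-role.kubernetes.io/master:NoSchedule
# Name:    slave1
# Taints:  <none>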

gogeof commented Jul 24, 2017

You're right:

  • kubectl describe node bogon (my node name):
Taints:			node-role.kubernetes.io/master:NoSchedule
  • That is my problem.
    Since I want to deploy pods on the master node, I should run this command against the master node:
kubectl taint nodes <nodeName> node-role.kubernetes.io/master:NoSchedule-
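
As a quick check (a sketch; <nodeName> stands in for the real node name as above), the taint state can be verified after removal, and the default master taint can be restored later if desired:

kubectl describe node <nodeName> | grep Taints
# expected after removal: Taints: <none>
kubectl taint nodes <nodeName> node-role.kubernetes.io/master=:NoSchedule
# restores the default master taint
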
Contributor

wanghaoran1988 commented Jul 28, 2017

Close, feel free to reopen if you have other concerns.

Contributor

wanghaoran1988 commented Jul 28, 2017

/close

Contributor

k8s-ci-robot commented Jul 28, 2017

@wanghaoran1988: you can't close an issue/PR unless you authored it or you are assigned to it.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Contributor

wanghaoran1988 commented Jul 28, 2017

/assign

Contributor

wanghaoran1988 commented Jul 28, 2017

/close

huangjiasingle commented Aug 1, 2017

@wanghaoran1988 ........

Contributor

wanghaoran1988 commented Aug 1, 2017

@huangjiasingle Is there anything I can help you with? ErrTaintsTolerationsNotMatch means your pods cannot tolerate the taints your node has.
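
For illustration, if the goal really is to schedule a pod onto a tainted node, a matching toleration can be added under the pod template's spec. A minimal sketch for the default master taint (assuming the node carries node-role.kubernetes.io/master:NoSchedule):

tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule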

tiny1990 commented Feb 21, 2018

Works for me.

Andrewpqc commented Apr 24, 2018

/close

jibinjohnbabu commented Jul 10, 2018

Frankly, I don't understand any of the resolutions given here. I'm facing the below error in some of my pods when I try to do an ELK deployment in my Kubernetes cluster:

No nodes are available that match all of the following predicates:: MatchNodeSelector (1), PodFitsHostPorts (5), PodToleratesNodeTaints (1).

Can anyone help?
