
All pods of one node: pod is Running but Conditions shows Ready: False, so endpoints is empty #84979

Closed
mengqiuyuwl opened this issue Nov 8, 2019 · 6 comments

@mengqiuyuwl commented Nov 8, 2019

What happened:
After rebooting one node, the pods on it show Running but their Ready condition stays False, so the Service's endpoints are empty.
What you expected to happen:
All pods become Ready again after the reboot.
How to reproduce it (as minimally and precisely as possible):
Reboot the node VM.
Anything else we need to know?:
k8s cluster: two masters, three nodes
Observed behavior:
kubectl get pod -o wide|grep oasis-ui-admin
oasis-ui-admin-7cd99ff45d-dt98g 1/1 Running 0 28h 172.34.70.27 192.168.56.54
[root@centos54 ~]# kubectl get ep|grep oasis-ui-admin
oasis-ui-admin 28h
[root@centos54 ~]#

kubectl describe pod oasis-ui-admin-7cd99ff45d-dt98g
Name: oasis-ui-admin-7cd99ff45d-dt98g
Namespace: default
Priority: 0
Node: 192.168.56.54/192.168.56.54
Start Time: Thu, 07 Nov 2019 13:07:54 +0800
Labels: feature=oasis_base
name=oasis-ui-admin
pod-template-hash=7cd99ff45d
Annotations:
Status: Running
IP: 172.34.70.27
Controlled By: ReplicaSet/oasis-ui-admin-7cd99ff45d
Containers:
oasis-ui-admin:
Container ID: docker://320b3dffa5a4d0e5c69afe809bcf02a54da633bb69bfeb8e2d391e57ca088cc0
Image: h3crd-wlan1.chinacloudapp.cn:5000/buildonly/oasis-ui-admin:R10.0.0.10.0.0_20191021103414
Image ID: docker-pullable://h3crd-wlan1.chinacloudapp.cn:5000/buildonly/oasis-ui-admin@sha256:d2b919c04b2ffa98e3e426f4aac6e55f69f49844821f05264713c43f7823fe01
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Thu, 07 Nov 2019 15:44:38 +0800
Ready: True
Restart Count: 0
Limits:
cpu: 1200m
memory: 1230Mi
Requests:
cpu: 30m
memory: 200Mi
Environment:
O2O_PROFILE: release
Mounts:
/etc/localtime from systime (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-jhq92 (ro)
/workspace/logs from syslog (rw)
/workspace/src/www/ from web-packages (rw)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady True
PodScheduled True
Volumes:
syslog:
Type: HostPath (bare host directory volume)
Path: /home/azureuser/logs
HostPathType:
systime:
Type: HostPath (bare host directory volume)
Path: /etc/localtime
HostPathType:
web-packages:
Type: HostPath (bare host directory volume)
Path: /home/h3coasis/web-packages/www/
HostPathType:
default-token-jhq92:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-jhq92
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
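The Conditions block above shows why the ENDPOINTS column is empty: the endpoints controller only counts a pod's IP as a ready address when the pod-level Ready condition is True, even when ContainersReady is True. A minimal Python sketch of that filter (helper names are illustrative, not the actual controller code):

```python
def is_pod_ready(pod: dict) -> bool:
    """True only if the pod-level 'Ready' condition has status 'True',
    mirroring how readiness is judged for endpoint membership."""
    for cond in pod.get("status", {}).get("conditions", []):
        if cond["type"] == "Ready":
            return cond["status"] == "True"
    return False

def endpoint_addresses(pods):
    """Split pod IPs into ready / not-ready buckets, the way the
    endpoints controller builds a Service's Endpoints subsets."""
    ready, not_ready = [], []
    for pod in pods:
        ip = pod.get("status", {}).get("podIP")
        if not ip:
            continue  # pod has no IP yet; it cannot be an endpoint
        (ready if is_pod_ready(pod) else not_ready).append(ip)
    return ready, not_ready
```

With the conditions from the describe output above (Ready=False, ContainersReady=True), the pod IP 172.34.70.27 lands in the not-ready bucket, which matches the empty ENDPOINTS column in `kubectl get ep`.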

Environment:

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:50Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:
  • OS (e.g: cat /etc/os-release):
    CentOS Linux release 7.4.1708 (Core)
  • Kernel (e.g. uname -a):
    Linux centos52 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
    Binary
  • Network plugin and version (if this is a network-related bug):
  • Others:
    docker version 19.03.1
@mengqiuyuwl added the kind/bug label Nov 8, 2019
@k8s-ci-robot (Contributor) commented Nov 8, 2019

@mengqiuyuwl: There are no sig labels on this issue. Please add a sig label by either:

  1. mentioning a sig: @kubernetes/sig-<group-name>-<group-suffix>
    e.g., @kubernetes/sig-contributor-experience-<group-suffix> to notify the contributor experience sig, OR

  2. specifying the label manually: /sig <group-name>
    e.g., /sig scalability to apply the sig/scalability label

Note: Method 1 will trigger an email to the group. See the group list.
The <group-suffix> in method 1 has to be replaced with one of these: bugs, feature-requests, pr-reviews, test-failures, proposals.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@renjian52 commented Nov 8, 2019

It seems the same as #84931.

@mengqiuyuwl (Author) commented Nov 9, 2019

Is this a bug?

@mengqiuyuwl (Author) commented Nov 9, 2019

It seems the same as #84931.

Is this a bug?

@neolit123 (Member) commented Nov 9, 2019

closing as duplicate of #84931
please continue the discussion there.

/close

@k8s-ci-robot (Contributor) commented Nov 9, 2019

@neolit123: Closing this issue.

In response to this:

closing as duplicate of #84931
please continue the discussion there.

/close

