Pod Readiness confusion in Troubleshooting Deployments guide #259

Open
nickperry opened this issue Dec 12, 2019 · 3 comments

Comments

@nickperry

Throughout https://learnk8s.io/troubleshooting-deployments there seems to be some confusion between the containers of a pod being ready and the pod itself being ready.

It is not possible to determine that a pod is ready from the default kubectl get pods output - only whether it is Running and how many of its containers are ready.
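For example, the default columns for a (hypothetical) two-container pod look like this; the READY column counts ready containers and says nothing about the pod's own Ready condition:

  NAME                  READY   STATUS    RESTARTS   AGE
  app-7d9f6c5b4-x2k9p   2/2     Running   0          5m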

It is (unfortunately) possible for all of the containers in a pod to be Ready but the Pod itself not to be Ready.

This is an important distinction and alters the fault finding flow.

You can see if a pod is ready in the Conditions section of kubectl describe pods...

Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
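If you want to read the pod-level Ready condition directly, a JSONPath query works too (the pod name here is hypothetical):

  kubectl get pod app-7d9f6c5b4-x2k9p \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'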

@danielepolencic
Contributor

It is (unfortunately) possible for all of the containers in a pod to be Ready but the Pod itself not to be Ready.

Could you offer an example of this?

While I understand the points about multiple containers being (not) ready, I struggle to think of a scenario where all containers are Ready but the Pod isn't.

I think the diagram doesn't do a stellar job of explaining that you could have multiple containers inside a pod and some of them could be broken.
In fact, we don't have:

  • kubectl logs -c to select a specific container or kubectl logs --all-containers
  • kubectl exec -c
  • init containers
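For reference, those invocations look roughly like this (pod and container names are hypothetical):

  kubectl logs my-pod -c my-container       # logs from one specific container
  kubectl logs my-pod --all-containers      # logs from every container in the pod
  kubectl exec -it my-pod -c my-container -- sh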

Those were left out on purpose as the aim of the diagram was to target newcomers. However, we're not against including more branches.

@nickperry
Author

nickperry commented Dec 19, 2019

Sure. Here are a couple of examples of when you would have running pods with ready containers but the pods would not be marked as ready:

kubernetes/kubernetes#80968
kubernetes/kubernetes#84931

We had a serious production outage due to this a couple of weeks ago. The pods were all running fine, but there were no endpoints for the services.
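A quick sketch of how that scenario shows up (service name, label, and pod name are hypothetical): the pods report Running with all containers ready, yet the service has no endpoints because the pod-level Ready condition is stuck at False:

  kubectl get pods -l app=my-app        # READY 1/1, STATUS Running
  kubectl get endpoints my-service      # ENDPOINTS column stays empty
  kubectl get pod my-app-7d9f6c5b4-x2k9p \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints False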

I guess it depends on whether you want your fault-finding flow to assume perfect control plane behaviour or not. Unfortunately, most monitoring products make the same assumption.

@cjroebuck

Just ran into this issue too in production!
