
Add ContainersReady condition into Pod Status #64646

Merged · 3 commits merged into kubernetes:master on Jun 6, 2018

Conversation

freehan (Member) commented Jun 2, 2018

Last 3 commits are new

Follow up PR of: #64057 and #64344

Have a single PR for adding ContainersReady per #64344 (comment)

Introduce ContainersReady condition in Pod Status

/assign yujuhong for review
/assign thockin for the tiny API change

dixudx (Member) left a comment

Mostly lgtm. Some nits.

Moreover, lastTransitionTime is missing for the new PodConditionType ContainersReady. You have to
add updateLastTransitionTime(&status, &oldStatus, v1.ContainersReady) in method updateStatusInternal (pkg/kubelet/status/status_manager.go).

if conditions == nil {
return -1, nil
}
for i := range conditions {

dixudx (Member) commented Jun 2, 2018

Combine L266 with L269? Then we can have a single return -1, nil.

// all conditions specified in the readiness gates have status equal to "True"
// More info: https://github.com/kubernetes/community/blob/master/keps/sig-network/0007-pod-ready%2B%2B.md
// +optional
repeated PodReadinessGate readinessGates = 28;

dixudx (Member) commented Jun 2, 2018

change to 27?

// all conditions specified in the readiness gates have status equal to "True"
// More info: https://github.com/kubernetes/community/blob/master/keps/sig-network/0007-pod-ready%2B%2B.md
// +optional
ReadinessGates []PodReadinessGate `json:"readinessGates,omitempty" protobuf:"bytes,28,opt,name=readinessGates"`

dixudx (Member) commented Jun 2, 2018

change 28 to 27?

freehan (Member, Author) commented Jun 4, 2018

27 is already taken

@@ -1407,6 +1408,11 @@ func (kl *Kubelet) convertStatusToAPIStatus(pod *v1.Pod, podStatus *kubecontaine
true,
)

for _, c := range pod.Status.Conditions {

dixudx (Member) commented Jun 2, 2018

Better add a comment for this.

Like, "preserve non-system condition types". WDYT?

freehan (Member, Author) commented Jun 4, 2018

sure
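The loop under discussion carries conditions the kubelet does not own (e.g. readiness-gate conditions set by external controllers) over from the old status into the newly generated one. A hedged sketch of that merge, with simplified stand-in types and a hypothetical owned-set (the real kubelet uses its own condition-type checks, not this map):

```go
package main

import "fmt"

// Simplified stand-in for v1.PodCondition.
type PodConditionType string
type PodCondition struct {
	Type   PodConditionType
	Status string
}

// Hypothetical set of condition types the kubelet owns; anything else is
// assumed to be set by an external controller (e.g. a readiness gate) and
// must survive a status regeneration.
var kubeletOwned = map[PodConditionType]bool{
	"PodScheduled":    true,
	"Initialized":     true,
	"ContainersReady": true,
	"Ready":           true,
}

// mergeConditions: preserve non-system condition types by copying any
// condition the kubelet does not own from the old status into the
// freshly generated one.
func mergeConditions(generated, old []PodCondition) []PodCondition {
	out := append([]PodCondition(nil), generated...)
	for _, c := range old {
		if !kubeletOwned[c.Type] {
			out = append(out, c)
		}
	}
	return out
}

func main() {
	generated := []PodCondition{{Type: "Ready", Status: "False"}}
	old := []PodCondition{
		{Type: "Ready", Status: "True"},                     // kubelet-owned: superseded
		{Type: "www.example.com/feature-1", Status: "True"}, // custom gate condition: preserved
	}
	fmt.Println(len(mergeConditions(generated, old))) // 2
}
```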

yujuhong (Member) left a comment

@freehan could you remove the readiness gate commits from this PR?

return GetPodConditionFromList(status.Conditions, conditionType)
}

// GetPodConditionFromList extracts the provided condition from the given list of condition and returns that.

yujuhong (Member) commented Jun 4, 2018

s/and returns that./and returns the index of the condition and the condition.

}

// GetPodConditionFromList extracts the provided condition from the given list of condition and returns that.
// Returns nil and -1 if the condition is not present, and the index of the located condition.

yujuhong (Member) commented Jun 4, 2018

s/Returns nil and -1/Returns -1 and nil
Remove , and the index of the located condition.

@@ -2068,6 +2068,8 @@ const (
// PodReasonUnschedulable reason in PodScheduled PodCondition means that the scheduler
// can't schedule the pod right now, for example due to insufficient resources in the cluster.
PodReasonUnschedulable = "Unschedulable"
// ContainersReady indicates whether all containers in the pod are ready

yujuhong (Member) commented Jun 4, 2018

nit: end the comment with a period (.)

@@ -2409,6 +2411,12 @@ const (
TolerationOpEqual TolerationOperator = "Equal"
)

// PodReadinessGate contains the reference to a pod condition

yujuhong (Member) commented Jun 4, 2018

Not sure why the add ReadinessGates in pod spec commit is in this PR...

freehan (Member, Author) commented Jun 4, 2018

Because it was not merged at the time...
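For context, the PodReadinessGate field being reviewed feeds the pod-ready++ computation described in the linked KEP: a pod counts as Ready only when ContainersReady is True and every condition named in spec.readinessGates reports "True". A minimal sketch of that rule under simplified stand-in types (not the real kubelet code):

```go
package main

import "fmt"

// Simplified stand-ins for the v1 API types.
type PodConditionType string
type ConditionStatus string

const ConditionTrue ConditionStatus = "True"

type PodCondition struct {
	Type   PodConditionType
	Status ConditionStatus
}

type PodReadinessGate struct {
	ConditionType PodConditionType
}

// podReady reflects the pod-ready++ rule: ContainersReady must be True,
// and every readiness-gate condition must be present and True.
func podReady(conditions []PodCondition, gates []PodReadinessGate) bool {
	status := func(t PodConditionType) ConditionStatus {
		for _, c := range conditions {
			if c.Type == t {
				return c.Status
			}
		}
		return "" // absent conditions count as not ready
	}
	if status("ContainersReady") != ConditionTrue {
		return false
	}
	for _, g := range gates {
		if status(g.ConditionType) != ConditionTrue {
			return false
		}
	}
	return true
}

func main() {
	gates := []PodReadinessGate{{ConditionType: "www.example.com/feature-1"}}
	conds := []PodCondition{{Type: "ContainersReady", Status: "True"}}
	fmt.Println(podReady(conds, gates)) // false: gate condition not set yet
	conds = append(conds, PodCondition{Type: "www.example.com/feature-1", Status: "True"})
	fmt.Println(podReady(conds, gates)) // true
}
```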

@freehan freehan force-pushed the freehan:pod-ready-plus2-new branch from c4158c0 to 2d8695b Jun 4, 2018

@k8s-ci-robot k8s-ci-robot added size/L and removed size/XXL labels Jun 4, 2018

@freehan freehan force-pushed the freehan:pod-ready-plus2-new branch from 2d8695b to 3d868b8 Jun 4, 2018

freehan (Member, Author) commented Jun 4, 2018

/assign thockin for approval

k8s-ci-robot (Contributor) commented Jun 4, 2018

@freehan: GitHub didn't allow me to assign the following users: for, approval.

Note that only kubernetes members and repo collaborators can be assigned.

In response to this:

/assign thockin for approval

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

yujuhong (Member) left a comment

Will review after the dependency is merged. Adding this to prevent the PR from merging.

thockin (Member) commented Jun 4, 2018

Thanks @freehan and @yujuhong

jberkus commented Jun 5, 2018

So, is this PR required if the other PodReady PRs merge?

You need some more labels here:

/sig network
/priority important-soon

freehan added some commits May 25, 2018

@freehan freehan force-pushed the freehan:pod-ready-plus2-new branch from 3d868b8 to 370268f Jun 5, 2018

freehan (Member, Author) commented Jun 5, 2018

So, is this PR required if the other PodReady PRs merge?

@jberkus Yes. This PR completes the proposal.

colemickens (Contributor) commented Jun 5, 2018

Hey folks, from the comments, it sounds like this is desired for 1.11, but it's not labeled correctly for that to happen, and there are additional labels needed if it doesn't merge today before the beginning of code freeze. Please ensure the labels for the milestone and urgency are set correctly.

jberkus commented Jun 5, 2018

Going to bump to critical-urgent based on @freehan's comment.

/priority critical-urgent

yujuhong (Member) left a comment

/lgtm

thockin (Member) commented Jun 5, 2018

/approve

k8s-ci-robot (Contributor) commented Jun 5, 2018

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: freehan, thockin, yujuhong

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

k8s-github-robot (Contributor) commented Jun 6, 2018

[MILESTONENOTIFIER] Milestone Pull Request: Up-to-date for process

@freehan @thockin @yujuhong

Pull Request Labels
  • sig/network: Pull Request will be escalated to these SIGs if needed.
  • priority/critical-urgent: Never automatically move pull request out of a release milestone; continually escalate to contributor and SIG through all available channels.
  • kind/feature: New functionality.
k8s-github-robot (Contributor) commented Jun 6, 2018

/test all [submit-queue is verifying that this PR is safe to merge]

k8s-github-robot (Contributor) commented Jun 6, 2018

Automatic merge from submit-queue (batch tested with PRs 63717, 64646, 64792, 64784, 64800). If you want to cherry-pick this change to another branch, please follow the instructions here.

@k8s-github-robot k8s-github-robot merged commit 0b8394a into kubernetes:master Jun 6, 2018

10 of 18 checks passed

  • Submit Queue: Required GitHub CI test is not green: pull-kubernetes-e2e-gce
  • pull-kubernetes-e2e-gce: Job triggered.
  • pull-kubernetes-e2e-gce-100-performance: Job triggered.
  • pull-kubernetes-e2e-gce-device-plugin-gpu: Job triggered.
  • pull-kubernetes-e2e-kops-aws: Job triggered.
  • pull-kubernetes-kubemark-e2e-gce: Job triggered.
  • pull-kubernetes-kubemark-e2e-gce-big: Job triggered.
  • pull-kubernetes-verify: Job triggered.
  • cla/linuxfoundation: freehan authorized
  • pull-kubernetes-bazel-build: Job succeeded.
  • pull-kubernetes-bazel-test: Job succeeded.
  • pull-kubernetes-cross: Skipped
  • pull-kubernetes-e2e-gke: Skipped
  • pull-kubernetes-integration: Job succeeded.
  • pull-kubernetes-local-e2e: Skipped
  • pull-kubernetes-local-e2e-containerized: Job succeeded.
  • pull-kubernetes-node-e2e: Job succeeded.
  • pull-kubernetes-typecheck: Job succeeded.
k8s-ci-robot (Contributor) commented Jun 6, 2018

@freehan: The following test failed, say /retest to rerun them all:

  • pull-kubernetes-e2e-kops-aws (commit 370268f): rerun with /test pull-kubernetes-e2e-kops-aws

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

zparnold (Member) commented Jun 13, 2018

Hello there! @freehan, I'm Zach Arnold, working on docs for the 1.11 release. This PR was identified as one needing some documentation in the https://github.com/kubernetes/website repo around your contributions (thanks, by the way!). When you have some time, could you please modify/add/remove the relevant content that needs changing in our documentation repo? Thanks! Please let me or my colleague Misty know (@zparnold/@Misty on K8s Slack) if you need any assistance with the documentation.

@tengqm tengqm referenced this pull request Jun 16, 2018

Closed

Document the ContainersReady condition in Pod Status #9099

0 of 1 task complete
lucperkins commented Jun 22, 2018

@freehan Please note that I've created a PR for this in the website repo: kubernetes/website#9197.

Please let me know if there is any usage-related info that I should be aware of. Thus far I've added it to the list of possible Pod statuses, but would love to add more than that.

k8s-github-robot pushed a commit that referenced this pull request Sep 9, 2018

Kubernetes Submit Queue
Merge pull request #64867 from dixudx/missing_container_ready_ltt
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

add missing LastTransitionTime of ContainerReady condition

**What this PR does / why we need it**:
add missing LastTransitionTime of ContainerReady condition

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
xref #64646

**Special notes for your reviewer**:
/cc freehan yujuhong

**Release note**:

```release-note
add missing LastTransitionTime of ContainerReady condition
```
timchenxiaoyu (Contributor) commented Nov 22, 2018

We already have per-container ready info in the pod status, so I don't understand why we need ContainersReady?
