add pod lifecycle intervals to separate pages #26908
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: deads2k. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment
Force-pushed from 42e3198 to de49307.
/retest
In this job from this PR, searched for: "ns/openshift-etcd pod/revision-pruner-7-ci-op-33pz4yh4-2a78c-c8x64-master-1 uid/0f5d9416-976b-4825-953b-084df247fffc" and saw:
I looked at this one (de49307):
here's another one (using CI for de49307):
    if (m[2] == "ContainerStart") {
      return [item.locator, ` (container lifecycle)`, "ContainerStart"];
    }
}
should we add a case for ContainerExit?
container exit shouldn't have an interval because that's the absence of an interval, right?
    continue
}
annotationTokens := strings.Split(curr, "/")
annotations[annotationTokens[0]] = annotationTokens[1]
small savings but can we just do this:

if annotationTokens[0] == "reason" {
    return annotationTokens[1]
}

though I'm not sure what to return if you didn't find "reason" in the tokens.
I'd like to keep the logic that produces the annotations because I suspect we will need it again.
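Both sides of that exchange can be reconciled in a sketch like the following (names are illustrative, not the PR's exact code): keep building the full annotation map, as the author wants, and answer the reviewer's "what to return if reason is missing" question with an explicit second return value:

```go
package main

import (
	"fmt"
	"strings"
)

// reasonFrom splits a message like "reason/ContainerStart duration/2s" into
// key/value annotations and returns the "reason" value. The boolean answers
// the open question above: a missing "reason" is reported explicitly rather
// than by some arbitrary sentinel string.
func reasonFrom(message string) (string, bool) {
	annotations := map[string]string{}
	for _, curr := range strings.Split(message, " ") {
		if !strings.Contains(curr, "/") {
			continue
		}
		annotationTokens := strings.SplitN(curr, "/", 2)
		annotations[annotationTokens[0]] = annotationTokens[1]
	}
	reason, ok := annotations["reason"]
	return reason, ok
}

func main() {
	fmt.Println(reasonFrom("reason/ContainerStart duration/2s")) // prints: ContainerStart true
}
```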
/retest
/test all
@DennisPeriquet ok, worked it out. Those are terminated before the test starts. I can fix the times, but they still won't show up on the chart.
podCoordinates := monitorapi.PodFrom(inLocator)

// no hit for deleted, but if it's a RunOnce pod with all terminated containers, the logical "this pod is over"
// happens when the last container is terminated.
I can see how this comment is relevant on lines 172-173 but not sure how it's relevant here.
if !ok {
    return t.delegate.getEndTime(locator)
}
for i := len(containerEvents) - 1; i >= 0; i-- {
why did you choose to walk the containerEvents from the end like this vs. using for ... range like in line 141?
/retest
Cool. I did a check on the junit/jsons on a jobrun from 122151c and didn't find any cases where the
/retest-required Please review the full test history for this PR and help us cut down flakes.
/test all
On the two required (and failing) jobs, I'm seeing a lot of:
/lgtm cancel
fix identified in a different PR by @DennisPeriquet relabelling
/test all
/skip
/retest-required Please review the full test history for this PR and help us cut down flakes.
failed on "pods should successfully create sandboxes by other" /override ci/prow/e2e-aws-fips
@deads2k: Overrode contexts on behalf of deads2k: ci/prow/e2e-aws-fips, ci/prow/e2e-gcp In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@deads2k: The following tests failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
This may be the final edition. It sorts the pod lifecycle on the intervals chart by namespace.
There are still lots and lots of them, but you can see it ripple out by namespace if this works.
The openshift-tests artifact directory contains multiple intervals charts for pods in particular namespaces, which makes it possible to determine which pods were running, and under what conditions. Keep in mind that readiness here reflects the container status, not the success or failure of an individual readiness check.
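As a rough illustration of how intervals could be grouped by namespace for the per-namespace charts described above (the locator format mirrors the one quoted earlier in this thread; the helper itself is a sketch, not the PR's code):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// byNamespace groups interval locators like "ns/openshift-etcd pod/etcd-0"
// by their ns/ token and returns the namespaces in sorted order, which is
// roughly how a chart could lay out pod lifecycle rows namespace by namespace.
func byNamespace(locators []string) (map[string][]string, []string) {
	groups := map[string][]string{}
	for _, loc := range locators {
		ns := ""
		for _, tok := range strings.Fields(loc) {
			if strings.HasPrefix(tok, "ns/") {
				ns = strings.TrimPrefix(tok, "ns/")
			}
		}
		groups[ns] = append(groups[ns], loc)
	}
	names := make([]string, 0, len(groups))
	for ns := range groups {
		names = append(names, ns)
	}
	sort.Strings(names)
	return groups, names
}

func main() {
	_, order := byNamespace([]string{"ns/openshift-etcd pod/a", "ns/default pod/b"})
	fmt.Println(order) // prints [default openshift-etcd]
}
```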