expose pod information to oas container via env variables #440

Conversation

p0lyn0mial (Contributor)

The pod name and the namespace are used by the termination code to properly record events.

Before this change, the termination events were recorded in the default namespace and weren't associated with any pod.
After this change, the termination events are recorded in the openshift-apiserver namespace and are bound to the oas pods.

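For context, exposing a pod's own name and namespace to its container is typically done through the Kubernetes downward API. The sketch below shows that wiring with the corev1 Go types; the POD_NAME and POD_NAMESPACE variable names are illustrative assumptions, and the actual env var names and deployment template used by this PR may differ.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podInfoEnvVars builds env vars that surface the pod's own name and
// namespace to the container via the downward API, so code running inside
// (e.g. the termination/event-recording path) can attach events to the
// right pod and namespace instead of falling back to "default".
func podInfoEnvVars() []corev1.EnvVar {
	return []corev1.EnvVar{
		{
			Name: "POD_NAME", // illustrative name, not necessarily what the operator uses
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
			},
		},
		{
			Name: "POD_NAMESPACE", // illustrative name
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
			},
		},
	}
}

func main() {
	for _, e := range podInfoEnvVars() {
		fmt.Printf("%s <- fieldRef %s\n", e.Name, e.ValueFrom.FieldRef.FieldPath)
	}
}
```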
@p0lyn0mial (Contributor, Author)

/assign @smarterclayton @sttts

@p0lyn0mial (Contributor, Author)

I'm going to check oauth-apiserver as well.

@p0lyn0mial (Contributor, Author)

For the record, TerminationMinimalShutdownDuration is set to 0 for both oas and oauth-apiserver. This seems to be OK since we already have terminationGracePeriodSeconds (70s) set.
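For reference, terminationGracePeriodSeconds is a plain pod-spec field; a minimal sketch with the corev1 Go types, using the 70s value quoted above:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// terminationGracePeriodSeconds caps how long the kubelet waits between
	// sending SIGTERM and force-killing the container; 70 matches the value
	// mentioned in this comment.
	grace := int64(70)
	spec := corev1.PodSpec{TerminationGracePeriodSeconds: &grace}
	fmt.Println("terminationGracePeriodSeconds:", *spec.TerminationGracePeriodSeconds)
}
```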

@p0lyn0mial (Contributor, Author) commented Mar 30, 2021

> For the record, TerminationMinimalShutdownDuration is set to 0 for both oas and oauth-apiserver. This seems to be OK since we already have terminationGracePeriodSeconds (70s) set.

This is wrong! terminationGracePeriodSeconds is the maximum time the kubelet waits before it force-kills oas; it doesn't guarantee that the process will always get that long.

That means our aggregated APIs are not terminated gracefully.
According to Clayton, things on the pod/service network need to wait 15-20s, so we should set shutdown-delay-duration to at least 15s. Thanks!
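A minimal sketch of the corresponding setting, assuming the generic apiserver library's ServerRunOptions (which backs the --shutdown-delay-duration flag); the exact wiring in the oas and oauth-apiserver operators may differ:

```go
package main

import (
	"fmt"
	"time"

	genericoptions "k8s.io/apiserver/pkg/server/options"
)

func main() {
	// During the shutdown delay the server keeps serving requests while its
	// readiness check fails, giving clients on the pod/service network time
	// (15-20s per the comment above) to observe the endpoint removal before
	// the listener actually goes away.
	opts := genericoptions.NewServerRunOptions()
	opts.ShutdownDelayDuration = 15 * time.Second
	fmt.Println("--shutdown-delay-duration =", opts.ShutdownDelayDuration)
}
```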

@p0lyn0mial (Contributor, Author)

I'm going to open follow-up PRs for shutdown-delay-duration next week, after my holidays.

@sttts (Contributor) commented Apr 6, 2021

/lgtm
/approve

@openshift-ci-robot added the lgtm label on Apr 6, 2021
@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: p0lyn0mial, sttts

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot added the approved label on Apr 6, 2021
@openshift-bot (Contributor)

/retest

Please review the full test history for this PR and help us cut down flakes.

Four similar /retest comments from @openshift-bot followed.

@openshift-merge-robot merged commit e24658e into openshift:master on Apr 6, 2021