temporary search for memory.max in build container during quota test #26363

Merged
gabemontero merged 1 commit into openshift:master from gabemontero:more-cgroup2-shell-work on Aug 10, 2021

Conversation

gabemontero
Contributor

No description provided.

@gabemontero
Contributor Author

/assign @adambkaplan

The quota test needs a bit more tweaking, as the cgroup v2 memory.max file is not exactly where the script anticipates it to be. So I have added some finds for the file, which we can come back and remove once this is all sorted out. Once this merges, the next cgroup v2 run in openshift/builder#252 should confirm exactly if/where the memory.max file is located in the build container, and we can tweak the assemble script as needed.
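
Roughly, the temporary search described above amounts to something like the sketch below. This is a hedged illustration rather than the exact diff; the /sys/fs/cgroup mount point and the echo banners are assumptions.

```bash
#!/bin/sh
# Temporary debug aid, to be removed once the cgroup v2 layout is confirmed:
# search the build container's cgroup mount for the v2 memory.max file and,
# for comparison, the v1 memory.limit_in_bytes file.
echo "searching for cgroup v2 memory.max:"
find /sys/fs/cgroup -name memory.max 2>/dev/null

echo "searching for cgroup v1 memory.limit_in_bytes:"
find /sys/fs/cgroup -name memory.limit_in_bytes 2>/dev/null
```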

The last failed run there shows the current changes in openshift/origin now firing for both cgroup v1 and v2; it is just not finding the file in either case. The failing test debug also shows that the util_linux.go changes in this PR are running as expected, with the build pod's memory.max file being located and the correct memory limit being pulled. It is just a question of where in the build container to find it. Perhaps, with the build container being in a privileged pod, it is in the pod-specific subdir? The find should confirm.
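
Once the find output confirms the location, the assemble script can presumably read the limit from whichever file is present, along the lines of the hedged sketch below. The two candidate paths are the standard cgroup v1/v2 mount locations and are assumptions; if memory.max turns out to live in a pod-specific subdir, that path would be added once the find output from this PR confirms it.

```bash
#!/bin/sh
# Hypothetical sketch: read the container memory limit from cgroup v2
# (memory.max) if present, otherwise fall back to cgroup v1
# (memory.limit_in_bytes). A pod-specific subdir path may need to be
# added here once the find output confirms where memory.max actually lives.
for f in /sys/fs/cgroup/memory.max \
         /sys/fs/cgroup/memory/memory.limit_in_bytes; do
    if [ -f "$f" ]; then
        echo "memory limit from $f: $(cat "$f")"
        break
    fi
done
```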

@gabemontero
Contributor Author

With @adambkaplan on PTO, can you approve, @bparees? Per the details above, I need these finds short term to nail down in openshift/builder#252 where to look for the memory.max file in cgroup v2.

Unfortunately (though @vrutkovs can correct me if I am wrong here), there is no way yet to launch a cgroup v2 cluster via cluster-bot.

Once this merges and we sort out the location, I'll change this file again to point at the file in question. And if more changes are needed in openshift/builder#252 to get the file into the expected location, we'll drive that as well. Ultimately, I believe we can remove these finds.

/assign @coreydaley in case @bparees just wants to approve and wants an lgtm from someone on the team.

thanks

@bparees
Contributor

bparees commented Aug 9, 2021

/approve

@openshift-ci openshift-ci bot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files.) on Aug 9, 2021
@coreydaley
Member

/lgtm

@openshift-ci openshift-ci bot added the lgtm label (Indicates that a PR is ready to be merged.) on Aug 9, 2021
@openshift-ci
Contributor

openshift-ci bot commented Aug 9, 2021

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: bparees, coreydaley, gabemontero

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@gabemontero
Contributor Author

/retest

@gabemontero
Contributor Author

/skip

@gabemontero
Contributor Author

/retest

@openshift-bot
Contributor

/retest-required

Please review the full test history for this PR and help us cut down flakes.

1 similar comment
@openshift-bot
Contributor

/retest-required

Please review the full test history for this PR and help us cut down flakes.

@openshift-ci
Contributor

openshift-ci bot commented Aug 10, 2021

@gabemontero: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Rerun command |
| --- | --- | --- | --- |
| ci/prow/e2e-agnostic-cmd | 1c0e0b5 | link | /test e2e-agnostic-cmd |

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@openshift-bot
Contributor

/retest-required

Please review the full test history for this PR and help us cut down flakes.

@bparees bparees added the bugzilla/valid-bug label (Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting.) on Aug 10, 2021
@openshift-ci openshift-ci bot merged commit f4dedab into openshift:master Aug 10, 2021
@gabemontero gabemontero deleted the more-cgroup2-shell-work branch August 11, 2021 01:09