Vendor cleanup #2124
Conversation
Hi @vrothberg. Thanks for your PR. I'm waiting for a kubernetes-sigs or kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The CI seems to go wild on other PRs as well. However, Travis seems happy with the vendor changes. I'd appreciate it if we can get this PR in before the others to avoid potentially very hairy rebase conflicts.
The changes LGTM, best of luck getting Travis happy.
Thanks, Tom! This PR was a bit of a challenge as so many things had to be touched.
Force-pushed 51f07a9 to 26140d1
/hold
Force-pushed 0612c65 to f9aec36
One step further. We're now hitting a runtime error caused by a conflict between go-criu and run:
Force-pushed 04bfdac to 2bb143a
👍 This cleanup is great and important from my POV, so I would like to push the topic forward. There are just some small comments/suggestions from my side. Is it still on hold?
```diff
@@ -9,12 +9,16 @@ import (
 )

 func (c *ContainerServer) addSandboxPlatform(sb *sandbox.Sandbox) {
-	c.state.processLevels[selinux.NewContext(sb.ProcessLabel())["level"]]++
+	// NewContext() always returns nil, so we can safely ignore the error
+	ctx, _ := selinux.NewContext(sb.ProcessLabel())
```
I would still go for the error handling here since I have the feeling that it will be missed when the day comes that `selinux.NewContext` returns a real error.
Agreed, from a maintainability perspective we should catch the error if one ever gets returned.
I think that @rhatdan wants to get rid of those entirely by letting containers/storage handle all label issues, but I can update the code now to be on the safe side.
I've had a look at the call stack and don't think it's worth the effort. We had to change a whole bunch of APIs up to the ContainerServer which does not really implement error handling (even for {Add,Remove}Container).
I agree that errors shouldn't be ignored but given this affects large parts of the API, I prefer to defer this to a separate PR.
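To illustrate the pattern being discussed, here is a minimal, self-contained Go sketch. Note that `newContext` below is a hypothetical stand-in for `selinux.NewContext` from github.com/opencontainers/selinux, written only to show the error-handling shape; it is not the real library code.

```go
package main

import (
	"fmt"
	"strings"
)

// newContext is a hypothetical stand-in for selinux.NewContext: it parses an
// SELinux label of the form user:role:type:level into a map. The point of the
// sketch is the (value, error) return, not the parsing itself.
func newContext(label string) (map[string]string, error) {
	fields := strings.SplitN(label, ":", 4)
	if len(fields) < 3 {
		return nil, fmt.Errorf("invalid SELinux label %q", label)
	}
	ctx := map[string]string{
		"user": fields[0],
		"role": fields[1],
		"type": fields[2],
	}
	if len(fields) == 4 {
		ctx["level"] = fields[3]
	}
	return ctx, nil
}

func main() {
	// Check the error instead of discarding it with `ctx, _ := ...`, so a
	// future real error from NewContext cannot be silently missed.
	ctx, err := newContext("system_u:system_r:container_t:s0:c1,c2")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("level:", ctx["level"])
}
```

Propagating the error up to the ContainerServer would require the API changes discussed above; this only shows the local pattern.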
```diff
 }

 func (c *ContainerServer) removeSandboxPlatform(sb *sandbox.Sandbox) {
 	processLabel := sb.ProcessLabel()
-	level := selinux.NewContext(processLabel)["level"]
+	// NewContext() always returns nil, so we can safely ignore the error
+	ctx, _ := selinux.NewContext(processLabel)
```
The same applies here. Another option would be to revert the selinux-go related changes and put them in a separate PR like this: #2127. WDYT? (If you disagree I will close my PR.)
Thanks. Yes, it's currently on hold for #2123. Regarding updating more deps: I prefer deferring that to another PR. This one is quite big already and I consider it the first step of a series of commits. Once that's in, the deps are stable and CI will catch regressions.
Resuming work. May God have mercy on me while rebasing.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
/test kata-containers
/test kata-containers
@mrunalp any idea what's going on? Kata seems to go south on other PRs as well.
Let's leave the SELinux issues to the other PR that @umohnani8 is working on. It is close. LGTM
/test kata-containers
LGTM, awesome cleanup!
/retest
/test kata-containers
@sboeuf, do you know if the kata job has a hiccup at the moment? Other PRs show similar issues.
/cc @chavafg
@sboeuf: GitHub didn't allow me to request PR reviews from the following users: chavafg. Note that only kubernetes-sigs members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The error looks pretty weird...
Has something changed in the way the cri-o tests are run? It is looking for the docker daemon to be running, which we actually stop before running the tests. https://github.com/kata-containers/tests/blob/master/integration/cri-o/cri-o.sh#L95-L100
Not to my knowledge. @saschagrunert, as you're the test master, do you have a suspicion?
On the cri-o side nothing should have changed. @chavafg could the issue be related to this part of the logs?

```
14:58:51 Set to test rootfs image
14:58:51 Add kata-runtime as a new/default Docker runtime.
14:58:51 manage_ctr_mgr.sh - INFO: docker_info: version: 18.06.3-ce
14:58:51 manage_ctr_mgr.sh - INFO: docker_info: default runtime: runc
14:58:51 manage_ctr_mgr.sh - INFO: docker_info: package name: docker-ce
14:58:52 <13>Mar 26 14:58:52 manage_ctr_mgr.sh: configure_docker: configuring kata-runtime as default docker runtime
14:58:52 ls: cannot access '/etc/systemd/system/docker.service.d/': No such file or directory
14:58:52 ls: cannot access '/etc/systemd/system/docker.service.d/': No such file or directory
14:58:52 Stopping the docker service
14:58:53 Removing /var/lib/docker
14:58:53 Changing docker service configuration
14:58:53 [Service]
14:58:53 Environment=""
14:58:53 Environment=""
14:58:53 ExecStart=
14:58:53 ExecStart=/usr/bin/dockerd -D --add-runtime kata-runtime=/usr/local/bin/kata-runtime --default-runtime=kata-runtime --storage-driver=overlay2
14:58:53 Reloading unit files and starting docker service
14:59:10 Script finished
```

I think the docker service does not start correctly but is listed as started in the systemd state.
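Judging from the `[Service]`/`ExecStart` lines above, the "Changing docker service configuration" step appears to write a systemd drop-in that overrides the docker unit. A sketch of what that drop-in would look like is below; the file path is an assumption, the contents are taken from the quoted log:

```ini
# Hypothetical drop-in, e.g. /etc/systemd/system/docker.service.d/kata-containers.conf
# (the actual path used by manage_ctr_mgr.sh is not shown in the logs)
[Service]
Environment=""
# An empty ExecStart= first clears the ExecStart inherited from docker.service,
# then the second line replaces it.
ExecStart=
ExecStart=/usr/bin/dockerd -D --add-runtime kata-runtime=/usr/local/bin/kata-runtime --default-runtime=kata-runtime --storage-driver=overlay2
```

After writing such a drop-in, `systemctl daemon-reload` and a service restart are needed, which matches the "Reloading unit files and starting docker service" log line.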
sorry, found the issue. One of our functions in
Thanks @chavafg
/retest
Thanks a lot folks :)
LGTM @umohnani8 @mrunalp PTAL
/hold cancel
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: giuseppe, vrothberg. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
This PR cleans up the vendored dependencies by:
- using `make vendor` to always fetch the latest version of `vndr` to avoid divergence
- removing `vndr` leftovers which have been forgotten in commits prior to this PR
- adding a `hack/tree_status.sh` file to check the status of the local git tree after `make vendor` followed by `./hack/tree_status.sh`