
Fix high CVEs #280

Merged
merged 7 commits into Mirantis:master on Dec 4, 2023

Conversation

nwneisen
Collaborator

This updates some vendor versions in order to fix CVEs. This is the latest version that can be used before running into issues with k8s 1.28.

Proposed Changes

  • Bump k8s package to 1.27.8 and associated indirect packages (see the sketch below)
  • Associated code changes
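
For reference, the bump follows the usual Go modules workflow. A rough sketch only - the authoritative list of modules and versions is in this PR's go.mod diff, and the k8s.io staging modules typically also need matching replace entries:

go get k8s.io/kubernetes@v1.27.8
go mod tidy
go mod vendor
go build ./...   # confirm the vendored tree still compiles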

@nwneisen
Collaborator Author

I patched #224 on my local machine and ran integration tests. All tests are still passing after these changes.

nneisen@pop-os:~/code/cri-dockerd (bump-high-deps): make integration
sudo critest -runtime-endpoint=unix:///var/run/cri-dockerd.sock -ginkgo.skip="runtime should support apparmor|runtime should support reopening container log|runtime should support execSync with timeout|runtime should support selinux|.*should support propagation.*"
critest version: d7c0562b
Running Suite: CRI validation - /home/nneisen/code/cri-dockerd
==============================================================
Random Seed: 1701372999

Will run 74 of 86 specs
•••••••S•••••••S•SS•••••••SSSForwarding from 127.0.0.1:12002 -> 12003
Forwarding from [::1]:12002 -> 12003
Handling connection for 12002
••SSSS•••••S•••••••••Forwarding from 127.0.0.1:12001 -> 80
Forwarding from [::1]:12001 -> 80
Handling connection for 12001
•••••••••••••E1130 12:37:42.749037   93008 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to update container create config: failed to apply container security context for container \"container-with-RunAsGroup-without-RunAsUser-test-724edf13-f440-4aca-bf1f-0a496a4af6db\": runAsGroup is specified without a runAsUser" podSandboxID="01f479a1b9d5efa15cded7b50790ab915a4ab73afc2f37a5bac00099773f7b8c"
•••••••••••••••••••••••

Ran 74 of 86 Specs in 84.190 seconds
SUCCESS! -- 74 Passed | 0 Failed | 0 Pending | 12 Skipped
PASS

nwneisen merged commit 0d54111 into Mirantis:master on Dec 4, 2023
8 checks passed
@vikramhh

vikramhh commented Dec 5, 2023

@nwneisen - I discovered some issues with these changes. We end up with different components depending on different versions of cri-api and other packages. For example, while cri-dockerd itself still uses k8s.io/cri-api/pkg/apis/runtime/v1alpha2, k8s.io/kubernetes has been bumped to v1.27.8, which uses k8s.io/cri-api/pkg/apis/runtime/v1.
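
A quick, illustrative way to confirm the split locally is to grep the import paths on both sides (the vendor path comes from the build errors further down):

grep -rn 'cri-api/pkg/apis/runtime/v1alpha2' --include='*.go' --exclude-dir=vendor .
grep -rn 'cri-api/pkg/apis/runtime/v1"' vendor/k8s.io/kubernetes/pkg/kubelet | head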

In practice this leads to broken tests (although some tests also broke simply because the required code modifications were never made; the repo's CI does not run unit tests, which is likely why this went undetected).

Here is what I see when I run unit tests with the change:

➜  cri-dockerd git:(v0.3.8) ✗  [5/12/23 4:29:02] ~/gopath/src/github.com/Mirantis/cri-dockerd>  go test ./...                                           
# github.com/Mirantis/cri-dockerd/network/hostport [github.com/Mirantis/cri-dockerd/network/hostport.test]
network/hostport/hostport_manager_test.go:134:16: cannot use iptables (variable of type *fakeIPTables) as iptables.Interface value in struct literal: *fakeIPTables does not implement iptables.Interface (missing method Present)
network/hostport/hostport_manager_test.go:238:16: cannot use iptables (variable of type *fakeIPTables) as iptables.Interface value in struct literal: *fakeIPTables does not implement iptables.Interface (missing method Present)
network/hostport/hostport_manager_test.go:554:16: cannot use iptables (variable of type *fakeIPTables) as iptables.Interface value in struct literal: *fakeIPTables does not implement iptables.Interface (missing method Present)
network/hostport/hostport_test.go:71:45: cannot use fakeIPTables (variable of type *fakeIPTables) as iptables.Interface value in argument to ensureKubeHostportChains: *fakeIPTables does not implement iptables.Interface (missing method Present)
# k8s.io/kubernetes/pkg/kubelet/container
vendor/k8s.io/kubernetes/pkg/kubelet/container/runtime.go:125:63: undefined: runtimeapi.CheckpointContainerRequest
vendor/k8s.io/kubernetes/pkg/kubelet/container/runtime.go:127:38: undefined: runtimeapi.ContainerEventResponse
vendor/k8s.io/kubernetes/pkg/kubelet/container/runtime.go:133:60: undefined: runtimeapi.MetricDescriptor
vendor/k8s.io/kubernetes/pkg/kubelet/container/runtime.go:135:60: undefined: runtimeapi.PodSandboxMetrics
vendor/k8s.io/kubernetes/pkg/kubelet/container/helpers.go:62:61: undefined: runtimeapi.UserNamespace
?   	github.com/Mirantis/cri-dockerd	[no test files]
?   	github.com/Mirantis/cri-dockerd/backend	[no test files]
?   	github.com/Mirantis/cri-dockerd/cmd	[no test files]
?   	github.com/Mirantis/cri-dockerd/cmd/cri/options	[no test files]
?   	github.com/Mirantis/cri-dockerd/cmd/version	[no test files]
?   	github.com/Mirantis/cri-dockerd/config	[no test files]
?   	github.com/Mirantis/cri-dockerd/containermanager	[no test files]
FAIL	github.com/Mirantis/cri-dockerd/core [build failed]
# github.com/Mirantis/cri-dockerd/streaming [github.com/Mirantis/cri-dockerd/streaming.test]
streaming/request_cache_test.go:203:44: undefined: clock.FakeClock
streaming/request_cache_test.go:205:21: undefined: clock.NewFakeClock
streaming/server_test.go:392:29: cannot use rt (variable of type *fakeRuntime) as Runtime value in argument to NewServer: *fakeRuntime does not implement Runtime (wrong type for method Attach)
		have Attach(string, io.Reader, io.WriteCloser, io.WriteCloser, bool, <-chan "k8s.io/client-go/tools/remotecommand".TerminalSize) error
		want Attach(context.Context, string, io.Reader, io.WriteCloser, io.WriteCloser, bool, <-chan "k8s.io/client-go/tools/remotecommand".TerminalSize) error
?   	github.com/Mirantis/cri-dockerd/libdocker/testing	[no test files]
?   	github.com/Mirantis/cri-dockerd/metrics	[no test files]
?   	github.com/Mirantis/cri-dockerd/network/cni	[no test files]
?   	github.com/Mirantis/cri-dockerd/network/cni/testing	[no test files]
?   	github.com/Mirantis/cri-dockerd/network/kubenet	[no test files]
?   	github.com/Mirantis/cri-dockerd/network/metrics	[no test files]
ok  	github.com/Mirantis/cri-dockerd/libdocker	1.378s
?   	github.com/Mirantis/cri-dockerd/store	[no test files]
?   	github.com/Mirantis/cri-dockerd/streaming/remotecommand	[no test files]
?   	github.com/Mirantis/cri-dockerd/utils	[no test files]
?   	github.com/Mirantis/cri-dockerd/utils/errors	[no test files]
ok  	github.com/Mirantis/cri-dockerd/network	2.409s
ok  	github.com/Mirantis/cri-dockerd/network/bandwidth	1.823s
ok  	github.com/Mirantis/cri-dockerd/network/hairpin	2.861s
FAIL	github.com/Mirantis/cri-dockerd/network/hostport [build failed]
ok  	github.com/Mirantis/cri-dockerd/network/testing	2.344s
FAIL	github.com/Mirantis/cri-dockerd/streaming [build failed]
ok  	github.com/Mirantis/cri-dockerd/streaming/portforward	1.668s
FAIL

Without these changes (synced to 9c5ee55), I see only one failure, which looks like it was caused by #147 and is therefore pre-existing relative to this PR:

➜  cri-dockerd git:(9c5ee551) ✗  [5/12/23 4:31:02] ~/gopath/src/github.com/Mirantis/cri-dockerd>  go test ./...                                            
# k8s.io/kubernetes/pkg/kubelet/container
vendor/k8s.io/kubernetes/pkg/kubelet/container/helpers.go:42:81: undefined: v1.Handler
?   	github.com/Mirantis/cri-dockerd	[no test files]
?   	github.com/Mirantis/cri-dockerd/backend	[no test files]
?   	github.com/Mirantis/cri-dockerd/cmd	[no test files]
?   	github.com/Mirantis/cri-dockerd/cmd/cri/options	[no test files]
?   	github.com/Mirantis/cri-dockerd/cmd/version	[no test files]
?   	github.com/Mirantis/cri-dockerd/config	[no test files]
?   	github.com/Mirantis/cri-dockerd/containermanager	[no test files]
?   	github.com/Mirantis/cri-dockerd/libdocker/testing	[no test files]
?   	github.com/Mirantis/cri-dockerd/metrics	[no test files]
FAIL	github.com/Mirantis/cri-dockerd/core [build failed]
ok  	github.com/Mirantis/cri-dockerd/libdocker	(cached)
?   	github.com/Mirantis/cri-dockerd/network/cni	[no test files]
?   	github.com/Mirantis/cri-dockerd/network/cni/testing	[no test files]
?   	github.com/Mirantis/cri-dockerd/network/kubenet	[no test files]
?   	github.com/Mirantis/cri-dockerd/network/metrics	[no test files]
ok  	github.com/Mirantis/cri-dockerd/network	(cached)
ok  	github.com/Mirantis/cri-dockerd/network/bandwidth	(cached)
ok  	github.com/Mirantis/cri-dockerd/network/hairpin	(cached)
?   	github.com/Mirantis/cri-dockerd/store	[no test files]
?   	github.com/Mirantis/cri-dockerd/streaming/remotecommand	[no test files]
?   	github.com/Mirantis/cri-dockerd/utils	[no test files]
?   	github.com/Mirantis/cri-dockerd/utils/errors	[no test files]
ok  	github.com/Mirantis/cri-dockerd/network/hostport	1.437s
ok  	github.com/Mirantis/cri-dockerd/network/testing	(cached)
ok  	github.com/Mirantis/cri-dockerd/streaming	(cached)
ok  	github.com/Mirantis/cri-dockerd/streaming/portforward	(cached)
FAIL

If you are able to repro this, we may want to consider pulling out these changes until the broken tests are fixed, so that we do not make even more unit tests non-functional. A lot of these tests provide very valuable coverage.
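
If we do decide to back it out, reverting the merge commit noted above would look roughly like this (sketch only; -m 1 keeps the Mirantis:master side as the mainline parent):

git revert -m 1 0d54111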

@matthewtorr-msft
Contributor

matthewtorr-msft commented Dec 6, 2023

Hi @vikramhh, thanks for raising this! As a user, can I ask for a clarification, please? Are you concerned only about the tests, or do you suspect this has introduced bugs into cri-dockerd itself?

@vikramhh

vikramhh commented Dec 6, 2023

@matthewtorr-msft - My concern is that if the tests are broken, it would not have been possible to run them to validate this PR. In that sense, I am more concerned about the process here; it is of course always possible for a bug to be missed even after tests are run.

While I was able to verify the hostport-related changes by locally fixing the test and running it, other tests also broke. It is hard to know whether the change introduced bugs without running those tests.

@nwneisen
Collaborator Author

nwneisen commented Dec 6, 2023

@vikramhh Thanks for pointing out the unit tests. I've prioritized getting the e2e and integration tests running in the repo again, and I had never run the unit tests.

I've already been working on getting the tests fixed. So far, none of these test failures has been for anything that wasn't already corrected in cri-dockerd itself during this PR. Once I have all of the tests passing, I will add a CI step so that the tests run on every PR.
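
For illustration, the compiler errors in that go test output mostly boil down to signature and package moves. A minimal, self-contained sketch of the corresponding test-side stubs (assumptions: Present() returns bool as in the upstream kubernetes iptables util, the fake clock now comes from k8s.io/utils/clock/testing, and the fake types below are empty stand-ins for the real test fakes):

// Sketch only - empty stand-ins for the existing test fakes, showing the
// signature/package changes the build errors point at.
package sketch

import (
    "context"
    "io"
    "time"

    "k8s.io/client-go/tools/remotecommand"
    testingclock "k8s.io/utils/clock/testing" // assumed new home of the fake clock
)

type fakeIPTables struct{}

// The bumped iptables.Interface reports a missing Present method; a trivial
// stub satisfies it in tests (Present() bool signature assumed).
func (f *fakeIPTables) Present() bool { return true }

type fakeRuntime struct{}

// Attach now takes a context as its first argument, per the "have"/"want"
// lines in the go test output.
func (f *fakeRuntime) Attach(ctx context.Context, containerID string, in io.Reader,
    out, errOut io.WriteCloser, tty bool,
    resize <-chan remotecommand.TerminalSize) error {
    return nil
}

// clock.NewFakeClock is gone from k8s.io/utils/clock; the replacement
// constructor lives in its testing subpackage.
func newTestClock() *testingclock.FakeClock {
    return testingclock.NewFakeClock(time.Now())
}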

@vikramhh

vikramhh commented Dec 6, 2023

@nwneisen - just to make sure, this is about tests breaking (failing to build), not test failures.

@nwneisen
Collaborator Author

nwneisen commented Dec 6, 2023

@vikramhh - Correct. No tests have failed. They've only been broken by things like method arguments changing.

@vikramhh

vikramhh commented Dec 6, 2023

@nwneisen - tests have broken due to changes in this PR. If tests are broken, they cannot even be run, and in that case it is hard to argue whether they have failed or not. The reasons they are broken also vary - tests broken for the reason I cited above (different components using different cri-api versions) are certainly very different from tests broken by method arguments changing. So it is hard to generalize about these breaks.

@nwneisen
Collaborator Author

nwneisen commented Dec 6, 2023

@vikramhh Tests have been broken since long before this PR because they haven't been getting run. I can guarantee that if you fix the one test that was failing before this PR, you will be snowballed with more failures, because some of the tests I'm fixing cover changes that went in months ago.

And I can say whether they are failing or not because I've been fixing them; once I fix whatever made them break, they pass.

@vikramhh

vikramhh commented Dec 6, 2023

@nwneisen - that is not correct. Only one test was broken before this PR; all the rest broke after it, as I mentioned above. However, I could be wrong - please share steps to repro these breaks on this repo before this PR, just as I shared mine above.

Your energy may be better spent fixing these, though, rather than going back and forth with me on whether something is broken and since when - I merely happened to review this PR and am reporting what I observe.

@nwneisen
Collaborator Author

nwneisen commented Dec 6, 2023

@vikramhh I agree, and thank you for the review. My point in going back and forth on this is that these tests have been broken for months (years?), and they shouldn't be used as a reason to block other changes. They should be fixed, but in their current state they are most likely worthless.

The one failing test is hiding other broken tests. If you fixed the one broken test before this PR, you would see that there are plenty more behind it.

@vikramhh

vikramhh commented Dec 6, 2023

@nwneisen - I feel you are still missing the point. The one failing test is no longer a single failing test - multiple tests are failing now, and we can go back to the state where a single test was failing simply by rolling back this PR. If a PR makes more tests fail, it would be good to at least mention that - or, better yet, not to add to the number of failing tests.

As I mentioned above, the test that was already failing broke because of #147, which was less than a year ago.

While a pre-existing test failure should not be used to block other changes, it would be good if those "other changes" were validated with unit tests - and, furthermore, if they did not add to the number of failing tests. I feel it is not too much to expect that of any PR, not just this one.

nwneisen mentioned this pull request on Jan 22, 2024