Add a new FIPS test #25362
EDIT: Actually I did test this in FIPS mode; the local libvirt cluster I had was using
Hmm, indeed something is going wrong in FIPS; in this e2e-aws-fips run the install config has … It is working for me in a local run. But this would be much nicer to debug if we weren't also dealing with the shell test failure. (That's openshift/release#10488; the problem isn't obvious to me, and honestly it felt easier to rewrite than to debug and fix. Bash is easy to write, but fragile in failure cases and hard to debug and maintain.)
The problem with this test is that you need to know that FIPS is on. The point of the other check is (a) to verify that the cluster acknowledged that FIPS was on, and then (b) to run a test suite that verifies normal function. Since FIPS is not always on, this test doesn't replace the check in the CI code (which enables FIPS, verifies a user-observable side effect, then runs one or more suites). We don't want a FIPS version of each suite, so we would probably need a "FIPS suite" with 1-N tests that check FIPS explicitly, and then have CI run two suites. However, if you do that, you either have to backport that to all previous releases, or make the version check in the CI code do one of two things: if the suite is available from openshift-tests, run it; otherwise, run the script for legacy behavior. It's probably easier to just fix the CI check for the new code to start with so that we can get back to green.
By the nature of Ignition, we enable FIPS from the very start. Our whole architecture is designed around the system being in its desired state by the time we reach the real root. Specifically for FIPS in RHCOS, we actually perform an extra reboot if we detect FIPS, so that we enter that state before doing anything else. It's not like Ansible, where some portion of a playbook runs after SSH is already up and the cluster might be running. Hence I don't think "check FIPS, then run test suite" is necessary.
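For context on what the kernel-level check looks like: on Linux, a node that booted in FIPS mode reports `1` in `/proc/sys/crypto/fips_enabled`. A minimal sketch of interpreting that value (the helper name `fipsKernelEnabled` is hypothetical; the real e2e test runs this check on each node through a privileged pod):

```go
package main

import (
	"fmt"
	"strings"
)

// fipsKernelEnabled interprets the contents of /proc/sys/crypto/fips_enabled:
// the kernel reports "1" when it booted in FIPS mode.
func fipsKernelEnabled(contents string) bool {
	return strings.TrimSpace(contents) == "1"
}

func main() {
	fmt.Println(fipsKernelEnabled("1\n")) // node booted with fips=1 → true
	fmt.Println(fipsKernelEnabled("0\n")) // FIPS not enabled → false
}
```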
We have that data in the install config. Now, I could imagine something where we truly triple-check by having the tests here more explicitly support the … (It'd be a triple check because, remember, FIPS is also already validated in the MCO today.) But overall this test is small and straightforward, and extremely low cost; any reason not to merge? (That said, it clearly would be good to get e2e-aws-fips passing reliably again before merging.)
I was surprised we weren't logging anything related to this when I went to double-check the node state while working on openshift/origin#25362
OK, now that FIPS is green I'd like to get this in.
/retest
```go
g.It("TestFIPS", func() {
	clusterAdminKubeClientset := oc.AdminKubeClient()
	installConfig, err := installConfigFromCluster(clusterAdminKubeClientset.CoreV1())
```
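The install config this snippet reads is YAML stored in a ConfigMap, with a top-level `fips: true` key when FIPS was requested. A minimal, self-contained sketch of extracting that flag (the function name `fipsFromInstallConfig` and the pattern-matching approach are illustrative simplifications; the real test unmarshals the full InstallConfig type rather than scanning lines):

```go
package main

import (
	"fmt"
	"regexp"
)

// fipsLine matches a top-level "fips: true" line in the install-config YAML.
var fipsLine = regexp.MustCompile(`(?m)^fips:\s*true\s*$`)

// fipsFromInstallConfig reports whether the install config requested FIPS.
func fipsFromInstallConfig(installConfigYAML string) bool {
	return fipsLine.MatchString(installConfigYAML)
}

func main() {
	cfg := "apiVersion: v1\nbaseDomain: example.com\nfips: true\nplatform:\n  aws: {}\n"
	fmt.Println(fipsFromInstallConfig(cfg)) // prints "true"
}
```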
Don't read the install config to determine whether FIPS is enabled or disabled.
Use the machine config pool to figure out if FIPS is enabled via the rendered machine config, then use the target label from the machine config pool to build the list of nodes and validate that they match the expected state.
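The suggestion above keys off the rendered MachineConfig, whose spec carries a `fips` field. A hedged sketch of that check against raw JSON (the `renderedConfigHasFIPS` helper and the stripped-down struct are illustrative; real code would use the machineconfiguration.openshift.io client types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// machineConfig is a minimal stand-in for the MachineConfig type:
// we only decode the spec.fips field we care about.
type machineConfig struct {
	Spec struct {
		FIPS bool `json:"fips"`
	} `json:"spec"`
}

// renderedConfigHasFIPS reports whether a rendered MachineConfig,
// serialized as JSON, has FIPS enabled in its spec.
func renderedConfigHasFIPS(raw []byte) (bool, error) {
	var mc machineConfig
	if err := json.Unmarshal(raw, &mc); err != nil {
		return false, err
	}
	return mc.Spec.FIPS, nil
}

func main() {
	raw := []byte(`{"kind":"MachineConfig","spec":{"fips":true}}`)
	ok, err := renderedConfigHasFIPS(raw)
	fmt.Println(ok, err) // prints "true <nil>"
}
```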
The thing is, that's what the MCO is already doing today. See the previous discussion in openshift/release#10488.
The intent of this test is to be a "triple check": if, say, there was a bug in the MCO that failed to get FIPS into the rendered machine config, this test would catch it.
Force-pushed from eb5fb0f to 7a18909.
Rebased 🏄 I still think this is a good idea, and this also contains some code I'd like to use in other tests.
Force-pushed from 46de887 to be9c0cf.
Trying to run
See discussion in openshift/release#10488 and https://bugzilla.redhat.com/show_bug.cgi?id=1861095. This test replaces the bash code in the release repo with a more proper test here. While here, I noticed that the topology tests had some code that reused the MCD as a handy privileged pod; extract that to the top-level utils and use it both here and there.
OK @smarterclayton, now that 4.7 is open I'd like to get this in; any other comments?
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: abhinavdahiya, cgwalters, knobunc. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/retest Please review the full test history for this PR and help us cut down flakes.
/unassign ironcladlou
/retest Please review the full test history for this PR and help us cut down flakes.