migration: match SELinux level of source pod on target pod #9246
Conversation
Force-pushed from 717b639 to 5ec3762
/retest
/retest-required
5ec3762
to
897d63c
Compare
A functest was added and this is now ready for reviews!
/retest
Looks good!
This does not happen in the RWX Block case, right?
I am just trying to understand why this was never an issue with our CI lanes running ceph rbd migrations.
	templatePod.Spec.SecurityContext = &k8sv1.PodSecurityContext{}
}
templatePod.Spec.SecurityContext.SELinuxOptions = &k8sv1.SELinuxOptions{
	Level: seFields[3],
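For context, seFields here presumably comes from splitting the source pod's SELinux label. A label has the form user:role:type:level, and since the MCS level itself contains a colon, the split must keep at most 4 pieces (values below are illustrative):

seFields := strings.SplitN("system_u:system_r:container_t:s0:c217,c935", ":", 4)
// seFields[0] == "system_u"     (user)
// seFields[1] == "system_r"     (role)
// seFields[2] == "container_t"  (type)
// seFields[3] == "s0:c217,c935" (level, i.e. sensitivity plus categories)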
Just a suggestion, but it may be worth trying to construct the context and grab the level out of it?
kubevirt/vendor/github.com/opencontainers/selinux/go-selinux/selinux_linux.go, lines 724 to 739 in 897d63c:
func newContext(label string) (Context, error) {
	c := make(Context)
	if len(label) != 0 {
		con := strings.SplitN(label, ":", 4)
		if len(con) < 3 {
			return c, InvalidLabel
		}
		c["user"] = con[0]
		c["role"] = con[1]
		c["type"] = con[2]
		if len(con) > 3 {
			c["level"] = con[3]
		}
	}
	return c, nil
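A minimal sketch of that suggestion, assuming the exported selinux.NewContext wrapper around this function (getLevel and sourceLabel are hypothetical names):

import "github.com/opencontainers/selinux/go-selinux"

// getLevel parses a full SELinux label and returns just the MCS level,
// e.g. "s0:c217,c935" for "system_u:system_r:container_t:s0:c217,c935".
func getLevel(sourceLabel string) (string, error) {
	seCtx, err := selinux.NewContext(sourceLabel)
	if err != nil {
		return "", err
	}
	return seCtx["level"], nil // Context is a map[string]string
}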
Good point, will do!
This is actually specifically for RWX, but it's needed only for FS CSIs (at least from what I've seen so far). Ceph rbd shouldn't need this.
Yeah, I am just trying to understand why this isn't a problem for ceph rbd, interesting.
Don't quote me on that, but I think dev nodes for the block volumes get their own label inside the pod mount namespace thanks to overlayfs.
Force-pushed from 897d63c to 7bbf98d
/lgtm
Hey @jean-edouard, also, how is it possible we only hit this now? Is it possible this is a problem specific to a subset of CSIs? Sorry for the delay, I should be more available moving forward.
With cephfs, I believe so yes. Unless some privileged entity auto-relabels new files with the level
Yes, so far this issue has only been seen with cephfs. It is discussed here:
Force-pushed from a31a240 to 2c7e864
/retest
@@ -631,6 +633,32 @@ func (c *MigrationController) createTargetPod(migration *virtv1.VirtualMachineIn
		}
	}

	matchLevelOnTarget := c.clusterConfig.GetMigrationConfiguration().MatchSELinuxLevelOnMigration
	if matchLevelOnTarget == nil || *matchLevelOnTarget {
Is this condition correct? If the tunable for this is omitted, is it an opt-in?
Also, just nitpicking, it may be nice to return early if the tunable is false, avoiding the extra indentation.
That would probably mean this chunk needs to get extracted to its own func; see the sketch below.
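A minimal sketch of that refactor, assuming a hypothetical helper name and only the fields visible in the diff above:

// applySELinuxLevelMatching is a hypothetical extraction of the chunk above
// into its own func, with an early return when the tunable is disabled.
func (c *MigrationController) applySELinuxLevelMatching(templatePod *k8sv1.Pod, level string) {
	matchLevelOnTarget := c.clusterConfig.GetMigrationConfiguration().MatchSELinuxLevelOnMigration
	// The tunable defaults to true, so only bail out when it is explicitly false.
	if matchLevelOnTarget != nil && !*matchLevelOnTarget {
		return
	}
	if templatePod.Spec.SecurityContext == nil {
		templatePod.Spec.SecurityContext = &k8sv1.PodSecurityContext{}
	}
	templatePod.Spec.SecurityContext.SELinuxOptions = &k8sv1.SELinuxOptions{
		Level: level,
	}
}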
Yeah, it's true by default, as discussed there: #9246 (comment)
I'll address the second part asap, thank you!
Done, PTAL!
/hold cancel
Force-pushed from 2c7e864 to bdab680
/lgtm
/retest-required
/retest-required
/retest-required
@jean-edouard - just a heads up. It looks like the SRIOV lane has never passed on this PR, so there might be an issue there. Not sure there is a point in just retesting.
/hold
// Therefore, it needs to share the same SELinux categories to inherit the same permissions.
// Note: there is a small probability that the target pod will share the same categories as another pod on its node.
// It is a slight security concern, but not as bad as removing categories on all shared objects for the duration of the migration.
if vmiSeContext == "" {
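For a rough sense of that probability: assuming the common container runtime behavior of picking 2 MCS categories out of c0..c1023, there are 1024 × 1023 / 2 = 523,776 possible category pairs, so a collision with another pod on the same node is unlikely but not impossible.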
Seems we do "selinuxContext": "none", so the context string can be the literal "none" rather than empty here.
Really nice catch, thank you! Should be fixed now.
I agree @brianmcarey
Signed-off-by: Jed Lejosne <jed@redhat.com>
Force-pushed from bdab680 to 8c11802
/lgtm
@jean-edouard maybe we can introduce helpers for this so we don't get this wrong again in the future
@xpivarc good idea, ok to do in a later PR and unhold this one? Thank you!
/hold cancel
/test pull-kubevirt-e2e-kind-1.27-sriov
@jean-edouard: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest
What this PR does / why we need it:
When VMIs use RWX disks based on some storage classes, like cephfs, both the source and target pods of a migration need to have access to the files.
There are 2 ways to achieve that:
- remove the SELinux categories from the shared files, which exposes them to every pod on the node for the duration of the migration, or
- run the target pod with the same SELinux level (the same categories) as the source pod, which is what this PR does.

Both solutions have negative security implications, but they're the only ways to deal with shared resources.
I believe this is the best approach, as it doesn't mess with the disk and doesn't expose files to the entire node/cluster for the duration of the migration.
The only downside here is that the target node could already have created, or could later create, a pod with the same categories.
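Concretely, the effect is a single field on the target pod template; a minimal sketch with an illustrative level value:

// The migration controller copies the source pod's MCS level onto the
// target pod template, so both pods resolve to the same categories.
templatePod.Spec.SecurityContext = &k8sv1.PodSecurityContext{
	SELinuxOptions: &k8sv1.SELinuxOptions{
		Level: "s0:c217,c935", // illustrative: same level as the source virt-launcher pod
	},
}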
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #
Special notes for your reviewer:
Release note: