main: Change /sys/fs/selinux handling to be a hard error #2960

Merged (1 commit) on Jul 6, 2022

Conversation

@cgwalters (Member) commented Jul 5, 2022

Prep for reworking this in Go, where re-exec'ing ourself is
a bit more hairy.

Also I think this code should really, finally be dead. We shouldn't
be seeing the host selinuxfs mount in any container setup.


@dustymabe (Member):
The original bug was closed but I don't think any code work was ever done to fix it: containers/podman#1448

Though a quick test doesn't seem to show it:

```
[root@cosa-devsh ~]# sudo podman run -it --net=host --privileged registry.fedoraproject.org/fedora:36
[root@cosa-devsh /]# ls /sys/fs/selinux/
[root@cosa-devsh /]# exit
exit
```

Review comment on these changed lines:

```sh
    sudo mount --bind /usr/share/empty /sys/fs/selinux
fi
echo "warning: /sys/fs/selinux appears to be mounted but should not be" 1>&2
sleep 2
```
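The change under discussion replaces this warn-and-sleep with a hard error. A minimal sketch of that shape, written as a standalone POSIX-shell helper (the function name and the "non-empty directory" heuristic are illustrative assumptions, not the actual coreos-assembler patch):

```shell
#!/bin/sh
# Illustrative sketch only: hard-error when the host selinuxfs leaks into
# the container, instead of warning and sleeping on. The helper name and
# the empty-directory check are assumptions, not the real patch.
check_selinuxfs() {
    dir="${1:-/sys/fs/selinux}"
    # An empty (or absent) /sys/fs/selinux is fine; anything visible
    # inside it means the host selinuxfs is mounted in this namespace.
    if [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ]; then
        echo "error: $dir appears to be mounted but should not be" 1>&2
        return 1
    fi
    return 0
}
```

Called early in the entrypoint, a non-zero return aborts the build immediately, rather than emitting a warning that automated pipelines scroll past.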
@dustymabe (Member):

since this was previously something we worked around and we don't expect to see it now maybe let's hard error instead?

@cgwalters (Member Author):

Yeah, I think you're right. A warn + sleep would still be too likely to be missed in automated builds.

I did verify that a cosa shell run on the internal pipeline has an empty /sys/fs/selinux.

@cgwalters (Member Author):

IOW, fixed thanks!

@cgwalters force-pushed the drop-sys-fs-selinux-workaround branch from e755495 to 8a59943 on July 6, 2022 15:38
@cgwalters changed the title from "main: Change /sys/fs/selinux handling to be a warn + sleep" to "main: Change /sys/fs/selinux handling to be a hard error" on Jul 6, 2022
@dustymabe (Member) left a comment:

LGTM

@dustymabe enabled auto-merge (rebase) July 6, 2022 16:12
@dustymabe merged commit 46d5bc9 into coreos:main on Jul 6, 2022
@dustymabe (Member):

Looks like there is at least one case where the assumption breaks. Somehow the gangplank stuff seems to have it. From bump-lockfile#309:

```
[2022-07-07T02:31:47.109Z] INFO[0000] Running as a worker pod
[2022-07-07T02:31:47.363Z] INFO[0000] Worker is out-of-clstuer, no secrets will be available
[2022-07-07T02:31:47.363Z] INFO[0000] Worker is part of buildconfig.openshift.io/cosa-70a48881-92a7-4e6b-80f8-2005d84c793a
[2022-07-07T02:31:47.363Z] INFO[0000] running 'cosa init --force --branch testing-devel https://github.com/coreos/fedora-coreos-config'
[2022-07-07T02:31:47.363Z] error: /sys/fs/selinux appears to be mounted but should not be
[2022-07-07T02:31:47.363Z] ERRO[0000] Failed to checkout respository                cmd="[cosa init --force --branch testing-devel https://github.com/coreos/fedora-coreos-config]" error="exit status 1" out=
```

@cgwalters (Member Author):

Urgh, OK. The revert is up at #2966.

It seemingly failed only on s390x? Or maybe that's just because the builder builds cosa first and got the change where x86_64 didn't?

@dustymabe (Member):

> It seemingly failed only on s390x? Or maybe that's just because the builder builds cosa first and got the change where x86_64 didn't?

I rebuilt the s390x builder yesterday so it had done a new build of COSA. The aarch64 builder still had COSA from the previous night's build.

I just did another bump lockfile run and now we can see that both s390x and aarch64 fail (both use gangplank): https://jenkins-fedora-coreos-pipeline.apps.ocp.fedoraproject.org/blue/organizations/jenkins/bump-lockfile/detail/bump-lockfile/311/pipeline
