
applehv: allow virtiofs to mount to / #20612

Merged
merged 1 commit into containers:main on Nov 7, 2023

Conversation

baude
Member

@baude baude commented Nov 6, 2023

FCOS has a security limitation where new directories cannot be added to the root / directory of its filesystem. This PR uses the work-around discussed in coreos/rpm-ostree#337 (comment) to temporarily disable the limitation, perform the mkdir, and then re-enable the limitation.

This PR allows mounts on the applehv to actually work.
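
For reference, the workaround amounts to roughly the following shell sequence (an illustrative sketch only; the mount-point path is a hypothetical example, and the PR itself expresses this as systemd unit directives, shown in the review below):

```sh
# Illustrative sketch of the rpm-ostree workaround, not the exact code in this PR.
# Temporarily clear the immutable attribute on the ostree root,
chattr -i /
# create the mount-point directory (hypothetical example path),
mkdir -p /example/share
# then restore the immutable attribute.
chattr +i /
```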

Does this PR introduce a user-facing change?

virtiofs volume shares are now functional on the `applehv` machine provider

Contributor

openshift-ci bot commented Nov 6, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: baude

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 6, 2023
@baude
Member Author

baude commented Nov 6, 2023

@rhatdan ptal


Ephemeral COPR build failed. @containers/packit-build please check.

FCOS has a security limitation where new directories cannot be added to the root / directory of its filesystem.  This PR uses the work-around discussed in coreos/rpm-ostree#337 (comment) to temporarily disable the limitation, perform the mkdir, and then re-enable the limitation.

This PR allows mounts on the applehv to actually work.

[NO NEW TESTS NEEDED]

Signed-off-by: Brent Baude <bbaude@redhat.com>
@rhatdan
Member

rhatdan commented Nov 6, 2023


Cockpit tests failed for commit c53693a. @martinpitt, @jelly, @mvollmer please check.


Ephemeral COPR build failed. @containers/packit-build please check.

@ashley-cui
Member

LGTM


Cockpit tests failed for commit d44f71c. @martinpitt, @jelly, @mvollmer please check.

@rhatdan
Member

rhatdan commented Nov 7, 2023

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Nov 7, 2023
@openshift-merge-bot openshift-merge-bot bot merged commit f47a85f into containers:main Nov 7, 2023
97 of 100 checks passed
Comment on lines +1142 to +1144
ExecStartPre=chattr -i /
ExecStart=mkdir -p '%f'
ExecStopPost=chattr +i /
Member


Isn't this racy? Doing this in one unit would be fine, but here it is done in several units at the same time, because systemd starts units in parallel by default unless explicit dependencies are configured.

Therefore this ordering is not safe at all. We could end up with something like this:

| unit 1 | unit 2 |
| --- | --- |
| `chattr -i /` | `chattr -i /` |
| `mkdir` | |
| `chattr +i /` | |
| | `mkdir` <-- this will fail |
| | `chattr +i /` |

I think it would be much safer to do it in one unit: one `chattr -i /`, then mkdir all your volumes, and then one `chattr +i /` at the end. That would remove all the races.

Member


Good point.

Member Author


my understanding is that multiple units do not fire at the same time in this case. Do you know for certain otherwise? If so, I'd love to hear a better approach. In this case, I am stuck between what needs to be done and FCOS: if we cannot solve it and FCOS refuses to change, then we will need to drop FCOS for all podman machine operating systems.

@dustymabe do you have anything to add here about whether a race may occur?

Member


I'm pretty sure systemd starts the units in parallel if it can (i.e. no After= or similar). Of course, these commands are all super quick, so I think it would be very hard to actually encounter this in the wild.

As far as fixing it goes, I think you can just write the volume paths directly into one unit, `mkdir $vol1 $vol2 $vol3`, instead of using the template.
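
For illustration, here is a minimal sketch of what such a consolidated unit could look like, assuming two hypothetical share paths (this is not the unit generated by this PR, and ordering against the actual mount units is omitted):

```ini
# Hypothetical consolidated unit; the share paths are made-up examples.
[Unit]
Description=Create virtiofs mount points under the immutable root
# Ordering against the generated .mount units (e.g. Before=) is omitted here.

[Service]
Type=oneshot
# Lift the immutable flag on / once,
ExecStartPre=chattr -i /
# create every mount point in a single step,
ExecStart=mkdir -p /Users/example/share1 /Users/example/share2
# and restore the immutable flag when the unit finishes.
ExecStopPost=chattr +i /

[Install]
WantedBy=local-fs.target
```

Because all directories are created by one unit, the `chattr -i /` / `chattr +i /` pair can no longer interleave across units, which removes the race described above.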

Contributor


I agree this may expose a race condition, since systemd could run all the units in parallel. If it did, and we knew the paths up front, then just doing it in a single unit would certainly prevent the race.

Contributor


@Luap99 is right, it's racy.

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Feb 6, 2024
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Feb 6, 2024