
Support installing extensions shipped by RHCOS #1850

Closed
wants to merge 4 commits into from

Conversation

sinnykumari
Contributor

@sinnykumari sinnykumari commented Jun 19, 2020

enhancement doc- openshift/enhancements#317

  • Save the old and new MachineConfig into a JSON file and process them later on the host
  • Rework the existing PullAndRebase() to run extensions in host context
  • Add an m-c-d subcommand mount-container which we run in
    host context. Here we pull the OSContainer, create a container and mount it.
    We also save the created container name and mount location in /run,
    which will be used later by the MCD to rebase the OS and apply extensions
  • Install kernel-rt packages from the coreos-extensions repo we created earlier.
    This saves us from searching explicitly for kernel-rt packages in the OSContainer.
    It also simplifies updating kernel-rt packages

@openshift-ci-robot openshift-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jun 19, 2020
@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jun 19, 2020
@kikisdeliveryservice
Contributor

/skip e2e-ovn-step-registry

@sinnykumari sinnykumari force-pushed the rhcos-extension branch 3 times, most recently from b164617 to 0ceda75 Compare June 23, 2020 11:36
@kikisdeliveryservice (Contributor) left a comment

just a few minor comments. love that this is super clear and easy to follow ❤️

pkg/daemon/update.go: 4 review threads (outdated, resolved)
@kikisdeliveryservice kikisdeliveryservice changed the title WIP: Support installing exetnsions shipped by RHCOS WIP: Support installing extensions shipped by RHCOS Jun 23, 2020
@sinnykumari sinnykumari force-pushed the rhcos-extension branch 2 times, most recently from 0fd93b7 to 3693dc8 Compare June 24, 2020 11:23
@sinnykumari
Contributor Author

To install extensions on the RHCOS host, I am pulling and mounting the OSContainer and then creating a repo in /etc/yum.repos.d/ which points to the mounted OSContainer path (https://github.com/openshift/machine-config-operator/pull/1850/files#diff-06961b075f1753956d802ba954d2cfb5R820). Then I run rpm-ostree update --install <pkg>, but it gives this error:

Checking out tree ee22f82... done
Enabled rpm-md repositories: rhcos-extensions
Updating metadata for 'rhcos-extensions'... done
error: Updating rpm-md repo 'rhcos-extensions': /var/lib/containers/storage/overlay/<sha>/merged/extensions/ was not found

Looking at the audit messages on the host, I don't see any SELinux denial (`grep -R "denied" /var/log/audit/audit.log`), so perhaps SELinux is not playing any role here.

Do we need to implement something like coreos/rpm-ostree#1732 for rpm-ostree install/update/remove as well, or is there an alternative way?

A hacky option I see is to copy extensions/ from the mounted container somewhere else (like /var/tmp/) and delete it after applying the extensions.

@cgwalters @jlebon thoughts?

@cgwalters
Member

OK I think the answer here is that currently for the ostree repo, what we're doing is opening a file descriptor and sending it to the daemon over DBus. That will work regardless of any mount namespaces in effect.

When we invoke podman mount here, it only affects the mount namespace of the MCD itself.

Possible fixes:

  • Add runtime-only rpm-md repos (i.e. /run/yum.repos.d) and support referencing them by file descriptor (the best fix but also the hardest)
  • Expose the repos via a temporary local webserver (kind of eww but would work short term)
  • Invoke podman mount in the namespace of the host - and actually if we want to unroll this farther, once we have Use MCD binary from container in /run/bin #1766 we can just run most of this code in the context of the host instead of the MCD pod

@jlebon
Member

jlebon commented Jun 25, 2020

When we invoked podman mount here it only affects the mount namespace of the MCD itself.

Hmm, I'm confused. Doesn't the rpm-ostree rebase already run directly on the host via machine-config-daemon-host.service? (And that same service also does the podman create and podman mount dance.)

@sinnykumari
Contributor Author

OK I think the answer here is that currently for the ostree repo, what we're doing is opening a file descriptor and sending it to the daemon over DBus. That will work regardless of any mount namespaces in effect.

When we invoked podman mount here it only affects the mount namespace of the MCD itself.

Possible fixes:

* Add runtime-only rpm-md repos (i.e. `/run/yum.repos.d`) and support referencing them by file descriptor (the best fix but also the hardest)

* Expose the repos via a temporary local webserver (kind of eww but would work short term)

* Invoke `podman mount` in the namespace of the host - and actually if we want to unroll this farther, once we have #1766 we can just run most of this code in the context of the host instead of the MCD pod

The 3rd option sounds promising to me. I have limited knowledge, so I am going to ask some silly questions:

  1. The podman mount documentation says it mounts the specified container's root file system in a location which can be accessed from the host, and returns its location. The rpm-ostree binary we run is part of the host, and the mounted container is accessible to the host, so shouldn't this already work?
  2. For my info, how do we specify a namespace during podman mount? I don't see anything related to namespaces in the podman mount man page.

@cgwalters
Member

cgwalters commented Jun 25, 2020

Hmm, I'm confused. Doesn't the rpm-ostree rebase already run directly on the host via machine-config-daemon-host.service? (And that same service also does the podman create and podman mount dance.)

We have two copies of that code now: one which runs on the host, and one added for kernel-rt which runs in the MCD.

I think we need to deduplicate, and we can do so conveniently after #1766 lands!

@jlebon
Member

jlebon commented Jun 25, 2020

We have two copies of that code now, one which runs on the host and one which runs in the MCD that was added for kernel-rt.

Ahh gotcha, I see it now. I was just grepping for rebase and following the trail from there.

Re. possible fixes suggested in #1850 (comment), my vote personally is for (2) short-term until (3).

pkg/daemon/update.go: 2 review threads (outdated, resolved)
@sinnykumari
Contributor Author

* Invoke `podman mount` in the namespace of the host - and actually if we want to unroll this farther, once we have #1766 we can just run most of this code in the context of the host instead of the MCD pod

Since #1766 is already in, I am going to give this more thought and rework the extensions feature plus the existing kernel-rt work to run in the context of the host.

@cgwalters
Member

Since #1766 is already in, I am going to give more thoughts and rework extensions feature + existing kernel-rt work to run in the context of host

Yeah, I think having the kernel-rt logic run consistently on the host is really a prerequisite for this. I think actually what we can do today is serialize the MachineConfig object into JSON, pass it to ourselves on the host, and do the upgrade/kernel-rt/extensions stuff all there.

Hmm, right, I just realized the reason the kernel-rt code works today is that the rpm-ostree client call opens file descriptors for each passed RPM and sends them to the daemon, but we can't do that with rpm-md repositories today.

Once we have https://bugzilla.redhat.com/show_bug.cgi?id=1839065 (which might even happen in 8.2.z) I think we can drop all of the logic for writing our binary to the host in the MCD case and just run everything from the MCD which would obviously be a huge cleanup.

@sinnykumari
Contributor Author

Since #1766 is already in, I am going to give more thoughts and rework extensions feature + existing kernel-rt work to run in the context of host

Yeah I think having the kernel-rt logic run consistently on the host is really a pre-requisite for this. I think actually today what we can do is serialize the MachineConfig object into JSON and pass it to ourself on the host and do the upgrade/kernel-rt/extensions stuff all there.

+1

I was also thinking about early validation of the extensions args, i.e. when we render the applied MachineConfig. I believe this would require another podman mount of the OSContainer to look into the available extensions :/

Hmm right I just realized the reason the kernel-rt code is working today is because the rpm-ostree client call opens file descriptors for each passed RPM and sends them to the daemon, but we can't do that with rpm-md repositories today.

right

Once we have https://bugzilla.redhat.com/show_bug.cgi?id=1839065 (which might even happen in 8.2.z) I think we can drop all of the logic for writing our binary to the host in the MCD case and just run everything from the MCD which would obviously be a huge cleanup.

That will be nice. If I understand correctly, we will still need to pull the m-c-d binary during firstboot, right?

@cgwalters
Member

I was also thinking about early validation of extensions args i.e. when we render the applied MachineConfig. I believe this would require another podman mount of OSContainer to look into available extensions :/

Oh yeah, that's a good point. Hmm. Probably the MCC container could pull the oscontainer once when it starts up and then cache a mapping of oscontainer url => allowed extensions, so we're not re-pulling the container each time we validate.
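The caching idea above (pull once per image, cache url => allowed extensions) might look like this sketch; the class and fetch hook are hypothetical, not MCC code:

```python
class ExtensionCache:
    """Cache of oscontainer image URL -> set of allowed extensions, so the
    image is inspected once per URL instead of on every MachineConfig
    validation. `fetch_extensions` is a stand-in for pulling the image and
    enumerating its extensions/ directory."""

    def __init__(self, fetch_extensions):
        self._fetch = fetch_extensions
        self._cache = {}

    def allowed(self, oscontainer_url):
        # Only the first lookup per URL invokes the (expensive) fetch.
        if oscontainer_url not in self._cache:
            self._cache[oscontainer_url] = set(self._fetch(oscontainer_url))
        return self._cache[oscontainer_url]

    def validate(self, oscontainer_url, requested):
        missing = set(requested) - self.allowed(oscontainer_url)
        if missing:
            raise ValueError("unsupported extensions: %s" % sorted(missing))
```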

Or I guess we could just maintain a static list of allowed extensions in the MCO too. There shouldn't be too many to start.

If I understand correctly we will still need to pull m-c-d binary during firstboot, right?

Yeah: we don't support really ancient kubelets joining the cluster, and we don't want any workloads to land before OS updates, etc.

@miabbott
Member

Oh yeah, that's a good point. Hmm. Probably the MCC container could pull the oscontainer once when it starts up and then and cache a mapping of oscontainer url => allowed extensions, so we're not re-pulling the container each time we validate.

Maybe we should be including a JSON file or the like in machine-os-content that provides the list of supported extensions? Since we are responsible for sticking the right RPMs and deps into the container, seems like we are in the best position to enumerate the supported extensions per oscontainer/release.

We could even go as far as providing supported and unsupported lists, if we were to ever decide to drop support for an extension.

@sinnykumari
Contributor Author

Oh yeah, that's a good point. Hmm. Probably the MCC container could pull the oscontainer once when it starts up and then and cache a mapping of oscontainer url => allowed extensions, so we're not re-pulling the container each time we validate.

Maybe we should be including a JSON file or the like in machine-os-content that provides the list of supported extensions? Since we are responsible for sticking the right RPMs and deps into the container, seems like we are in the best position to enumerate the supported extensions per oscontainer/release.

Could even go as far as providing a supported and unsupported list, if we were to ever decide to drop support for an extension.

Agreed, if we want to extend this to supported/unsupported lists then machine-os-content seems to be the right place to provide that detail.

For now I think it should be fine for the MCO to consume extensions as-is, because we ship only those extra packages in machine-os-content which we want users to install. #1766 makes it really flexible to update MCD behavior whenever needed.

@cgwalters
Member

The new v2 layout https://gitlab.cee.redhat.com/coreos/redhat-coreos/-/merge_requests/952 aims to have "enumerate extensions" simply use the filesystem. Basically:

  • List all sub-directories of extensions/
  • Skip dependencies/
  • If the directory has any .rpm files in it, it's an extension

Not opposed to a JSON list, but keep in mind the rpm-md repodata is already a "list of packages" (just in XML form). We'd be inventing "rpm metadata in JSON form"?

Anything we change inside the container, though, doesn't solve the issue that validating a MachineConfig fragment requires fetching and unpacking the container.

@miabbott
Member

* If the directory has any `.rpm` files in it, it's an extension

I missed this association when that MR went by. That kind of clear relationship is what I was thinking about when I suggested the JSON file approach.

@jlebon
Member

jlebon commented Jun 29, 2020

Not opposed to a JSON list, but keep in mind the rpm-md repodata is already a "list of packages" (just in XML form). We'd be inventing "rpm metadata in JSON form"?

I think this is fine to start, though IMO there's definitely an argument for having the flexibility that some metadata layer affords us. For example, if we want extension "foobar" to actually result in the installation of two related, but not necessarily dependent packages. Or if we want to change how extension "foobar" is implemented at the RPM level across an update.

I think this might actually help with the Recommends issue as well. If we solve that and turn it off by default both server-side and client-side, then we'll need a way to pull in recommended packages (which would no longer be pulled in) when it makes sense, without globally turning weak deps back on client-side.
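A minimal sketch of the metadata layer described here, mapping an extension name to one or more RPMs so an extension's package set can change across releases; the file format and package lists are illustrative assumptions:

```python
import json

# Hypothetical extensions metadata shipped alongside the rpm-md repodata.
# Package names here are examples only.
EXTENSIONS_JSON = """
{
  "usbguard": ["usbguard"],
  "kernel-rt": ["kernel-rt-core", "kernel-rt-modules", "kernel-rt-modules-extra"]
}
"""

def packages_for(extension, metadata=None):
    """Resolve one extension name to the list of RPMs it installs."""
    meta = json.loads(EXTENSIONS_JSON) if metadata is None else metadata
    try:
        return meta[extension]
    except KeyError:
        raise ValueError("unknown extension: %s" % extension)
```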

@openshift-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: sinnykumari

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@miabbott
Member

miabbott commented Jul 14, 2020

I wanted to revive the discussion about providing some kind of metadata about available extensions in machine-os-content.

In a real-time meeting with @zvonkok @mrunalp @darkmuggle (re: openshift/enhancements#357) we discussed adding additional annotations to machine-os-content about the kernel version (and possibly the glibc version) used by RHCOS. This would help users doing out-of-tree module builds by providing the information in the metadata, rather than requiring them to pull down the entire container image.

It was brought up that we could go a step further and also enumerate the available extensions in an annotation on machine-os-content. This could provide similar benefits, allowing users (or tooling) to inspect the metadata of machine-os-content to determine which extensions are supported/available in a particular image.

@sinnykumari
Contributor Author

I wanted to revive the discussion about providing some kind of metadata about available extensions in machine-os-content

This sounds like it needs more thought and discussion. To avoid this getting lost in this lengthy PR, I think creating a new issue with the relevant information would be nice.

@sinnykumari
Contributor Author

/retest

@miabbott
Member

This sounds like it needs more thoughts and discussion. To avoid getting this lost in this lengthy PR update, I think creating a new issue with relevant information would be nice.

Moved to openshift/os#409

@sinnykumari
Contributor Author

In the latest PR update, I have moved kernel-rt and extensions processing into the host context. It installs the needed packages from the coreos-extensions repo, which we again mount in the host context so that it is accessible to rpm-ostree.

It seems rpm-ostree still has some issues accessing the extensions repo path, which points to the already-mounted OSContainer. Journal log of the rpm-ostreed service while installing the usbguard extension at cluster install time:

$ journalctl -u rpm-ostreed.service
-- Logs begin at Thu 2020-07-16 03:13:57 UTC, end at Thu 2020-07-16 07:21:34 UTC. --
Jul 16 03:22:48 ip-10-0-163-33 systemd[1]: Starting rpm-ostree System Management Daemon...
Jul 16 03:22:48 ip-10-0-163-33 rpm-ostree[1967]: Reading config file '/etc/rpm-ostreed.conf'
Jul 16 03:22:49 ip-10-0-163-33 rpm-ostree[1967]: In idle state; will auto-exit in 63 seconds
Jul 16 03:22:49 ip-10-0-163-33 systemd[1]: Started rpm-ostree System Management Daemon.
Jul 16 03:22:49 ip-10-0-163-33 rpm-ostree[1967]: client(id:cli dbus:1.14 unit:machine-config-daemon-firstboot.service uid:0) added; new total=1
Jul 16 03:22:49 ip-10-0-163-33 rpm-ostree[1967]: client(id:cli dbus:1.14 unit:machine-config-daemon-firstboot.service uid:0) vanished; remaining=0
Jul 16 03:22:49 ip-10-0-163-33 rpm-ostree[1967]: In idle state; will auto-exit in 62 seconds
Jul 16 03:23:21 ip-10-0-163-33 rpm-ostree[1967]: client(id:cli dbus:1.19 unit:machine-config-daemon-firstboot.service uid:0) added; new total=1
Jul 16 03:23:21 ip-10-0-163-33 rpm-ostree[1967]: Initiated txn UpdateDeployment for client(id:cli dbus:1.19 unit:machine-config-daemon-firstboot.service uid:0): /org/projectatomic/rpmostree1/rhcos
Jul 16 03:23:41 ip-10-0-163-33 rpm-ostree[1967]: Librepo version: 1.11.0 with CURL_GLOBAL_ACK_EINTR support (libcurl/7.61.1 OpenSSL/1.1.1c zlib/1.2.11 brotli/1.0.6 libidn2/2.2.0 libpsl/0.20.2 (+libidn2/2.0.5) libssh/0.9.0/openssl/zlib nghttp2/1.33.0)
----> Jul 16 03:23:42 ip-10-0-163-33 rpm-ostree[1967]: Txn UpdateDeployment on /org/projectatomic/rpmostree1/rhcos failed: Updating rpm-md repo 'coreos-extensions': /var/lib/containers/storage/overlay/d73cc16e4b5906d0ce2bc67544cc744370bee8c9300d2b172245b985476644fe/merged/extensions/ was not found
Jul 16 03:23:42 ip-10-0-163-33 rpm-ostree[1967]: client(id:cli dbus:1.19 unit:machine-config-daemon-firstboot.service uid:0) vanished; remaining=0
Jul 16 03:23:42 ip-10-0-163-33 rpm-ostree[1967]: In idle state; will auto-exit in 63 seconds
Jul 16 03:24:45 ip-10-0-163-33 rpm-ostree[1967]: In idle state; will auto-exit in 62 seconds

The above issue has been seen while using the latest installer, where the bootimage contains OSTree version 46.82.202007152240-0, rpm-ostree-2020.2-2.el8.x86_64, ostree-2020.3-3.el8.x86_64.

Before moving to the latest installer, I had been using an older installer where the bootimage contains OSTree version 45.81.202005200134-0, rpm-ostree-2019.6-8.el8.x86_64, ostree-2019.6-2.el8.x86_64. With that OS, applying extensions and switching to kernel-rt work fine.

I don't see any SELinux denial in the audit log. @cgwalters @jlebon can it be related to some recent rpm-ostree update?

- Save old and new MachineConfig into json file and process later on host
- Rework the existing PullAndRebase() to run extensions in host context
- Add m-c-d subcommand mount-container which we run in
  host context. Here we pull OSContainer, create a container and mount it.
  We also save the created container name and mount location in /run
  which will be used later by MCD to rebase the OS and apply extensions
Install kernel-rt packages from coreos-extensions repo we
created earlier. This saves us from searching explicitly
for kernel-rt packages in the OSContainer.
Also simplifies updating kernel-rt packages
@sinnykumari sinnykumari changed the title WIP: Support installing extensions shipped by RHCOS Support installing extensions shipped by RHCOS Jul 16, 2020
@openshift-ci-robot openshift-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jul 16, 2020
@sinnykumari
Contributor Author

Had a chat with Colin and Jonathan about recent issue #1850 (comment).

The issue is happening because:

  • The mounted OSContainer is private:
| `-/var/lib/containers/storage/overlay                                                                           private
|   `-/var/lib/containers/storage/overlay/0fe3227f6f6f10dae3a43ae0ba951c8d94097d3fb2f2ee3d21e396d136fe3d1b/merged private
  • rpm-ostreed isn't seeing it because it runs in its own mount namespace, due to MountFlags=slave which was introduced recently in coreos/rpm-ostree@75c6767
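The private/slave propagation state described above can be inspected via /proc/self/mountinfo, where the optional fields carry shared:/master: tags and a mount with neither tag is private. This parsing helper is a sketch, not MCO code, and the sample lines in the usage are illustrative:

```python
def propagation(mountinfo_line):
    """Return the propagation mode of one /proc/self/mountinfo line.
    Optional fields sit between field 6 and the '-' separator:
    shared:N => shared, master:N => slave, neither => private."""
    fields = mountinfo_line.split()
    optional = fields[6:fields.index("-")]
    if any(f.startswith("shared:") for f in optional):
        return "shared"
    if any(f.startswith("master:") for f in optional):
        return "slave"
    return "private"
```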

Solution:

  1. It should work fine if rpm-ostreed.service is started after the mount. We can stop the rpm-ostreed service, mount the container, and then start rpm-ostreed.
  2. Extract the container content using oc image extract and perform the OS update and extensions install from there. Related: https://hackmd.io/WeqiDWMAQP2sNtuPRul9QA

We will explore option 2 first, and if it doesn't work we will go for option 1.

Until we fix this
/hold

@openshift-ci-robot openshift-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jul 16, 2020
@sinnykumari
Contributor Author

Going down the road of oc image extract turned out to be a considerable deviation from this PR, so I have reworked it in a fresh PR #1941 which should supersede this one.

We can close this PR later on.

@openshift-ci-robot
Contributor

@sinnykumari: The following tests failed, say /retest to rerun all failed tests:

Test name Commit Details Rerun command
ci/prow/e2e-ovn-step-registry f133530 link /test e2e-ovn-step-registry
ci/prow/e2e-metal-ipi f133530 link /test e2e-metal-ipi
ci/prow/e2e-aws f133530 link /test e2e-aws
ci/prow/e2e-gcp-upgrade f133530 link /test e2e-gcp-upgrade
ci/prow/e2e-gcp-op f133530 link /test e2e-gcp-op
ci/prow/e2e-aws-proxy f133530 link /test e2e-aws-proxy

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

Comment on lines +238 to +241
var containerName string
if containerName, err = daemon.ReadFromFile(constants.MountedOSContainerName); err != nil {
return err
}
Member

I may be missing the point of this re-declaration of containerName.

Don't we want it to be available below, outside of the condition?

containerName, err = daemon.ReadFromFile(constants.MountedOSContainerName)
if err != nil {
    return err
}

Member

meh.. I just realized there's a new PR 😄

Contributor Author

will close this once we merge the other one.

sinnykumari added a commit to sinnykumari/machine-config-operator that referenced this pull request Jul 31, 2020
Earlier, to perform an OS update we were pulling the OS image
and mounting the content using podman. We were performing the
OS update in host context because of selinux constraints
on the mounted container.

Also, for the rt-kernel switch we were pulling the OS image again.
With coreos extensions support, we require the extensions rpms
which are available in the os container.

We tried different approaches to solve problems
like minimizing container image pulls, host/container
context switches, selinux permissions on the mounted
container, and rpm-ostreed behavior.
See openshift#1850

Finally, using `oc image extract` and skopeo to inspect the container
image solves our problems. With this we are also getting
rid of the mco-pivot systemd service used to rebase the OS.
@sinnykumari
Contributor Author

The alternative implementation of extensions in #1941 has been merged; closing this PR.

Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command.
7 participants