
Configuration management #53

Open
bgilbert opened this issue Sep 22, 2018 · 11 comments
Labels
area/bootable-containers Related to the bootable containers effort. kind/design

Comments

@bgilbert
Contributor

Fedora CoreOS carries forward the Container Linux philosophy of immutable infrastructure: the configuration of an FCOS machine should not change after it is provisioned. When config changes are necessary, they should be done by updating the Ignition config and reprovisioning the host.

In some cases, however, that approach can be rather heavyweight, and configuration management may be a better fit. In addition, some environments have existing CM setups that would be convenient to reuse with FCOS.

Even if FCOS doesn't include enough infrastructure to support running CM tools natively in the host, it may be feasible to run them in a container. If so, we should provide documentation (and maybe even some tooling) to support this.

@bgilbert
Contributor Author

Example: #32 (comment)

@cgwalters
Member

A general pattern that can work well is to mount the host's writable directories (/etc, /var) into a container with tools. If, for example, your config management wants to download the kubelet into /opt (really /var/opt on an OSTree-based system), you can use whatever you want inside that container (Ansible, shell scripts, some custom Ruby).

Another very common use case I see is managing the CA trust roots (on Fedora-derived systems, /etc/pki/ca-trust/source/anchors). You basically just need to drop files into /host/etc and then take care of invoking update-ca-trust from the host context (one approach is to use systemd-run once you've set things up to talk to the host's systemd). Speaking of which, it should work to bind-mount in the host's /run/systemd/private socket.

All of this generalizes from small scale up to doing it via e.g. a Kubernetes DaemonSet.
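The CA-trust pattern above could be sketched roughly as follows; the image name, certificate filename, and in-container paths are placeholders, not anything FCOS ships:

```shell
# Hypothetical sketch: mount the host's writable dirs and its systemd
# socket into a tools container, drop a CA certificate into the host's
# trust anchors, then ask the host's systemd to run update-ca-trust.
# quay.io/example/config-tools and /work/corp-root-ca.crt are placeholders.
podman run --rm \
  -v /etc:/host/etc \
  -v /var:/host/var \
  -v /run/systemd/private:/run/systemd/private \
  quay.io/example/config-tools:latest \
  sh -euc '
    cp /work/corp-root-ca.crt /host/etc/pki/ca-trust/source/anchors/
    systemd-run --wait update-ca-trust
  '
```

Bind-mounting /run/systemd/private is what lets systemd-run inside the container schedule the unit on the host's systemd, so update-ca-trust runs in the host context rather than the container's.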

@embik

embik commented Sep 22, 2018

Thank you for having this discussion, I very much appreciate it!

The idea of mounting certain host directories into containers and configuring them there might be worth exploring for me. However, @cgwalters' post illustrates the issue here, in my opinion: you need to mount the correct directories for each action. Without proper documentation (and I don't think you can cover everything), it's going to be trial and error until you have everything in the right place. For example, I wouldn't know which directories kubeadm upgrade on my control plane needs. It might well be the Docker socket, /etc/kubernetes, and /var/lib/kubelet, but I simply don't know (yet).

I might toy around with setting up an instance of sshd in a container with a Python runtime and publishing it on another port for Ansible access.

In any case, in my humble opinion FCOS should support at least one CM tool in some way as it will reduce the number of self-built solutions.
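The sshd-in-a-container idea could be sketched like this; the image name, container name, and port are placeholders, not an official FCOS image:

```shell
# Hypothetical sketch: publish an sshd + Python container on port 2222 so
# an existing Ansible inventory can reach the host over SSH. The image
# name is a placeholder; mounting the host root at /host gives playbooks
# a way to reach host files.
podman run -d --name ansible-access \
  -v /:/host \
  -p 2222:22 \
  quay.io/example/sshd-python:latest
```

The matching inventory entry would then point Ansible at port 2222 (e.g. `node1 ansible_port=2222`), with playbooks addressing host files under /host.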

@ajeddeloh
Contributor

On a somewhat tangential subject: We've talked about having a "lightweight" reprovision option. Basically reprovisioning (via Ignition) without talking to the cloud. Given the partitioning and filesystem semantics, it could even be possible to keep /var (or any other filesystem) but wipe and recreate the rest. That would help with but not eliminate the need for more complex configuration management.

@dcode

dcode commented Sep 28, 2018

To piggyback on @ajeddeloh: I'd like the lightweight reprovision approach to at least check the OEM config (i.e. in /usr/share/oem). I manage the RockNSM Project and I'm looking to migrate the network security platform to FCOS.

One of the use cases I have is expanding filesystems when someone running a virtual sensor decides they need more space. If the second disk in the hypervisor is doubled, I want to expand the filesystem in place.

Secondly, admins may want to reconfigure the available services, which potentially changes the whole dataflow. Currently on CentOS we just run the Ansible playbook and we're good. I'd like to make that a systemd service that runs on "firstboot" (or "lightweight reprovision" boot) to ensure the data flow is set up correctly.
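A "run the playbook on firstboot" service could look roughly like this oneshot unit; the image name and playbook path are illustrative placeholders:

```ini
[Unit]
Description=Run site playbook on first boot (illustrative sketch)
ConditionFirstBoot=yes
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
# quay.io/example/rocknsm-provision and /playbooks/site.yml are placeholders
ExecStart=/usr/bin/podman run --rm \
    -v /etc:/host/etc -v /var:/host/var \
    quay.io/example/rocknsm-provision:latest \
    ansible-playbook /playbooks/site.yml

[Install]
WantedBy=multi-user.target
```

ConditionFirstBoot=yes gates the unit to the machine's first boot; a "lightweight reprovision" boot would need its own trigger, since that mechanism doesn't exist yet.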

I know that's a specific niche case, but I'm really excited about the direction FCOS is going to accomplish an immutable network security sensor.

@bgilbert
Contributor Author

bgilbert commented Oct 1, 2018

@dcode

I'd like the lightweight reprovision approach to at least check the OEM config (i.e. in /usr/share/oem)

Could you expand on this? We're not planning to have an OEM partition in FCOS.

@dcode

dcode commented Oct 1, 2018

Hmm... I was mainly referencing the Notes for Distributors docs for CoreOS using Ignition, and the Ignition docs describing the oem:// URL scheme for bare metal installs. My biggest desired use case is bare metal. Since bare metal doesn't have many of the metadata options, it'd be really useful to have an OEM option so that data can be read on first boot.

The possibility of having a base config that is always loaded, plus a default config used when no user config is specified, would be most helpful. As I noted, the option to re-run provisioning (perhaps after the user's Ignition config was updated), kicking off an Ansible reprovision on boot via a drop-in service, would allow end-to-end lifecycle management of an FCOS system on bare metal.

@bgilbert
Contributor Author

bgilbert commented Oct 1, 2018

@dcode Understood. That page is primarily intended for folks customizing Container Linux for a new platform. Overriding the base and default configs works on bare metal, which doesn't ship either one, but won't work on platforms that do ship those configs. On FCOS, we don't know yet whether those configs will be somewhere that's user-configurable; they may be in the initramfs.

There's also /usr/share/oem/config.ign, which is specifically intended for user customizations. It overrides the platform config providers, but is overridden by coreos.config.url. That's a more supported approach for running an Ignition config from disk (though of course the path will change for FCOS), but it doesn't give you the base + default semantics.
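For illustration, a config.ign that chains to a site-wide config could use the Ignition 2.x config.append mechanism; the version shown and the URL are assumptions, not something prescribed in this thread:

```json
{
  "ignition": {
    "version": "2.2.0",
    "config": {
      "append": [
        { "source": "https://example.com/site.ign" }
      ]
    }
  }
}
```

This appends the fetched config to the one on disk rather than giving true base + default semantics: if the remote config is unreachable, provisioning fails instead of falling back to a default.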

@dcode

dcode commented Nov 7, 2018

@bgilbert I've been digging into this more lately, and /usr/share/oem/config.ign is actually what I was thinking of rather than a partition. I assumed oem:// would pull that in, but now I see there's currently a dedicated OEM partition. The various layered configs definitely meet my needs if those are writable (or can be made writable).

@sghosh151

I need to integrate a tool like CyberArk for root credential management on this platform.

@alrf

alrf commented Apr 10, 2024

How does one manage FCOS on bare-metal machines distributed across different datacenters? In my case, e.g., we need to install different tools and packages (e.g. ipset, iptables) and to be able to manage configuration files on the fly with configuration management tools like SaltStack. In this regard, FCOS looks like a clumsy and almost non-configurable system: nearly every step requires changing YAML config files and reprovisioning the host. Imagine I have an FCOS cluster and want to deploy an agent for a new security tool: do I need to redeploy the entire cluster for it?

@jlebon jlebon added area/bootable-containers Related to the bootable containers effort. and removed area/container-impact labels Apr 23, 2024
8 participants