No interface to `juju-reboot` hook tool in ops #929
Comments
It's a reasonable idea -- thanks. We're thinking of doing this as a top-level function. Thoughts?
You bring up a good point. I was in pylibjuju mode, where you need to "pull down" the unit to run a command on it 😅 I think something like that could work.
Thinking about this further, …
FWIW, here's a use of this in an existing charm. We could potentially submit a PR to that charm, after this work is in a released ops version, to update it to use the new API.
Adds a new `reboot` method to `Unit` that's a wrapper around the `juju-reboot` command. `juju-reboot` is only available for machine charms. When it's run for a K8s charm, Juju logs an error message but doesn't return a response with the error (or exit non-zero), which means that we have to fail silently.

Adds a corresponding `reboot()` method to the harness backend. From the point of view of the harness, we assume that everything is essentially the same after rebooting without `--now`, so this is a no-op. The most likely change that would impact the harness would be a leadership change, but it seems better to let the Charmer model that explicitly in their tests.

When rebooting with `--now`, what should happen is that everything after that point in the event handler is skipped, there's a simulated reboot (again, roughly a no-op), and then the event is re-emitted. I went down a long rabbit hole trying to get the harness to do this, but it ended up far too messy for minimal benefit. Instead, a `SystemExit` exception is raised, and the Charmer is expected to handle it in their tests, for example:

```python
def test_installation(self):
    self.harness.begin()
    with self.assertRaises(SystemExit):
        self.harness.charm.on.install.emit()
    # More asserts here that the first part of installation was done.
    self.harness.charm.on.install.emit()
    # More asserts here that the installation continued appropriately.
```

Fixes #929
Hello there! I am currently working through a predicament in the SLURM charms where the machines need to be rebooted after package installation to apply security updates. Originally, the SLURM charms were deployed manually, so the human operator would issue a machine reboot command based on the substrate being used (e.g. `openstack server reboot <machine_id>`). We are now working to make deploying SLURM an automatic process; however, we still have the issue where the machines need to be rebooted to apply updates.

My idea was to use the `juju-reboot` hook tool located in `/var/lib/juju/tools/machine-<#>/`; however, it does not seem that ops has a direct method for invoking `juju-reboot`. It would be nice if ops had a defined method for scheduling machine reboots at the end of hook execution.

Currently I can circumvent my reboot issue by directly invoking the `juju-reboot` hook tool, but I thought that this might be a nice addition to ops. Let me know what your thoughts are!