
Backup/restore of master #661

Closed
andyjeffries opened this issue Jul 22, 2019 · 7 comments

@andyjeffries commented Jul 22, 2019

Considering how K3s is currently positioned as a single-master solution, we thought it would be nice if there was a way of backing up and restoring the K3s master content (so we could wipe the master's virtual machine, rebuild it, and re-install using a backup file supplied during installation).

Describe the solution you'd like
An option to k3s to perform a backup, and an environment variable for the installer to restore from an existing backup, as sketched below.
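
Something like the following, purely as a sketch of the proposed interface; neither the --backup flag nor the INSTALL_K3S_RESTORE_FROM variable exists in k3s today, and both names are hypothetical:

    # Hypothetical: dump the master state to a single archive
    k3s server --backup /tmp/k3s-master.tar.gz

    # Hypothetical: tell the installer to seed the new master from that archive
    curl -sfL https://get.k3s.io | INSTALL_K3S_RESTORE_FROM=/tmp/k3s-master.tar.gz sh -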

Describe alternatives you've considered
A simple bash script that copies the relevant files, but it wouldn't feel as polished.

Additional context
We're happy to take a whack at building this and submitting PRs; we just wanted to check first whether there's any interest in it.

@andyjeffries andyjeffries changed the title Backup/restore Backup/restore of master Jul 22, 2019
@dewet22 commented Jul 28, 2019

HA just got its first round of attention with v0.7.0 and will presumably keep maturing, so I'm not sure how useful this is going forward?

That said, k3s master state is all under /var/lib/rancher/k3s/server, and my current strategy is snapshotting that regularly for the time being. In fairness I haven't tested the restore part yet :D
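
A minimal sketch of that snapshot strategy, assuming a systemd install and the default SQLite datastore (the backup path and naming are mine, not anything k3s prescribes):

    #!/bin/sh
    # Stop k3s briefly so the embedded SQLite database is quiescent,
    # archive the server state directory, then restart the service.
    systemctl stop k3s
    tar -czf "/backups/k3s-server-$(date +%Y%m%d-%H%M%S).tar.gz" \
        -C /var/lib/rancher/k3s server
    systemctl start k3s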

@dheeraj29

Why not try

https://github.com/heptio/velero
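
Worth noting that Velero backs up Kubernetes API objects (and optionally volumes) rather than the underlying SQLite datastore. The basic cycle with its CLI looks roughly like this (backup name illustrative, object-store setup omitted):

    # Back up the cluster's API objects to the configured object store
    velero backup create k3s-backup

    # On a fresh cluster with Velero pointed at the same store, restore
    velero restore create --from-backup k3s-backup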

@andyjeffries (Author)

I'm thinking more along the lines of: since (below 0.7.0) k3s was generally a single-master solution and we're offering a public managed K3s service, having the ability to back up and restore the master would be great. With it moving towards a full HA system, this may be less necessary. But I wasn't thinking of backing up and restoring the cluster contents themselves.

stale bot commented Jul 31, 2021

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.

@stale stale bot added the status/stale label Jul 31, 2021
@unixfox commented Jul 31, 2021

Bump; this is still an important feature.

@stale stale bot removed the status/stale label Jul 31, 2021
@maxirus commented Nov 3, 2021

I see the callout in the Rancher Backup/Restore Docs:

Currently, backups of SQLite are not supported. Instead, make a copy of /var/lib/rancher/k3s/server and then delete K3s.

But it doesn't go into much more detail than that. Will the following steps work (sketched in shell after the list)?

  1. Make note of the K3s server version
  2. Stop the master/server
  3. Tar/zip the folder /var/lib/rancher/k3s/server
  4. Bring the new host up
  5. Restore the archive
  6. Install the same version of k3s-server
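
A shell sketch of those steps, assuming a systemd install, the SQLite datastore, and that the exact same k3s version is reinstalled (paths and file names are illustrative):

    #!/bin/sh
    # --- On the old host ---
    k3s --version > k3s-version.txt        # 1. note the server version
    systemctl stop k3s                     # 2. stop the master/server
    tar -czf k3s-server.tar.gz \
        -C /var/lib/rancher/k3s server     # 3. archive the server folder

    # --- On the new host (4), after copying both files over ---
    mkdir -p /var/lib/rancher/k3s
    tar -xzf k3s-server.tar.gz -C /var/lib/rancher/k3s   # 5. restore the archive
    # 6. install the same version recorded earlier; INSTALL_K3S_VERSION is a
    # real installer variable, and the tag is the 3rd token of `k3s --version`
    curl -sfL https://get.k3s.io | \
        INSTALL_K3S_VERSION="$(awk '{print $3}' k3s-version.txt)" sh -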

stale bot commented May 3, 2022

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.

@stale stale bot added the status/stale label May 3, 2022
@stale stale bot closed this as completed May 17, 2022