find a place for backups #1847
asked Hetzner, but they declined to sponsor us.
current state
Given we've been moving things around and a lot is hosted at Conova right now, we should prioritize this topic, as having backups in the same location is suboptimal. Right now, backups are done to ¹:

proposal
I propose we set up a new machine, give it "enough" storage (300G for starters?), configure it as a backup receiver, and mirror the current backup set there. Then flip over the service alias and clean up. We have sufficient storage free at OSUOSL (470G) and Netways (800G). Given Conova is in Europe, I'd prefer Netways due to network locality. We can then, as a second step, mirror out

alternative
We could also leave the primary backup at Conova and use mirroring to Netways (and OSUOSL?), but that would mean that if Conova goes down, we'd have to restore from a mirror, which might lag (not more than a day, but still).

worth to mention
The current backup happens as individual users for each source, so that they can't access "foreign" backups (they could not decrypt those anyway, lacking keys, but they could overwrite them). That means when mirroring, we need to either mirror the ownership correctly or fix ownership before restoring from a mirror. Mirroring the ownership requires elevated privileges (CAP_FOWNER, CAP_CHOWN and friends; root has them), which I'd prefer not to give to the mirroring process, as I don't know how to limit that to
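For illustration, a minimal sketch of what such a mirror job could look like (host name, paths, and user are assumptions, not from this issue). rsync's -a implies owner/group preservation, which only takes effect when the receiving side runs privileged (CAP_CHOWN) — exactly the trade-off described above:

# hypothetical mirror job; source path and target host are made up
# --numeric-ids copies raw uid/gid numbers instead of remapping by name,
# but the receiving rsync still needs CAP_CHOWN/root to apply them
rsync -a --numeric-ids --delete /srv/backups/ mirror@backup02.netways:/srv/backups/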
👍 to the proposal
👍 to the proposal. For the permissions: I'd say we don't bother with them. Restores are rare, and if we clearly document it in a restore procedure, it's easy to fix.
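A restore procedure could then simply reassign ownership after pulling from the mirror, along these lines (user name and path are hypothetical):

# after restoring from a mirror: give the backup set back to its source user
chown -R websrv01: /srv/backups/websrv01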
it has been flipped and verified. the key that was used to rsync the backups was deleted from backup01. if nothing wild happens today, I'll go and clean up the storage of puppet01 tomorrow.
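For reference, deleting such a key usually just means dropping the matching line from the receiving user's authorized_keys, e.g. (key comment and user name are hypothetical):

# hypothetical: remove the old rsync key by its comment on backup01
sed -i '/puppet01-mirror/d' ~backup/.ssh/authorized_keys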
cleanup in #1930 |
#1930 was merged and the data is gone from puppet01. I still need to clean up the underlying storage, so keeping this open until then.
[root@puppet01 ~]# pvdisplay -m
--- Physical volume ---
PV Name /dev/vda2
VG Name cs_puppet01
PV Size <149.00 GiB / not usable 1.98 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 38143
Free PE 33280
Allocated PE 4863
PV UUID NCxNZv-t7xS-n37q-24Uu-Q4Rj-grsf-p2LNUQ
--- Physical Segments ---
Physical extent 0 to 511:
Logical volume /dev/cs_puppet01/swap
Logical extents 0 to 511
Physical extent 512 to 4862:
Logical volume /dev/cs_puppet01/root
Logical extents 0 to 4350
Physical extent 4863 to 38142:
FREE
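Why 20G is a safe target (my arithmetic, not part of the original session): the segment listing above shows everything allocated sits in the first 4863 extents (0 to 4862) of 4 MiB each, so nothing lives beyond ~19 GiB:

# 4863 allocated PEs x 4 MiB each = 19452 MiB, just under 19 GiB,
# so a 20 GiB PV comfortably holds swap + root
echo $((4863 * 4))
# 20 GiB in 512-byte sectors -- the 41943040 figure pvresize prints below
echo $((20 * 1024 * 1024 * 1024 / 512))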
[root@puppet01 ~]# pvresize --setphysicalvolumesize 20G /dev/vda2
/dev/vda2: Requested size 20.00 GiB is less than real size <149.00 GiB. Proceed? [y/n]: y
WARNING: /dev/vda2: Pretending size is 41943040 not 312473567 sectors.
Physical volume "/dev/vda2" changed
1 physical volume(s) resized or updated / 0 physical volume(s) not resized
[root@puppet01 ~]# pvdisplay -m
--- Physical volume ---
PV Name /dev/vda2
VG Name cs_puppet01
PV Size <20.00 GiB / not usable 3.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 5119
Free PE 256
Allocated PE 4863
PV UUID NCxNZv-t7xS-n37q-24Uu-Q4Rj-grsf-p2LNUQ
--- Physical Segments ---
Physical extent 0 to 511:
Logical volume /dev/cs_puppet01/swap
Logical extents 0 to 511
Physical extent 512 to 4862:
Logical volume /dev/cs_puppet01/root
Logical extents 0 to 4350
Physical extent 4863 to 5118:
FREE
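The partition itself still has to be shrunk to the same 41943040 sectors. The session below does that interactively with cfdisk; a non-interactive equivalent (an assumption on my part, not what was actually run) would be sfdisk:

# hypothetical non-interactive variant: set partition 2 to 41943040 sectors,
# keeping its start sector unchanged (may need --force on a live disk)
echo ', 41943040' | sfdisk -N 2 /dev/vda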
[root@puppet01 ~]# cfdisk /dev/vda
<resize /dev/vda2 to 20G and ensure it also calculates 41943040 sectors>

at this point, only the first 21G of the 150G disk are used and the 129G "tail" can be removed. let's play safe and resize it to 25G though.

[root@virt01 ~]# virsh shutdown puppet01
Domain 'puppet01' is being shutdown
[root@virt01 ~]# lvreduce --size 25G cs_node01/virt_puppet01
WARNING: Reducing active logical volume to 25.00 GiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce cs_node01/virt_puppet01? [y/n]: y
Size of logical volume cs_node01/virt_puppet01 changed from 150.00 GiB (38400 extents) to 25.00 GiB (6400 extents).
Logical volume cs_node01/virt_puppet01 successfully resized.
[root@virt01 ~]# virsh start puppet01
Domain 'puppet01' started
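For anyone repeating this: before the lvreduce, one can double-check from the host that the guest's partition table really ends below the new size (a safety check I'd add, not part of the original session):

# read the guest's partition table straight from the backing LV;
# the last partition must end below 25 GiB before lvreduce is safe
fdisk -l /dev/cs_node01/virt_puppet01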
closing as completed now