Move staging dockers (200) to ovh1 #217
Some proposals to make the switch while keeping disk space well distributed:
We could move ovh2 -> ovh1 (for a total of 234 G!)
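For reference, one way to see how the space breaks down before picking which containers to move: a minimal sketch, assuming the container volumes live on a ZFS pool named `rpool` and that this runs on the Proxmox host.

```python
# Minimal sketch (assumption: container volumes live on a ZFS pool named "rpool").
# Lists per-dataset usage so we can pick which containers to move and
# check the total space that would be freed.
import subprocess

result = subprocess.run(
    ["zfs", "list", "-o", "name,used,avail", "-r", "rpool"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```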
I did synchronize the storage of 200 to ovh1 before the migration, but now the disk is full! Also, I see a difference between what the Proxmox console shows on ovh1 (Usage 98.23% (940.04 GB of 956.97 GB)) and
I will move more containers:
Monitoring was in bad shape, and ZFS usage on ovh1 was too high, so I moved monitoring (203) to ovh2. Although I'm a bit sad that monitoring is now on the same machine as the prod dockers (200).
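For reference, moving a container between cluster nodes is typically done with Proxmox's `pct migrate`; a minimal sketch of what the move of 203 might have looked like (the exact options used are an assumption):

```python
# Minimal sketch: migrate container 203 (monitoring) to the ovh2 node with
# Proxmox's pct tool, run from the node currently hosting the container.
# "--restart" asks for restart-mode migration (the container is stopped on
# the source node and started again on the target).
import subprocess

subprocess.run(["pct", "migrate", "203", "ovh2", "--restart"], check=True)
```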
It has been stable, closing.
Something was missing!!! I had to change the default route for the VMs that I moved (staging and monitoring):
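A minimal sketch of what that route change might look like from inside a moved VM, assuming it is done with `ip route replace`; the gateway address and interface name below are hypothetical, not the actual values:

```python
# Hypothetical sketch: point the VM's default route at the ovh1-side gateway
# so traffic no longer tries to exit through the old ovh2-side route.
import subprocess

NEW_GATEWAY = "10.0.0.1"  # hypothetical gateway reachable from ovh1
INTERFACE = "eth0"        # hypothetical interface name inside the VM

subprocess.run(
    ["ip", "route", "replace", "default", "via", NEW_GATEWAY, "dev", INTERFACE],
    check=True,
)
```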
Now starting the backend container on staging is fast again 🎉 (from 1m20 in good scenarios down to 0m12)!
We have latency problems from ovh2 to ovh3, which make using NFS mounts for off-net impractical.
I thought we could move the dataset clones to ovh2 (#216), but it's not feasible: the disks are 1 T, which is too small, as the products dataset is 1.5 T (I had misread the size).
But while ovh2 is 10 ms away from ovh3, ovh1 is only 0.12 ms away (nearly a 100-fold difference), so if we move the 200 VM to ovh1, we would be able to use NFS volumes from ovh3.
Task: move some services from ovh1 to ovh2 and move the 200 VM to ovh1.
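A rough illustration of why that latency gap matters for NFS: the round-trip times are the ones quoted above, while the number of NFS round trips per container start is a hypothetical value used only to make the arithmetic concrete.

```python
# Back-of-the-envelope: time spent waiting on the network for an NFS-heavy
# operation (e.g. starting the staging backend container).
RTT_MS = {"ovh2 -> ovh3": 10.0, "ovh1 -> ovh3": 0.12}  # quoted round-trip times
NFS_ROUND_TRIPS = 5_000  # hypothetical number of sequential NFS calls

for path, rtt in RTT_MS.items():
    print(f"{path}: ~{NFS_ROUND_TRIPS * rtt / 1000:.1f} s waiting on the network")
# ovh2 -> ovh3: ~50.0 s waiting on the network
# ovh1 -> ovh3: ~0.6 s waiting on the network
```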