
Move stagging dockers (200) to ovh1 #217

Closed
alexgarel opened this issue Apr 24, 2023 · 6 comments
Labels: ✨ enhancement (New feature or request), ovh1

Comments

@alexgarel
Member

We have latency problems from ovh2 to ovh3, which makes using NFS mounts for off-net impractical.
I thought we could move the dataset clones to ovh2 (#216), but it's not feasible: the disks are 1 TB too small and the products dataset is 1.5 TB (I had misread the size).

But while ovh2 is 10 ms away from ovh3, ovh1 is only 0.12 ms away (roughly a 100-fold difference), so if we move the 200 VM to ovh1, we will be able to use NFS volumes from ovh3.

Task: move some services from ovh1 to ovh2, then move the 200 VM to ovh1.
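
For context, the plan hinges on the round-trip time to ovh3 and on mounting the datasets over NFS from there. A minimal sketch of how to check both, assuming hypothetical internal hostnames and export paths (the real ones are not given in this issue):

  # run from ovh1 and from ovh2: the rtt to ovh3 is what makes or breaks the NFS idea
  ping -c 10 ovh3.internal
  # mount the datasets export read-only once the VM sits on the low-latency host
  mount -t nfs -o ro,soft,nfsvers=4 ovh3.internal:/exports/off-net /mnt/off-net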

@alexgarel
Member Author

alexgarel commented Apr 24, 2023

A proposal to make the switch while keeping disk space well distributed (a hedged migration sketch follows the lists):
We move from ovh2 -> ovh1:

  • 200 (294G) dockers-staging

We could move from ovh1 -> ovh2 (for a total of 234G!):

  • 111 (96G) wild ecoscore <-- shouldn't we remove it instead?
  • 120 (64G) mongo-test <-- to be removed, isn't it? YES
  • 108 (32G) folksonomy
  • 112 (42G) connect-stagging
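
The later comments mention the Proxmox console, so the moves would presumably go through Proxmox's migration tooling. This is a hedged sketch only; whether each ID is a QEMU VM (qm) or an LXC container (pct) isn't stated here, so both forms are shown:

  # run on the source node; pick qm or pct depending on the guest type
  qm migrate 200 ovh1 --online        # live-migrate the staging VM from ovh2 to ovh1
  pct migrate 111 ovh2 --restart      # restart-mode migration of a running container from ovh1 to ovh2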

@alexgarel changed the title from "Move stagging docker (200) to ovh1" to "Move stagging dockers (200) to ovh1" on Apr 24, 2023
@alexgarel
Member Author

alexgarel commented Apr 27, 2023

I synchronized the storage of 200 onto ovh1 before the migration, but now the disk is full!
I underestimated the size: as it's block storage, I should have counted the max size (322G) instead of the real size… (294G).

Also, there is a difference between what the Proxmox console shows on ovh1 (Usage 98.23%, 940.04 GB of 956.97 GB) and the zpool list command:

 rpool   920G   886G  33.8G        -         -    76%    96%  1.00x    ONLINE  -
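
Some gap between the two is expected: zpool list reports raw pool-level allocation (with metadata overhead, in binary units shown as G), while the Proxmox summary is closer to filesystem-level usage and its unit labels differ. A small sketch of commands to compare both views, using the rpool name already shown above:

  # compare raw pool allocation with filesystem-level usage on ovh1
  zpool list rpool              # SIZE/ALLOC/FREE at the pool level, includes overhead
  zfs list -o space rpool       # USED/AVAIL as the datasets see it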

I will move more containers:

  • 106 (51.54G) mirabelle
  • 110 (45G) crm
  • 103 (34G) mastodon

@alexgarel
Member Author

Monitoring was in bad shape, and ZFS usage on ovh1 was also too high.

So I moved monitoring (203) to ovh2, although I'm a bit sad that monitoring is now on the same machine as the prod dockers (200).

@alexgarel
Member Author

It has been stable, closing.

@alexgarel
Member Author

Something was missing!

I had to change the default route for the VMs that I moved (staging and monitoring); see the sketch below:

  1. Edited /etc/network/interfaces to change the gateway to the new host's internal address (10.0.0.x)
  2. In a screen session (to avoid losing the SSH connection between the two commands!), ran:
    ip route del default via 10.0.0.2 dev ens18 onlink ; ip route add default via 10.0.0.1 dev ens18 onlink (or the other way around)
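
A minimal sketch of both steps, assuming the Debian ifupdown setup implied above; which of 10.0.0.1 / 10.0.0.2 is the old vs. new gateway depends on the direction of the move:

  # /etc/network/interfaces on the moved VM: point the gateway at the new host's internal address
  #   gateway 10.0.0.1        # was 10.0.0.2 before the move (or vice versa)
  # then swap the live default route in one line, inside screen, so the SSH session survives
  ip route del default via 10.0.0.2 dev ens18 onlink ; ip route add default via 10.0.0.1 dev ens18 onlink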

@alexgarel
Member Author

Now starting the backend container on staging is fast again 🎉 (from 1m20 in good scenarios down to 0m12)!

@teolemon added the ovh1 label on Aug 19, 2024