We have to move monitoring to its own VM #89
Comments
@CharlesNepote and @cquest, do you agree?
#81 also has the same issue
@CharlesNepote, could we provision a QEMU VM, possibly on ovh1? I'm not sure how many resources we need, but I can look into it.
I would give:
@CharlesNepote, could we provision this on ovh1 as a QEMU VM? It would then be quite easy to move the monitoring stack there.
Own VM, or separate hardware outside of the free+OVH bare metals?
@cquest, do you have a proposal? If we want to rent a server elsewhere, I would take one either at OVH in a different DC, or at Scaleway.
Let's start with OVH1, I guess. We still have 400 GB of disk and 190 GB of RAM free on OVH1, so let's use it! Be aware that it is easy to increase the disk space of a VM, while decreasing it is not. So I would start with a machine with 80 GB and increase it when needed.
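The grow-but-not-shrink point above is why starting small matters. A minimal sketch with `qemu-img` (the image name `monitoring.qcow2` is a hypothetical example, not from this thread):

```shell
# Create the VM disk at the conservative initial size.
qemu-img create -f qcow2 monitoring.qcow2 80G

# Growing later is a single safe command (run while the VM is stopped,
# then extend the partition/filesystem inside the guest):
qemu-img resize monitoring.qcow2 +40G

# Shrinking, by contrast, requires --shrink and risks destroying guest
# data unless the filesystem inside was shrunk first -- hence: start small.
qemu-img info monitoring.qcow2
```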
@cquest: why a separate machine?
I think we'd better start on OVH1 and then move if someone sponsors us a VM.
Created the ticket for the VM; I will now create the VM.
It's done:
Describe the bug
All the monitoring infrastructure is on preprod 200, and this is absolutely not a good idea.
Every time we want to update the preprod data (the ZFS clone containing sto), we have to reboot the machine.
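The reboot is needed because refreshing a ZFS clone means destroying and recreating it, which fails while anything (such as the monitoring stack) still holds the dataset open. A hedged sketch of the refresh cycle — the pool, dataset, and snapshot names here are assumptions, not taken from this issue:

```shell
# Tear down the old clone; this fails with "dataset is busy" if any
# process still uses it -- which is why the whole machine gets rebooted.
zfs destroy rpool/preprod-clone

# Re-clone preprod data from a fresh snapshot of the source dataset.
zfs clone rpool/sto@latest rpool/preprod-clone
```

With monitoring moved off this host, the clone can be refreshed without taking anything else down.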
Expected behavior
I propose to put monitoring in its own container, because it is far better when monitoring is independent!
(I propose to take the resources from machine 105 or 103.)