Enabling Cluster Monitoring through API sets different memory limits than UI #25103
It seems we need a small change in the monitoring chart to address the fixes in https://github.com/rancher/rancher/issues/19349, so that our monitoring chart matches the latest CPU/memory defaults set by the UI.
We'll need to update the following to match: this is where the monitoring chart needs to be updated.
@prachidamle The default Prometheus memory limit is about to be changed in the UI. Could you match those values in your fix?
@prachidamle Since the PR is going to dev-v2.5, could you also create a backport issue for 2.5 and open a dev-v2.6 PR linked to this issue?
Pass: Verified in
Result:
What kind of request is this (question/bug/enhancement/feature request):
Bug
Steps to reproduce (least amount of steps as possible):
Enable monitoring via the API: View in API for the cluster, click Edit in the top right corner under operations, check the enableClusterMonitoring checkbox, and click the Show Request button. Click the Send Request button.
Result:
For cluster A, the Prometheus memory limit is 1 CPU; for cluster B, it is 500m CPU. These should be the same. Preferably, the API default should be changed to 1 CPU, not 500m.
cluster A:
cluster B:
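For reference, the edit request sent after clicking Show Request carries the monitoring flag on the cluster object. A minimal sketch of the request body is below; the cluster name is a placeholder, and any fields beyond enableClusterMonitoring (which appears in the reproduction steps above) are illustrative, not taken from the actual API payload:

```json
{
  "name": "clusterA",
  "enableClusterMonitoring": true
}
```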
Other details that may be helpful:
Should make sure all UI and API defaults match.
API defaults appear to come from here for these particular limits:
https://github.com/rancher/system-charts/blob/dev/charts/rancher-monitoring/v0.0.7/values.yaml#L326-L328
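The linked lines sit in the chart's values.yaml, which uses the standard Kubernetes resource-limit shape. A hedged sketch of what such a block looks like is below; the exact key paths and values are assumptions and should be confirmed against the linked file, not taken as the chart's actual contents:

```yaml
# Sketch of a Helm values.yaml resource block (keys/values illustrative)
prometheus:
  resources:
    limits:
      cpu: 500m       # API default reported in this issue; UI uses 1 CPU
      memory: 1000Mi  # illustrative value, confirm against the chart
```

Raising the cpu limit here to 1000m would bring the chart's API-driven default in line with the UI default described above.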
Environment information
Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI): v2.3.4
Installation option (single install/HA): single install
Cluster information
Cluster type (Hosted/Infrastructure Provider/Custom/Imported): Custom
Machine type (cloud/VM/metal) and specifications (CPU/memory): t3a.medium on AWS
Kubernetes version (kubectl version):
Docker version (docker version):
gz#8107
SURE-1691