
Enabling Cluster Monitoring through API sets different memory limits than UI #25103

Closed
dnoland1 opened this issue Jan 24, 2020 · 5 comments

Comments

@dnoland1
Contributor

dnoland1 commented Jan 24, 2020

What kind of request is this (question/bug/enhancement/feature request):
Bug

Steps to reproduce (least amount of steps as possible):

  1. Create clusterA (default options, using 3 nodes as worker/cp/etcd)
  2. Enable cluster monitoring in the UI with default values by going to Tools -> Monitoring and clicking the Enable button
  3. Create clusterB (default options, using 3 nodes as worker/cp/etcd)
  4. Enable cluster monitoring through the API: select View in API for the cluster, click Edit in the top right corner under the operations menu, check the enableClusterMonitoring checkbox, click Show Request, then click Send Request (a rough curl equivalent is sketched below the list)
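
For reference, a rough curl equivalent of step 4 (a sketch only: the server URL, API token, and cluster ID below are placeholders, and the UI's Show Request actually resends the full cluster object rather than just this one field):

# Sketch: enable cluster monitoring on clusterB via the Rancher v3 API.
RANCHER=https://rancher.example.com      # placeholder Rancher server URL
TOKEN=token-xxxxx:secretkey              # placeholder API key (accessKey:secretKey)
CLUSTER_ID=c-abcde                       # placeholder ID of clusterB

curl -sk -u "$TOKEN" \
  -X PUT "$RANCHER/v3/clusters/$CLUSTER_ID" \
  -H 'Content-Type: application/json' \
  -d '{"enableClusterMonitoring": true}'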

Result:
For clusterA, the Prometheus memory limit is 1000Mi; for clusterB, the Prometheus memory limit is 500Mi. These should be the same. Preferably the API default should be changed to 1000Mi, not 500Mi.

cluster A:

> kubectl get statefulset -A -o yaml | grep limits -A2
            limits:
              cpu: "1"
              memory: 1000Mi
--
            limits:
              cpu: 100m
              memory: 25Mi
--
            limits:
              cpu: 100m
              memory: 25Mi
--
            limits:
              cpu: 100m
              memory: 100Mi
--
            limits:
              cpu: 500m
              memory: 200Mi

cluster B:

> kubectl get statefulset -A -o yaml | grep limits -A2
            limits:
              cpu: "1"
              memory: 500Mi    <----- this does not match
--
            limits:
              cpu: 100m
              memory: 25Mi
--
            limits:
              cpu: 100m
              memory: 25Mi
--
            limits:
              cpu: 100m
              memory: 100Mi
--
            limits:
              cpu: 500m
              memory: 200Mi

Other details that may be helpful:
We should make sure all UI and API defaults match.

API defaults appear to come from here for these particular limits:
https://github.com/rancher/system-charts/blob/dev/charts/rancher-monitoring/v0.0.7/values.yaml#L326-L328
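
To see exactly what the chart ships for those limits, the referenced lines can be pulled from the raw file (a sketch; the line range just brackets the L326-L328 anchor from the link above):

curl -s https://raw.githubusercontent.com/rancher/system-charts/dev/charts/rancher-monitoring/v0.0.7/values.yaml \
  | sed -n '320,335p'   # prints the Prometheus resource defaults around the linked lines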

Environment information

  • Rancher version (rancher/rancher or rancher/server image tag, or shown at the bottom left of the UI):
    v2.3.4
  • Installation option (single install/HA):
    single install

Cluster information

  • Cluster type (Hosted/Infrastructure Provider/Custom/Imported):
    Custom
  • Machine type (cloud/VM/metal) and specifications (CPU/memory):
    t3a.medium on aws
  • Kubernetes version (use kubectl version):
> kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-02T17:01:15Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
  • Docker version (use docker version):
$ docker version
Client: Docker Engine - Community
 Version:           19.03.5
 API version:       1.40
 Go version:        go1.12.12
 Git commit:        633a0ea838
 Built:             Wed Nov 13 07:22:05 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.5
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.12
  Git commit:       633a0ea838
  Built:            Wed Nov 13 07:28:45 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

gz#8107
SURE-1691

@aiyengar2
Contributor

It looks like we need a small change in the monitoring chart to pick up the fixes from https://github.com/rancher/rancher/issues/19349, so that the chart matches the latest CPU/memory defaults set by the UI.

@jiaqiluo
Member

@prachidamle The default Prometheus memory limit is about to be changed in the UI. Could you match those values in your fix?

@deniseschannon

@prachidamle Since the PR is going to dev-v2.5, can you make a backport issue for 2.5 as well as create a dev-v2.6 PR to link to this issue?

@ronhorton

Pass: verified in 2.6-head, commit ID 8cc8121

  1. Created cluster A
  2. Created cluster B
  3. Switched to the Ember UI
  4. Enabled monitoring v1 via the UI for cluster A
  5. Enabled monitoring v1 via the API for cluster B

Result: limits match whether monitoring is enabled from the UI or the API.
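
A quick way to spot-check the limits on both clusters (a sketch; it assumes monitoring v1's default namespace cattle-prometheus, the StatefulSet name prometheus-cluster-monitoring, and kubeconfig contexts named clusterA/clusterB):

# Print the Prometheus container limits for each cluster.
for ctx in clusterA clusterB; do
  echo "== $ctx =="
  kubectl --context "$ctx" -n cattle-prometheus get statefulset prometheus-cluster-monitoring \
    -o jsonpath='{.spec.template.spec.containers[?(@.name=="prometheus")].resources.limits}{"\n"}'
done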
