Rook 1.2 Ceph OSD Pod memory consumption very high #5821
Comments
It seems the Grafana dashboard was wrong. I went into the node hosting the pod, found the pod's process ID, and it is actually taking much less memory than the dashboard showed.

Correct Grafana metrics calculation: A Deep Dive into Kubernetes Metrics — Part 3 Container Resource Metrics
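For anyone wanting to repeat that node-side check, here is a minimal sketch (assuming shell access to the node; the `pgrep`/`ps` usage is illustrative, not necessarily what was actually run):

```shell
# On the node hosting the OSD pod: list ceph-osd processes,
# then print resident (RSS) and virtual (VSZ) memory for one PID.
pgrep -a ceph-osd
ps -o pid,rss,vsz,comm -p "$(pgrep -o ceph-osd)"
```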
You closed this issue; if you have already resolved this problem, could you tell us how you resolved it?
@JieZeng1993 From what I understand, @alexcpn "fixed" the issue by fixing the query for the graph used to monitor the memory usage of Kubernetes Pods (including the OSD Pods).
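For reference, the classic pitfall the linked article describes (and presumably what was fixed here, though the exact dashboard change isn't shown in this thread) is graphing `container_memory_usage_bytes`, which includes reclaimable page cache, instead of `container_memory_working_set_bytes`, which is what the kubelet and the OOM killer act on. Note that the pod label name varies across Kubernetes/cAdvisor versions (`pod_name` in older releases):

```
# Overstates "used" memory: includes page cache the kernel can reclaim
sum(container_memory_usage_bytes{pod=~"rook-ceph-osd-.*"}) by (pod)

# Closer to what matters: the working set the OOM killer evaluates
sum(container_memory_working_set_bytes{pod=~"rook-ceph-osd-.*"}) by (pod)
```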
I'm facing the same issue: every OSD pod is consuming 4GB of RAM on average. Can anyone let me know why it needs so much?
@raj-katonic Did you set memory requests/limits on the OSDs? See this topic. The general recommendation is 4GB per OSD in production, though smaller clusters can set it lower if needed. If these limits are not set, the OSD can potentially use a lot more memory, since it is not aware of any limits. A sketch of what that looks like is below.
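For concreteness, a minimal sketch of what those requests/limits look like in the CephCluster CR (the values are illustrative, not a recommendation for any particular cluster; check the Rook docs for your version):

```yaml
# cluster.yaml (excerpt): per-daemon resources in the CephCluster CR
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  resources:
    osd:
      requests:
        cpu: "1"        # illustrative values
        memory: "4Gi"
      limits:
        memory: "8Gi"
```

As far as I understand, once a memory request/limit is in place, Rook derives an `osd_memory_target` for each OSD from it, so BlueStore trims its caches to stay within that budget.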
@travisn I have a question: if the default `osd_memory_target` is 4G, why do we still need to set requests/limits on the OSDs? And why is Ceph OSD pod memory consumption much higher than 4G?
I'm not sure about the …
@travisn I have noticed that …
@microyahoo Did you try setting the resource limits? Setting the resource limits is the recommended way to have the OSDs respect memory usage instead of growing so large.
@travisn No, I didn't set the resource limits. I'm just curious why the …
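One thing worth checking from the toolbox is what target the running OSDs actually have. Keep in mind that `osd_memory_target` is a best-effort target for BlueStore cache autotuning, not a hard cap, so RSS can exceed it (e.g. under heap fragmentation or recovery load). A sketch using the standard `ceph config` commands (the OSD id is illustrative):

```shell
# From the rook-ceph toolbox: inspect the effective memory target
ceph config get osd osd_memory_target      # cluster-wide default
ceph config show osd.0 osd_memory_target   # what osd.0 is actually using
```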
Related issues:
- #5811
- #2764
- ceph/ceph#26856
Is this a bug report or feature request?
Bug Report
Deviation from expected behavior:
There is no guideline for setting the rook-ceph pod memory limits, so we haven't set any. However, even though the internal osd_memory_target is set to the default 4 GB, I can see in the top command inside the OSD pod that ceph-osd is taking 8 GB of resident set memory and even more virtual memory.
[Screenshot: top command inside the OSD pod]
[Screenshot: Grafana dashboard]
[Screenshot: memory growth over the last 14 days]
Expected behavior:
At most, the Ceph OSD pod should take 4GB for the ceph-osd process, plus perhaps 1 or 2 GB more for the other processes running inside the pod.
How to reproduce it (minimal and precise):
Observed after running the cluster for a few days.
Environment:
- OS: CentOS
- Kernel (`uname -a`):
- Hardware: HP Servers
- Rook version (`rook version` inside of a Rook Pod): rook 1.2
- Storage backend version (`ceph -v`): ceph version 14.2.9 (581f22da52345dba46ee232b73b990f06029a2a0) nautilus (stable)
- Kubernetes version (`kubectl version`): Kubernetes 1.16
- Kubernetes cluster type: Custom
- Storage backend status (`ceph health` in the Rook Ceph toolbox): HEALTH_WARN crush map has legacy tunables (require firefly, min is hammer); 9 pool(s) have non-power-of-two pg_num; too many PGs per OSD (766 > max 250)