Bug description
I deployed KES to Kubernetes using the MinIO Operator and noticed that, after some time, all replicas reach their memory limit in the same sequence. This looks like a memory leak to me. Has anything already been reported about this?
Additional context
Deployed by minio-operator
Tenant configuration (via kubectl describe tenants.minio.min.io ...):
Kes:
  Annotations:
  Image:              minio/kes:2024-01-11T13-09-29Z
  Image Pull Policy:  IfNotPresent
  Kes Secret:
    Name:  dc-storage-kes-configuration
  Key Name:  default-minio-key
  Node Selector:
    kubernetes.io/arch:       arm64
    node.kubernetes.io/role:  agent
  Replicas:  3
  Resources:
    Limits:
      Cpu:     300m
      Memory:  400Mi
    Requests:
      Cpu:     100m
      Memory:  100Mi
  Security Context:
    Fs Group:         1000
    Run As Group:     1000
    Run As Non Root:  true
    Run As User:      1000
This commit fixes a TCP connection leak in the AWS, GCP, Fortanix and
Gemalto KMS backends. Due to a missing `http.Response.Body.Close`
call, the status check in these backends accumulated TCP connections
that were never closed by the runtime.
This resource leak can cause OOM issues.
Fixes #445
Signed-off-by: Andreas Auernhammer <github@aead.dev>
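For illustration, here is a minimal sketch of the pattern the commit describes, using a hypothetical status-check function and endpoint rather than the actual KES backend code: every HTTP probe must close (and ideally drain) the response body, otherwise the underlying TCP connection is never released and each check leaks a connection until the pod hits its memory limit.

```go
// Sketch only (not the actual KES code): a periodic status check
// against a KMS backend. The defer'd Body.Close is the kind of call
// whose absence caused the leak described in the commit message.
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkStatus probes a hypothetical KMS status endpoint.
func checkStatus(ctx context.Context, client *http.Client, endpoint string) error {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, endpoint+"/status", nil)
	if err != nil {
		return err
	}
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	// Without this Close, the connection stays open after every check;
	// draining the body first lets the transport reuse the connection.
	defer resp.Body.Close()
	_, _ = io.Copy(io.Discard, resp.Body)

	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("status check failed: %s", resp.Status)
	}
	return nil
}

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// Placeholder address standing in for a real KMS backend.
	if err := checkStatus(context.Background(), client, "https://kms.example.internal"); err != nil {
		fmt.Println("unhealthy:", err)
	}
}
```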