Memory leak in container #445

Closed
sebboer opened this issue Feb 13, 2024 · 2 comments · Fixed by #451
Labels: community, needs-investigation (Further information is needed)

Comments


sebboer commented Feb 13, 2024

Bug description

I deployed KES to Kubernetes using the MinIO Operator and noticed that after some time the memory limits are reached for all replicas, in the same sequence. This looks like a memory leak to me. Has anything already been reported about this?

[Screenshot (CleanShot 2024-02-13 at 19:24:57): memory usage of the KES replicas climbing until the configured limit is reached]

Additional context

Deployed by minio-operator
Tenant configuration (via kubectl describe tenants.minio.min.io ...):

 Kes:
    Annotations:
    Image:              minio/kes:2024-01-11T13-09-29Z
    Image Pull Policy:  IfNotPresent
    Kes Secret:
      Name:    dc-storage-kes-configuration
    Key Name:  default-minio-key
    Node Selector:
      kubernetes.io/arch:       arm64
      node.kubernetes.io/role:  agent
    Replicas:                   3
    Resources:
      Limits:
        Cpu:     300m
        Memory:  400Mi
      Requests:
        Cpu:     100m
        Memory:  100Mi
    Security Context:
      Fs Group:            1000
      Run As Group:        1000
      Run As Non Root:     true
      Run As User:         1000
Name:               dc-storage-kes
CreationTimestamp:  Tue, 09 Jan 2024 10:28:01 +0100
Selector:           v1.min.io/kes=dc-storage-kes
Labels:             <none>
Annotations:        <none>
Replicas:           3 desired | 3 total
Update Strategy:    RollingUpdate
Pods Status:        3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Containers:
   kes:
    Image:      minio/kes:2024-01-11T13-09-29Z
    Port:       7373/TCP
    Host Port:  0/TCP
    Args:
      server
      --config=/tmp/kes/server-config.yaml
      --auth=off
    Limits:
      cpu:     300m
      memory:  400Mi
    Requests:
      cpu:     100m
      memory:  100Mi
    Environment:
      MINIO_KES_IDENTITY:  XXXX
    Mounts:
      /tmp/kes from dc-storage-kes (rw)
  Volumes:
   dc-storage-kes:
    Type:                Projected (a volume that contains injected data from multiple sources)
    SecretName:          dc-storage-kes-configuration
    SecretOptionalName:  <nil>
    SecretName:          dc-storage-kes-tls
    SecretOptionalName:  <nil>
Volume Claims:           <none>
Events:                  <none>

aead commented Mar 1, 2024

Hi @sebboer, such behavior has not been observed for 2024-01-11T13-09-29Z. Which KMS backend are you using, HashiCorp Vault or something else?

aead added the needs-investigation (Further information is needed) label Mar 1, 2024

sebboer commented Mar 1, 2024

AWS SecretsManager / AWS-KMS

aead added a commit that referenced this issue Mar 5, 2024
This commit fixes a TCP conn leak in the AWS, GCP, Fortanix and
Gemalto KMS backends. Due to a missing `http.Response.Body.Close`
call, the status check in these backends accumulated TCP connections
that were never closed by the runtime.

This resource leak can cause OOM issues.

Fixes #445

Signed-off-by: Andreas Auernhammer <github@aead.dev>
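
For context, the leak pattern described in the commit message corresponds roughly to the Go sketch below. This is illustrative only, with hypothetical function and endpoint names; it is not the actual KES backend code.

```go
package main

import (
	"io"
	"net/http"
	"time"
)

// checkStatusLeaky illustrates the bug described above: the response body is
// never closed, so the underlying keep-alive TCP connection is never released
// or reused. Repeated status checks accumulate connections and memory.
func checkStatusLeaky(client *http.Client, endpoint string) error {
	resp, err := client.Get(endpoint) // hypothetical KMS status endpoint
	if err != nil {
		return err
	}
	_ = resp // BUG: missing resp.Body.Close()
	return nil
}

// checkStatusFixed shows the fix: drain and close the body so the transport
// can return the connection to its idle pool instead of leaking it.
func checkStatusFixed(client *http.Client, endpoint string) error {
	resp, err := client.Get(endpoint)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	_, _ = io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
	return nil
}

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// A periodic health check like this, using the leaky variant, would grow
	// its set of open TCP connections over time; the fixed variant reuses one.
	for range time.Tick(10 * time.Second) {
		_ = checkStatusFixed(client, "https://kms.example.invalid/health")
	}
}
```

Whether the fix in #451 takes exactly this shape is not shown here; the point is only that every `http.Response` body must be closed, even when the caller ignores its contents.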
aead closed this as completed in #451 Mar 5, 2024