Possible memory leak #41
Yes, I'm dealing with the same problem. I just restarted one of my instances at around 3.5 GB of RAM usage.
I used Valgrind to run AutoKuma and got this:
I'll try to run further tests. How can I enable the trace logs?
You can enable trace logs via an environment variable. I'm not sure how useful memcheck is going to be here, though. I suspect there are some tokio tasks getting stuck, but I still can't reproduce this here... I've added tokio-console support on master; you can run it like this:
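For Docker Compose users, a minimal fragment for turning on trace logging might look like the following. This is a sketch based on the `RUST_LOG` value quoted later in this thread; the service name and exact log filter are assumptions, not confirmed project documentation:

```yaml
services:
  autokuma:
    environment:
      # Log filter value as reported by another user in this thread
      RUST_LOG: "kuma_client=trace,autokuma=trace"
```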
I also saw warn messages similar to @undaunt's, and I never understood why I get them. So this might be related to Uptime Kuma connection handling.
I wanted to add that I'm experiencing the same behavior running on Unraid. Unfortunately no logs. I'll see if I can capture something. |
Getting similar memory leak issues here on version 0.7.0 |
I'm not able to reproduce this on my end. Can someone affected either provide a reproducible example (i.e. a docker-compose file, etc.) or be willing to dissect the issue (by stopping all containers except uptime-kuma/autokuma, seeing if the issue is gone, and then starting them up one-by-one until the issue occurs again)?
Yes, absolutely. Here's my compose file:

```yaml
version: "3.3"
services:
  uptime-kuma:
    restart: unless-stopped
    image: louislam/uptime-kuma:1
    container_name: uptime_kuma
    ports:
      - "3003:3001"
    volumes:
      - ./config:/app/data
      - /var/run/docker.sock:/var/run/docker.sock

  autokuma:
    container_name: autokuma
    image: ghcr.io/bigboot/autokuma:latest
    restart: unless-stopped
    environment:
      AUTOKUMA__KUMA__URL: http://uptime-kuma:3001
      AUTOKUMA__KUMA__USERNAME: <USERNAME>
      AUTOKUMA__KUMA__PASSWORD: <PASSWORD>
      AUTOKUMA__KUMA__CALL_TIMEOUT: 5
      AUTOKUMA__KUMA__CONNECT_TIMEOUT: 5
      AUTOKUMA__TAG_NAME: AutoKuma
      AUTOKUMA__TAG_COLOR: "#42C0FB"
      AUTOKUMA__DEFAULT_SETTINGS: |-
        docker.docker_container: {{container_name}}
        http.max_redirects: 10
        *.max_retries: 3
        *.notification_id_list: { "1": true } # Discord
      AUTOKUMA__SNIPPETS__DOCKER: |-
        {{container_name}}_docker.docker.name: {{container_name}}
        {{container_name}}_docker.docker.docker_container: {{container_name}}
        {{container_name}}_docker.docker.docker_host: 1
      AUTOKUMA__DOCKER__SOCKET: /var/run/docker.sock
    depends_on:
      uptime-kuma:
        condition: service_healthy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  # Getting the leak with and without a container being monitored via snippets
  # test:
  #   container_name: test
  #   image: busybox
  #   restart: unless-stopped
  #   command: sleep infinity
  #   labels:
  #     - "kuma.__docker"
```
Same issue here; I stopped the container after it reached 15 GB of RAM usage.
By deleting the configuration line by line, I managed to avoid the memory leak with this config:

```yaml
SUPERVISION_AutoKuma_2:
  container_name: SUPERVISION_AutoKuma_2
  image: ghcr.io/bigboot/autokuma:0.7.0
  environment:
    AUTOKUMA__KUMA__URL: https://uptime-kuma.domain.com
```

Just add the username + password back and the memory leak resumes. Here are the logs during a memory leak, captured with the `RUST_LOG` environment variable set to `"kuma_client=trace,autokuma=trace"`:
Thank you @ITM-AP, I was finally able to find the problem and will issue a fix later today.
I've been running AutoKuma for a couple of days and I noticed that its memory usage is always increasing:
I'm running the latest version of AutoKuma (0.6.0) on Docker.