
Memory leak with simple instance prometheus-0.17.0.linux-amd64 #1526

Closed
hvnsweeting opened this Issue Apr 5, 2016 · 4 comments

hvnsweeting commented Apr 5, 2016

I'm running a single instance of prometheus-0.17.0.linux-amd64 with the config below:

global:
  scrape_interval:     15s # By default, scrape targets every 15 seconds.

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    target_groups:
      - targets: ['localhost:9090']
  - job_name: 'prometheus_system'
    scrape_interval: 5s
    target_groups:
      - targets: ['localhost:9100']
rule_files:
  - 'alert.rules'

rules:

# cat alert.rules
ALERT memfree_alert_hvn
IF node_memory_MemFree > 10000000

ALERT free_disk
IF (node_filesystem_avail{mountpoint="/", device="/dev/disk/by-label/DOROOT"} / node_filesystem_size * 100) < 20
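
Side note: as written, both rules fire as soon as their expressions are true. If that turns out to be noisy, a FOR clause can delay firing until the condition has held for a while; a sketch, with 5m as an arbitrary duration:

ALERT memfree_alert_hvn
  IF node_memory_MemFree > 10000000
  FOR 5m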

I ran it with this command:

./prometheus -config.file=config.yml -alertmanager.url=http://localhost:9093

Here is a graph showing the process leaking memory:

[graph: prom_memleak]

juliusv commented Apr 5, 2016

That's likely not "leaked" memory, but Prometheus's internal chunk buffers slowly filling up (it should level out at some point, but your graph is still fairly low, at 400MB).

If you don't have enough RAM, try tuning the flags mentioned in http://prometheus.io/docs/operating/storage/#memory-usage, especially storage.local.memory-chunks.
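
For example (just a sketch; 262144 is an arbitrary value for a small machine, and at roughly 1KiB per chunk it would cap the chunk buffers at about 256MB):

./prometheus -config.file=config.yml -alertmanager.url=http://localhost:9093 -storage.local.memory-chunks=262144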

juliusv commented Apr 5, 2016

You can also plot the number of chunks currently held in memory, prometheus_local_storage_memory_chunks, and compare it to your memory graph. The default limit is 1048576 memory chunks, which results in a RAM usage of around 3GB.
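
For example, plotting both of these in the expression browser makes the comparison easy (process_resident_memory_bytes is another metric Prometheus exposes about itself):

prometheus_local_storage_memory_chunks
process_resident_memory_bytes

At roughly 1KiB per chunk, 1048576 chunks account for about 1GiB of raw chunk data; the ~3GB total presumably includes indexes and other per-series overhead on top of that.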


hvnsweeting commented Apr 5, 2016

Thank you for the explanation. I've updated the getting started docs to help the next user avoid the same problem:
prometheus/docs#380

@hvnsweeting hvnsweeting closed this Apr 6, 2016

lock bot commented Mar 24, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 24, 2019
