High memory usage 0.12.2 #6513
Comments
What does
It looks like you have about 5M series. Are you running any queries? If so, what are they?
No, I'm just adding new metrics.
Leaking connections, possibly? Are your writes batched? How big are the batches? How many writers?
Oh, I see. At that moment I have a stack trace in my influx logs.
What is your writer process written in?
No. It's written in Python, and I have lots of them. I will provide information about batch size shortly.
The discovery process uses libcurl to spool the metrics. Each discovery process (72 in total) spools all collected metrics every 250ms in a single batch. At the moment we collect ~100 metrics from 50k objects every 300 seconds, and we plan to increase the number of objects to 250k, so the expected size of a batch will be about 4200 lines.
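For illustration, a minimal sketch of what such a batched line-protocol writer might look like in Python (the measurement, tag, and database names below are hypothetical, and `requests` stands in for the libcurl-based spooler described above):

```python
# Minimal sketch of a batched line-protocol writer (hypothetical names).
# Each call posts one batch of points to InfluxDB's /write endpoint.
import time
import requests  # stand-in for the libcurl/pycurl spooler

INFLUX_WRITE_URL = "http://localhost:8086/write"
DATABASE = "metrics"  # hypothetical database name

def write_batch(points):
    """points: list of line-protocol strings, e.g.
    'object_metrics,object=obj-00001 value=0.42 1462345678000000000'"""
    body = "\n".join(points)
    resp = requests.post(INFLUX_WRITE_URL,
                         params={"db": DATABASE, "precision": "ns"},
                         data=body.encode("utf-8"),
                         timeout=5)
    resp.raise_for_status()

# Example: spool a ~4200-line batch, as described above.
batch = [
    "object_metrics,object=obj-%05d value=%f %d" % (i, 0.5, time.time_ns())
    for i in range(4200)
]
write_batch(batch)
```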
OK, it looks like we found and fixed a problem with our discovery process. I am unsure whether I have to open another issue or not, but the reason is probably different: after a restart it began to compact the DB again, and at the moment Influx is still not able to accept metrics. I think the first problem is architectural and can't be fixed soon, but can I handle the second one somehow?
The restart time is a known issue with some datasets. The issue to follow is #6250. There is lock contention when reloading the in-memory index that slows some datasets down a lot. As for the compactions: are you overwriting points or writing to series in the past? Can you provide some sample data that you're writing?
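For context on what "overwriting points" means here: in InfluxDB, a point written with the same measurement, tag set, and timestamp as an existing point replaces it rather than adding a new one, and repeated overwrites or writes far in the past tend to generate extra compaction work. A small illustrative example (measurement and tag names are hypothetical):

```python
# Two line-protocol points with identical measurement, tags, and timestamp:
# the second write overwrites the field value of the first instead of
# creating a new point, which means the storage engine has to reconcile
# the duplicates during compaction.
ts = 1462345678000000000  # nanosecond timestamp

first_write  = "object_metrics,object=obj-00001 value=0.50 %d" % ts
second_write = "object_metrics,object=obj-00001 value=0.75 %d" % ts
# After both writes, only value=0.75 remains at this timestamp.
```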
Possible duplicate of #6243? |
@freeseacher When your heap starts to spike, would you be able to grab a snapshot of the heap and goroutines and attach the output here?
#6618 should help startup time somewhat. @freeseacher Are you able to grab a heap and goroutine profile using the commands above when your heap starts to spike?
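The exact commands referenced above are not preserved in this thread; a rough equivalent for capturing heap and goroutine snapshots from InfluxDB's standard /debug/pprof endpoints might look like the following sketch (assuming the default HTTP bind address of localhost:8086):

```python
# Sketch: capture heap and goroutine snapshots from InfluxDB's
# /debug/pprof endpoints (the standard Go net/http/pprof handlers).
import urllib.request

BASE = "http://localhost:8086/debug/pprof"

def save(endpoint, filename):
    with urllib.request.urlopen("%s/%s" % (BASE, endpoint), timeout=30) as resp:
        with open(filename, "wb") as out:
            out.write(resp.read())

save("heap", "heap.pprof")                   # binary heap profile for `go tool pprof`
save("goroutine?debug=1", "goroutines.txt")  # human-readable goroutine dump
```

The resulting heap.pprof can then be inspected with `go tool pprof`, as in the report below.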
Bug report
System info: [version 0.12.2, RHEL 7.2]
Steps to reproduce:
```
ps wavx | grep influx
6940 ? Sl 25:19 109030 5031 120035932 *74729508* 75.6 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
```
Expected behavior: memory usage under 20G
Actual behavior: memory usage above 75G
With

```
go tool pprof /bin/influxd http://localhost:8086/debug/pprof/heap
```

I got the result in the attached file:
usage.zip
My config is basically the default one:
config.zip
Stats and other diagnostics:
stats.zip
logs.zip
Please help me understand why the memory usage is so extremely high.