High memory usage #5440
Comments
@tahmasebi What do you mean by …?
@mjdesa The query execution takes a lot of time. One time it didn't return anything for 30 minutes, and then I cancelled the query by …
How many cores does the machine have?
1 core. When I run a query, CPU usage goes to 100% for 2 or 3 seconds and then drops back under 5%. The problem I can see in htop is InfluxDB's memory usage at 97-98%.
How many unique series are you writing to the database?
How do I get the number of series? I'm new to InfluxDB.
No need to be sorry. :) The query …
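For anyone else landing here with the same question: the exact query from this exchange wasn't captured, but as a sketch, series can be listed in InfluxQL with:

```sql
-- list every series in the current database
SHOW SERIES

-- later 1.x releases also support counting them directly
SHOW SERIES CARDINALITY
```

On older versions, counting the lines returned by SHOW SERIES gives the series count.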
@mjdesa, I use … and I get …
@tahmasebi No, you should be well within your limits. We've had problems with …
Sure, I'll test this version of InfluxDB and I'll let you know tomorrow. (There's a time difference. :) )
Hi @mjdesa, the database from the previous version still exists, but when I execute a query on the old db it returns an error. I took a screenshot of it: …
Try reducing cache-snapshot-memory-size and cache-snapshot-write-cold-duration in the configuration.
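For reference, both settings live in the [data] section of influxdb.conf. A sketch with illustrative values (not tuned recommendations):

```toml
[data]
  # snapshot the in-memory cache to a TSM file once it grows past this size
  cache-snapshot-memory-size = 26214400        # bytes (~25 MB)
  # also snapshot the cache if the shard has received no writes for this long
  cache-snapshot-write-cold-duration = "10m"
  # reject new writes once the cache exceeds this hard limit
  cache-max-memory-size = 524288000            # bytes (~500 MB)
```

Lower values flush to disk more often, trading some write throughput for a smaller memory footprint.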
I resolved it: …
@tahmasebi Are you still experiencing this problem?
I am brand new to InfluxDB and ran into this problem. I started by inserting 6928 price points for one stock. An example data point looks like this: …
Then I ran a query like this: …
This killed my laptop by consuming all 16 GB of RAM.
I'm building a stock system similar to @adilbaig's and running into the same problem with InfluxDB 0.13.0. When I tried to insert about 4k trading points all at once using a batch insert via the HTTP API, InfluxDB took all of my 8 GB of memory plus all the swap space, which made the whole system deadly slow. Is there a way to limit the total memory usage? My OS is Ubuntu 16.04 LTS.
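A common workaround for this kind of batch-insert spike (a sketch, assuming the 1.x HTTP /write endpoint and an illustrative database name) is to split one huge batch into smaller chunks so the server never has to buffer everything at once:

```python
import urllib.parse
import urllib.request

def chunks(points, size=1000):
    """Yield successive fixed-size batches from a list of line-protocol strings."""
    for i in range(0, len(points), size):
        yield points[i:i + size]

def write_in_batches(points, db="stocks", host="http://localhost:8086", size=1000):
    """POST each chunk to /write instead of sending one giant request."""
    url = host + "/write?" + urllib.parse.urlencode({"db": db})
    for batch in chunks(points, size):
        body = "\n".join(batch).encode("utf-8")
        req = urllib.request.Request(url, data=body, method="POST")
        with urllib.request.urlopen(req) as resp:
            assert resp.status == 204  # InfluxDB returns 204 No Content on success
```

Each point here is one line-protocol string, e.g. "trades,symbol=AAPL price=101.5 1467000000000000000". A few thousand points per request is usually a reasonable starting chunk size.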
@adilbaig In the example point you listed: …
@dy1901 What does your schema look like?
What is the current status of this question? If someone knows, please tell me. Thank you.
@DavidSoong128 what specifically would you like to know?
@mjdesa Thank you for your reply.
@DavidSoong128 …
This test code is copied from https://github.com/influxdata/influxdb-java, and will …
I'm having a similar problem: memory usage grows until it consumes the entire machine and never really drops back. There are three main streams of incoming data, written with the InfluxDB Go client to different databases: …
Occasionally a Grafana client will issue 5-20 queries to draw charts, but it isn't a constant request rate. We've recently started consuming considerable swap space. The node has 128 GB of RAM (usually 80-98% used) and has been up for 30 days.
I'm having the exact same issues as everyone here. During inserts my memory usage jumps over 8 GB, and then Influx throws an error about a memory allocation failure (my VM is limited to ~8 GB of RAM). I've tried the settings proposed by @lpc921 in his post here: …
It didn't change a thing. Guys, seriously, this issue has been open for half a year. For me this is a critical issue which rules out using Influx in production environments. EDIT: …
@carbolymer It looks like you have sparse data (stock prices), which ends up creating hundreds of small shards. In your docker sample, I'd recommend increasing the shard group duration on your … For example, running the following before writing data: …
will change the shard group duration to ~10 years, which should reduce the number of shards from ~1500 to 4. I would also suggest setting …
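The statement itself was not captured when this thread was archived. As a sketch (the database and retention policy names are illustrative), the 1.x InfluxQL form is roughly:

```sql
-- InfluxQL durations have no "y" unit, so ~10 years is written in weeks
ALTER RETENTION POLICY "autogen" ON "stocks" SHARD DURATION 520w
```

Note that existing shards keep their old duration; the new setting only applies to shard groups created afterwards, which is why it should be run before writing data.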
@carbolymer My server has 48 GB of RAM and 24 high-end CPUs, but it's still not enough for InfluxDB (with a 30 GB RAM limit) with just several tens of thousands of daily series (several GB of data). Some queries end with a timeout, others end because Influx reaches the memory limit and crashes. I'm beginning to deeply regret that choice... I have no idea what to do now.
@adampl What kind of data are you writing, and what is writing it? I suspect you have an issue with your schema, but grabbing some profiles when memory is high would help to diagnose: …
Also, can you attach the output of the following: …
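The requested commands were not captured in this thread. For anyone debugging a similar case on 1.x, the server exposes Go pprof endpoints over HTTP, and runtime diagnostics are available from the CLI (host and port here are assumptions):

```sh
# heap profile: which allocation sites are currently holding memory
curl -o heap.txt "http://localhost:8086/debug/pprof/heap?debug=1"
# goroutine dump: what the server is busy doing
curl -o goroutines.txt "http://localhost:8086/debug/pprof/goroutine?debug=1"

# version, build, and runtime diagnostics from inside influx
influx -execute "SHOW DIAGNOSTICS"
influx -execute "SHOW STATS"
```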
@jwilder In my case the data is very sparse (just one point a day in each series), so I've dropped the entire database in order to try that trick with the shard duration, and now the data is being loaded again. If the timeouts and crashes don't disappear, I'll provide you with the diagnostics.
@jwilder Increasing the shard duration indeed helped (set to 1000w): now I don't get OOMs, as it takes "only" 5 GB and doesn't go up. Still, requests covering all of a measurement's data take long to complete (15 seconds), much longer than simply reading all of the measurement's rows from a text file, filtering them by tags, and aggregating in Python in a single process (2 seconds).
@adampl What version are you running?
Version 0.13 on CentOS 7.
@adampl I'd suggest upgrading to the 1.0beta3 release or the latest nightly. There have been many query optimizations since 0.13.
OK, I'll give it a try. Meanwhile, please look into #6994, which is a very serious functional bug IMHO.
@jwilder Many thanks! It helped.
@carbolymer Have you solved the problem? Please share your solution. Thank you.
@DavidSoong128, yes. This worked for me: #5440 (comment). You can find a working configuration in the latest commit on master at https://github.com/carbolymer/influxdb-large-memory-proof
@carbolymer OK, thank you for your reply; I will run some tests. If I have any questions, I'll ask again.
Hi,
I installed InfluxDB on an Ubuntu server with 4 GB of memory and used Python's requests module to write 10M points into the db.
The Python script successfully inserts 20k points per second, but memory usage keeps growing until InfluxDB is using 97% of RAM.
After that I can't even run a simple query like "select * from srcipm limit 1".
Even after the write process has finished, InfluxDB doesn't release the memory.
Details:
64bit Ubuntu Server 15.10, 1 Core CPU, 4G RAM
InfluxDB 0.9.6
Write query in Python: …
Where is the problem? Should I configure something, or is my query wrong?
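The original write script was not captured when this issue was archived. A minimal sketch of such a loop, assuming the HTTP line-protocol endpoint, the srcipm measurement from the query above, and illustrative tag/field names:

```python
import urllib.parse
import urllib.request

def to_line(measurement, tags, fields, ts_ns):
    """Format one point in InfluxDB line protocol: m,tag=v field=v timestamp."""
    tag_str = "".join(",{}={}".format(k, v) for k, v in sorted(tags.items()))
    field_str = ",".join("{}={}".format(k, v) for k, v in sorted(fields.items()))
    return "{}{} {} {}".format(measurement, tag_str, field_str, ts_ns)

def write_batch(lines, db="mydb", host="http://localhost:8086"):
    """POST a batch of line-protocol points to the /write endpoint."""
    url = host + "/write?" + urllib.parse.urlencode({"db": db})
    data = "\n".join(lines).encode("utf-8")
    req = urllib.request.Request(url, data=data, method="POST")
    urllib.request.urlopen(req).close()
```

Sending points in batches of a few thousand per request, rather than one point per request or millions at once, is usually the sweet spot for both throughput and memory.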