Elasticsearch container will crash under default Docker for Mac configuration #6
Comments
Hi there! I can't confirm this; I've just run this compose file on two Macs with Docker limited to 1 GB of RAM, and I haven't seen the mentioned error.
I'll provide screenshots tomorrow morning, in about 12 hours. BTW, I'm not arguing, just saying that I can't reproduce it on two machines :)
Completely understand. What we'll find is a difference in our setups that will point to the true cause.
This post helped a ton. I was running some other things in Docker, like Jenkins and MySQL, in combination with Elasticsearch and Kibana. Pretty sure I hit the limit. I upped my memory to 4 GB and have no issues now. Thanks!
@pjrola thank you so much, that really saved me :) I also updated the memory on the Docker engine (Docker menu -> Preferences) and that did the trick!
Thanks a bunch! Helped me save so much time!
I have seen an issue where Elasticsearch crashes when I start Logstash. I saw an OutOfMemoryError once, so I adjusted the heap size (1536m) and the Docker limit (3g). Now when I start Logstash, Elasticsearch just dies abruptly; there is nothing in the log at all. Oddly, a few times I've gotten it to work (the Logstash conf file is very simple, it just sends stdin to Elasticsearch), so I can't fathom what's happening. I also see that Logstash hogs a lot of the CPU while it's starting.
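For reference, a minimal Logstash pipeline like the one described (stdin to Elasticsearch) would look roughly like this sketch; the host name is an assumption based on a typical compose service name, not taken from the original comment:

```conf
# Hypothetical minimal Logstash pipeline: read events from stdin
# and index each one into a local Elasticsearch node.
input {
  stdin { }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]  # assumed compose service name
  }
}
```

Even a pipeline this small starts a full JVM, which matches the observed CPU spike while Logstash boots.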
Update the memory limit to 4 GB in the Advanced settings tab.
This has not worked for me. I'm constantly getting crashes. |
This post helped me too. I am running Cassandra, and the other node kept exiting. I increased the RAM and it worked.
This post helped me a lot with a completely different project. I didn't know about the memory problem. Saved me a ton of time! Thank you, @dustinrue!
I've just been troubleshooting why an Elasticsearch-based project wouldn't work, and this was exactly the issue! Thanks for highlighting this.
I upped my memory to be
@mystredesign the point is to reduce the amount of memory the JVM is allowed to use relative to the amount of memory Docker is allowed to use. If you reduce the JVM's memory allowance and still have issues, then it will come down to the number of docs you are trying to index.
The default Docker for Mac (and presumably Windows as well) configuration limits Docker to 2 GB of memory. The default heap size for Elasticsearch is also 2 GB, which means the Elasticsearch container will immediately exit with a potentially confusing "error 137" (it was killed for running out of memory) as soon as it is interacted with, particularly via Cerebro.
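To confirm that a container exiting with code 137 was actually OOM-killed, Docker records this in the container state; the container name below is an assumption, substitute your own:

```
# Check whether Docker killed the container for exceeding its memory limit.
# "elasticsearch" is an assumed container name.
docker inspect --format '{{.State.ExitCode}} {{.State.OOMKilled}}' elasticsearch
```

An OOM-killed container reports exit code 137 with `OOMKilled` set to true, which distinguishes this failure from an Elasticsearch crash that would leave evidence in the logs.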
My suggestion is to either note in the readme that you must configure Docker for Mac/Windows to allow more than 2 GB of memory, or limit the Elasticsearch heap size with:
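As a sketch, the heap can be capped through the `ES_JAVA_OPTS` environment variable in the compose file; the service name, image placeholder, and 512m heap value here are assumptions for illustration, not values from the original report:

```yaml
# Hypothetical docker-compose service entry capping the Elasticsearch heap.
# Pick a heap (-Xms/-Xmx) comfortably below the memory Docker may use.
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:<version>
  environment:
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
```

Setting `-Xms` and `-Xmx` to the same value avoids heap resizing at runtime and keeps the JVM well inside the container's memory limit.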
Or add it to the existing config file.