Elasticsearch stops after a while #57

Closed
ChengLong opened this issue Jul 19, 2016 · 2 comments

@ChengLong

Hi,

I'm running sebp/elk:es233_l232_k451 on an EC2 micro instance. I've noticed that after the container has been running for a while (a few hours), the Elasticsearch service inside the container stops.

Here is /var/log/elasticsearch/elasticsearch.log:

[2016-07-18 16:07:45,242][INFO ][node                     ] [Chthon] version[2.3.3], pid[34], build[218bdf1/2016-05-17T15:40:04Z]
[2016-07-18 16:07:45,242][INFO ][node                     ] [Chthon] initializing ...
[2016-07-18 16:07:46,005][INFO ][plugins                  ] [Chthon] modules [lang-groovy, reindex, lang-expression], plugins [], sites []
[2016-07-18 16:07:46,046][INFO ][env                      ] [Chthon] using [1] data paths, mounts [[/var/lib/elasticsearch (/dev/disk/by-uuid/23a4139b-b0c3-4cb9-aa4c-620243691435)]], net usable_space [4.8gb], net total_space [7.7gb], spins? [possibly], types [ext4]
[2016-07-18 16:07:46,046][INFO ][env                      ] [Chthon] heap size [1015.6mb], compressed ordinary object pointers [true]
[2016-07-18 16:07:46,046][WARN ][env                      ] [Chthon] max file descriptors [65535] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-07-18 16:07:48,545][INFO ][node                     ] [Chthon] initialized
[2016-07-18 16:07:48,545][INFO ][node                     ] [Chthon] starting ...
[2016-07-18 16:07:48,684][INFO ][transport                ] [Chthon] publish_address {172.17.0.2:9300}, bound_addresses {[::]:9300}
[2016-07-18 16:07:48,689][INFO ][discovery                ] [Chthon] elasticsearch/jdVkwiCJQTanDdNhCFYLew
[2016-07-18 16:07:51,738][INFO ][cluster.service          ] [Chthon] new_master {Chthon}{jdVkwiCJQTanDdNhCFYLew}{172.17.0.2}{172.17.0.2:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-07-18 16:07:51,757][INFO ][http                     ] [Chthon] publish_address {172.17.0.2:9200}, bound_addresses {[::]:9200}
[2016-07-18 16:07:51,757][INFO ][node                     ] [Chthon] started
[2016-07-18 16:07:51,852][INFO ][gateway                  ] [Chthon] recovered [5] indices into cluster_state
[2016-07-18 16:07:55,029][INFO ][cluster.routing.allocation] [Chthon] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[logstash-2016.07.04][3], [logstash-2016.07.04][3]] ...]).
[2016-07-18 16:08:57,312][INFO ][cluster.metadata         ] [Chthon] [filebeat-2016.07.18] creating index, cause [auto(bulk api)], templates [filebeat], shards [5]/[1], mappings [_default_, access]
[2016-07-18 16:08:57,568][INFO ][cluster.routing.allocation] [Chthon] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[filebeat-2016.07.18][4], [filebeat-2016.07.18][4]] ...]).
[2016-07-18 16:08:57,796][INFO ][cluster.metadata         ] [Chthon] [filebeat-2016.07.18] update_mapping [access]
[2016-07-18 16:08:59,268][INFO ][cluster.metadata         ] [Chthon] [filebeat-2016.07.18] create_mapping [passenger]

But both the Kibana and Logstash services are still running. I have no idea why this is happening. Please help.

@spujadas
Owner

spujadas commented Jul 19, 2016

Hi,

My first thought is that the container is running out of memory and that Elasticsearch is the first process to get killed.
To confirm this, docker exec into the container and, once Elasticsearch stops, check the system logs to see whether the OOM killer did indeed kill Elasticsearch (see http://stackoverflow.com/a/624868/2654646 for more details on how to proceed).
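
For example, something along these lines should surface OOM killer activity (a minimal sketch: the container name elk is a placeholder, and the exact log file paths depend on the distribution):

# attach a shell to the running ELK container (assumes it was started with --name elk)
docker exec -it elk bash

# inside the container: the kernel is shared with the host, so the kernel ring
# buffer will contain the OOM killer messages if Elasticsearch was killed
dmesg | grep -iE 'killed process|out of memory|oom-killer'

# on some setups the same messages also end up in syslog/kern.log
grep -i 'killed process' /var/log/syslog /var/log/kern.log 2>/dev/null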

I'd suggest that you try running the container on a larger EC2 instance, as 1GB of RAM might be insufficient, and see if that works better.
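
A possible stopgap on the current 1GB instance would be to cap the JVM heap sizes (the log above shows Elasticsearch alone using a ~1GB heap). This is only a sketch: it assumes that this image tag honours the ES_HEAP_SIZE and LS_HEAP_SIZE environment variables (check the image documentation for your tag), and the container name, published ports and heap values are all placeholders to adjust to your setup:

# hypothetical example: recreate the container with smaller heaps so that
# Elasticsearch, Logstash and Kibana fit into roughly 1GB of RAM
docker run -d --name elk \
  -p 5601:5601 -p 9200:9200 -p 5044:5044 \
  -e ES_HEAP_SIZE="512m" \
  -e LS_HEAP_SIZE="256m" \
  sebp/elk:es233_l232_k451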

@ChengLong
Author

thanks
