elasticsearch "curl: (52) Empty reply from server" on port 9200, Unable to revive connection #123

Closed
GabLeRoux opened this issue Mar 22, 2017 · 11 comments

GabLeRoux commented Mar 22, 2017

Hi there,
I was playing around trying out ELK. I like how well documented this project is, so I gave it a try, but I can't seem to get Kibana to connect to Elasticsearch.

docker-compose.yml

elk:
  image: sebp/elk
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5044:5044"
docker-compose start
Creating elk_elk_1
Attaching to elk_elk_1
elk_1  |  * Starting periodic command scheduler cron
elk_1  |    ...done.
elk_1  |  * Starting Elasticsearch Server
elk_1  |    ...done.
elk_1  | waiting for Elasticsearch to be up (1/30)
elk_1  | waiting for Elasticsearch to be up (2/30)
elk_1  | waiting for Elasticsearch to be up (3/30)
elk_1  | waiting for Elasticsearch to be up (4/30)
elk_1  | waiting for Elasticsearch to be up (5/30)
elk_1  | waiting for Elasticsearch to be up (6/30)
elk_1  | waiting for Elasticsearch to be up (7/30)
elk_1  | waiting for Elasticsearch to be up (8/30)
elk_1  | waiting for Elasticsearch to be up (9/30)
elk_1  | waiting for Elasticsearch to be up (10/30)
elk_1  | waiting for Elasticsearch to be up (11/30)
elk_1  | Waiting for Elasticsearch cluster to respond (1/30)
elk_1  | logstash started.
elk_1  |  * Starting Kibana5
elk_1  |    ...done.
elk_1  | ==> /var/log/elasticsearch/elasticsearch.log <==
elk_1  | [2017-03-22T14:02:00,244][INFO ][o.e.n.Node               ] initialized
elk_1  | [2017-03-22T14:02:00,244][INFO ][o.e.n.Node               ] [FbTWY_Y] starting ...
elk_1  | [2017-03-22T14:02:00,440][WARN ][i.n.u.i.MacAddressUtil   ] Failed to find a usable hardware address from the network interfaces; using random bytes: df:c0:09:3f:36:1d:d2:35
elk_1  | [2017-03-22T14:02:00,561][INFO ][o.e.t.TransportService   ] [FbTWY_Y] publish_address {172.17.0.3:9300}, bound_addresses {[::]:9300}
elk_1  | [2017-03-22T14:02:00,569][INFO ][o.e.b.BootstrapChecks    ] [FbTWY_Y] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
elk_1  | [2017-03-22T14:02:01,260][INFO ][o.e.m.j.JvmGcMonitorService] [FbTWY_Y] [gc][1] overhead, spent [389ms] collecting in the last [1s]
elk_1  | [2017-03-22T14:02:03,665][INFO ][o.e.c.s.ClusterService   ] [FbTWY_Y] new_master {FbTWY_Y}{FbTWY_YSQVaJIdwTotWM4w}{Bic4-h47RaWjLMWzrlxIYA}{172.17.0.3}{172.17.0.3:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
elk_1  | [2017-03-22T14:02:03,704][INFO ][o.e.h.HttpServer         ] [FbTWY_Y] publish_address {172.17.0.3:9200}, bound_addresses {[::]:9200}
elk_1  | [2017-03-22T14:02:03,704][INFO ][o.e.n.Node               ] [FbTWY_Y] started
elk_1  | [2017-03-22T14:02:03,930][INFO ][o.e.g.GatewayService     ] [FbTWY_Y] recovered [0] indices into cluster_state
elk_1  |
elk_1  | ==> /var/log/logstash/logstash-plain.log <==
elk_1  |
elk_1  | ==> /var/log/kibana/kibana5.log <==
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:37Z","tags":["status","plugin:kibana@5.2.1","info"],"pid":195,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:37Z","tags":["status","plugin:elasticsearch@5.2.1","info"],"pid":195,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:37Z","tags":["error","elasticsearch","admin"],"pid":195,"message":"Request error, retrying\nHEAD http://localhost:9200/ => connect ECONNREFUSED 127.0.0.1:9200"}
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:37Z","tags":["status","plugin:console@5.2.1","info"],"pid":195,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:38Z","tags":["warning","elasticsearch","admin"],"pid":195,"message":"Unable to revive connection: http://localhost:9200/"}
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:38Z","tags":["warning","elasticsearch","admin"],"pid":195,"message":"No living connections"}
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:38Z","tags":["status","plugin:elasticsearch@5.2.1","error"],"pid":195,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://localhost:9200.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:38Z","tags":["status","plugin:timelion@5.2.1","info"],"pid":195,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:38Z","tags":["listening","info"],"pid":195,"message":"Server running at http://0.0.0.0:5601"}
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:38Z","tags":["status","ui settings","error"],"pid":195,"state":"red","message":"Status changed from uninitialized to red - Elasticsearch plugin is red","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:40Z","tags":["warning","elasticsearch","admin"],"pid":195,"message":"Unable to revive connection: http://localhost:9200/"}
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:40Z","tags":["warning","elasticsearch","admin"],"pid":195,"message":"No living connections"}
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:43Z","tags":["warning","elasticsearch","admin"],"pid":195,"message":"Unable to revive connection: http://localhost:9200/"}
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:43Z","tags":["warning","elasticsearch","admin"],"pid":195,"message":"No living connections"}
elk_1  |
elk_1  | ==> /var/log/logstash/logstash-plain.log <==
elk_1  | [2017-03-22T14:02:45,754][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/opt/logstash/data/queue"}
elk_1  | [2017-03-22T14:02:45,781][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"30c88686-44c5-40c8-82be-a4be5d8ca34b", :path=>"/opt/logstash/data/uuid"}
elk_1  |
elk_1  | ==> /var/log/kibana/kibana5.log <==
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:45Z","tags":["warning","elasticsearch","admin"],"pid":195,"message":"Unable to revive connection: http://localhost:9200/"}
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:45Z","tags":["warning","elasticsearch","admin"],"pid":195,"message":"No living connections"}
elk_1  |
elk_1  | ==> /var/log/logstash/logstash-plain.log <==
elk_1  | [2017-03-22T14:02:47,046][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
elk_1  | [2017-03-22T14:02:47,567][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
elk_1  | [2017-03-22T14:02:47,568][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
elk_1  | [2017-03-22T14:02:47,769][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0xb88b78 URL:http://localhost:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
elk_1  | [2017-03-22T14:02:47,771][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x630efdb0 URL://localhost>]}
elk_1  | [2017-03-22T14:02:47,936][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
elk_1  | [2017-03-22T14:02:47,948][INFO ][logstash.pipeline        ] Pipeline main started
elk_1  | [2017-03-22T14:02:48,006][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
elk_1  |
elk_1  | ==> /var/log/kibana/kibana5.log <==
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:48Z","tags":["warning","elasticsearch","admin"],"pid":195,"message":"Unable to revive connection: http://localhost:9200/"}
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:48Z","tags":["warning","elasticsearch","admin"],"pid":195,"message":"No living connections"}
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:50Z","tags":["warning","elasticsearch","admin"],"pid":195,"message":"Unable to revive connection: http://localhost:9200/"}
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:50Z","tags":["warning","elasticsearch","admin"],"pid":195,"message":"No living connections"}
elk_1  |
elk_1  | ==> /var/log/logstash/logstash-plain.log <==
elk_1  | [2017-03-22T14:02:52,778][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
elk_1  | [2017-03-22T14:02:52,790][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0xb88b78 URL:http://localhost:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
elk_1  |
elk_1  | ==> /var/log/kibana/kibana5.log <==
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:53Z","tags":["warning","elasticsearch","admin"],"pid":195,"message":"Unable to revive connection: http://localhost:9200/"}
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:53Z","tags":["warning","elasticsearch","admin"],"pid":195,"message":"No living connections"}
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:55Z","tags":["warning","elasticsearch","admin"],"pid":195,"message":"Unable to revive connection: http://localhost:9200/"}
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:55Z","tags":["warning","elasticsearch","admin"],"pid":195,"message":"No living connections"}
elk_1  |
elk_1  | ==> /var/log/logstash/logstash-plain.log <==
elk_1  | [2017-03-22T14:02:57,793][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
elk_1  | [2017-03-22T14:02:57,798][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0xb88b78 URL:http://localhost:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
elk_1  |
elk_1  | ==> /var/log/kibana/kibana5.log <==
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:58Z","tags":["warning","elasticsearch","admin"],"pid":195,"message":"Unable to revive connection: http://localhost:9200/"}
elk_1  | {"type":"log","@timestamp":"2017-03-22T14:02:58Z","tags":["warning","elasticsearch","admin"],"pid":195,"message":"No living connections"}

I haven't changed any configuration.

docker-compose run elk cat /etc/elasticsearch/elasticsearch.yml | grep network.host
# network.host: 192.168.0.1
network.host: 0.0.0.0

Everything seems right; /var/log/elasticsearch/elasticsearch.log says it started.

From inside the Docker container, I get a connection refused on port 9200:

docker-compose run elk curl -XGET 127.0.0.1:9200
curl: (7) Failed to connect to 127.0.0.1 port 9200: Connection refused
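
(Side note, as a sketch rather than anything from the image's docs: docker-compose run spins up a new one-off container rather than attaching to the already-running elk_elk_1, so the curl above isn't probing the container whose logs are shown. Probing the running container directly, using the container name from docker ps below, would look something like this:)

docker exec elk_elk_1 curl -XGET http://localhost:9200     # hits ES inside the running container
docker exec elk_elk_1 ps aux | grep -i elasticsearch       # is the ES process still alive? (assumes ps is installed in the image)
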
docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                                              NAMES
036f97fbd4a3        sebp/elk            "/usr/local/bin/st..."   15 minutes ago      Up 15 minutes       0.0.0.0:5044->5044/tcp, 0.0.0.0:5601->5601/tcp, 0.0.0.0:9200->9200/tcp, 9300/tcp   elk_elk_1
docker-compose ps
  Name              Command           State                                        Ports
------------------------------------------------------------------------------------------------------------------------------
elk_elk_1   /usr/local/bin/start.sh   Up      0.0.0.0:5044->5044/tcp, 0.0.0.0:5601->5601/tcp, 0.0.0.0:9200->9200/tcp, 9300/tcp

(Screenshot: Kibana does start, though.)

I'm not sure what I did wrong.
I'm running Docker on macOS Sierra 10.12.3:

docker --version
Docker version 17.03.0-ce, build 60ccb22
docker-compose --version
docker-compose version 1.11.2, build dfed245

I also tried run instead of start and it still fails:

docker-compose run
spujadas (Owner) commented

Very strange indeed. Could you try to curl at 172.17.0.3:9200 using docker run? It's possible that for some reason Elasticsearch can't/won't/didn't bind to the loopback interface.
(Oh, and unfortunately I don't have access to a Mac so I can't reproduce on my side, but I do know that users have successfully used this image on Macs.)


ghost commented Mar 23, 2017

Same problem on Mac. The interesting thing is that if I start only Kibana and Elasticsearch, everything is fine.

GabLeRoux (Author) commented Mar 24, 2017

@spujadas

docker-compose run elk curl -XGET 172.17.0.3:9200
curl: (7) Failed to connect to 172.17.0.3 port 9200: Connection refused

spujadas (Owner) commented

@GabLeRoux OK thanks.
Is Elasticsearch actually still running in the container? (Looking at #123 (comment) it could be silently killed if running out of memory – at least on Linux it would be killed by the OOM killer, see e.g. #17 and #57)
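
(For what it's worth, a couple of rough ways to check this, assuming a Linux host or, on Docker for Mac, the Docker VM; these are sketches, not commands from the image's documentation:)

docker inspect --format '{{.State.OOMKilled}}' elk_elk_1                 # true only if the whole container was OOM-killed
docker exec elk_elk_1 dmesg | grep -iE 'out of memory|killed process'    # kernel OOM-killer messages; reading dmesg may require privileges
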


geertvanheusden commented Mar 24, 2017

@GabLeRoux & @yzhang-myob, I had the same issue with Docker for Mac yesterday. Apparently, after installing one of the updates, the memory settings were reset and Docker was only allocating 2 GB of RAM. After increasing it to 6 GB, it started working again.

Unfortunately there is no way to see this in the logs, or am I wrong, @spujadas?

spujadas (Owner) commented

@geertvanheusden That's correct, the logs wouldn't show ES being killed by the OOM killer (a workaround that isn't included in the container: http://stackoverflow.com/a/624868/2654646, see #57).
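
(The idea behind that kind of workaround is essentially to watch the kernel log for OOM-killer activity; a rough sketch, assuming a dmesg that supports follow mode and is readable where it runs:)

# Rough sketch of an OOM watcher, not part of the image:
dmesg -w | grep --line-buffered -iE 'out of memory|oom-killer|killed process' >> /var/log/oom-watch.log &
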

GabLeRoux (Author) commented Mar 24, 2017

That totally makes sense.

I tried a few of the solutions from http://stackoverflow.com/a/624868/2654646 to check whether it gets killed by the OOM killer, but I couldn't find a "process killed" message anywhere.

I used docker stats elk_elk_1 while it was starting and indeed, from the moment things start failing, I see a significant drop in memory usage.

Starting:

CONTAINER           CPU %               MEM USAGE / LIMIT       MEM %               NET I/O             BLOCK I/O           PIDS
elk_elk_1           22.01%              1.796 GiB / 1.952 GiB   92.03%              718 B / 718 B       3.63 GB / 191 MB    81

Then it fails:

CONTAINER           CPU %               MEM USAGE / LIMIT       MEM %               NET I/O             BLOCK I/O           PIDS
elk_elk_1           19.26%              609.7 MiB / 1.952 GiB   30.50%              2.61 kB / 1.42 kB   14.9 GB / 217 MB    57

Here is where I found how to increase Docker's memory on macOS:
http://stackoverflow.com/a/39720010/1092815

I also moved from 2 GB to 6 GB and it worked :)

elk_1  | [2017-03-24T16:25:56,395][INFO ][logstash.pipeline        ] Pipeline main started
elk_1  | [2017-03-24T16:25:56,439][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

The stats when everything started ok:

CONTAINER           CPU %               MEM USAGE / LIMIT       MEM %               NET I/O             BLOCK I/O           PIDS
elk_elk_1           7.77%               2.917 GiB / 5.818 GiB   50.14%              1.64 kB / 788 B     160 MB / 2.19 MB    111
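
(For anyone hitting this, a minimal sketch of capping the JVM heaps in the same v1 compose format, assuming the image honours the ES_HEAP_SIZE and LS_HEAP_SIZE environment variables (check the image's documentation) and that Docker for Mac has enough memory allocated:)

elk:
  image: sebp/elk
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5044:5044"
  environment:
    # Hypothetical values; tune them to fit within the memory allocated to Docker
    - ES_HEAP_SIZE=2g
    - LS_HEAP_SIZE=1g
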

We would indeed need a way to see OOM killer activity in the logs, and the memory requirements should probably be documented somewhere in the docs or the README.
Thanks for your help 👍

spujadas (Owner) commented

Thanks for the comprehensive feedback. I'll work as much of it as I can into the docs (these memory requirements keep growing and growing!), and into the image if I can find a way of detecting when ES gets killed.
Leaving this open in the meantime.


adamhp commented Aug 22, 2017

@spujadas I am still experiencing this behavior. I downloaded the standard sebp/elk image, created a docker-compose.yml according to http://elk-docker.readthedocs.io/#running-with-docker-compose, and I am still encountering:

elk_1  | {"type":"log","@timestamp":"2017-08-22T12:13:28Z","tags":["warning","elasticsearch","admin"],"pid":198,"message":"Unable to revive connection: http://localhost:9200/"}
elk_1  | {"type":"log","@timestamp":"2017-08-22T12:13:28Z","tags":["warning","elasticsearch","admin"],"pid":198,"message":"No living connections"}

After a few minutes, I noticed some brief changes in the logs:

elk_1  | {"type":"log","@timestamp":"2017-08-22T12:07:30Z","tags":["warning","elasticsearch","admin"],"pid":196,"message":"No living connections"}
elk_1  | {"type":"log","@timestamp":"2017-08-22T12:10:42Z","tags":["status","plugin:kibana@5.5.1","info"],"pid":198,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1  | {"type":"log","@timestamp":"2017-08-22T12:10:42Z","tags":["status","plugin:elasticsearch@5.5.1","info"],"pid":198,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1  | {"type":"log","@timestamp":"2017-08-22T12:10:42Z","tags":["status","plugin:console@5.5.1","info"],"pid":198,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1  | {"type":"log","@timestamp":"2017-08-22T12:10:43Z","tags":["status","plugin:metrics@5.5.1","info"],"pid":198,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1  | {"type":"log","@timestamp":"2017-08-22T12:10:43Z","tags":["status","plugin:timelion@5.5.1","info"],"pid":198,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1  | {"type":"log","@timestamp":"2017-08-22T12:10:43Z","tags":["listening","info"],"pid":198,"message":"Server running at http://0.0.0.0:5601"}
elk_1  | {"type":"log","@timestamp":"2017-08-22T12:10:43Z","tags":["status","ui settings","info"],"pid":198,"state":"yellow","message":"Status changed from uninitialized to yellow - Elasticsearch plugin is yellow","prevState":"uninitialized","prevMsg":"uninitialized"}
elk_1  | {"type":"log","@timestamp":"2017-08-22T12:10:48Z","tags":["status","plugin:elasticsearch@5.5.1","info"],"pid":198,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elk_1  |
elk_1  | ==> /var/log/elasticsearch/elasticsearch.log <==
elk_1  | [2017-08-22T12:10:49,277][INFO ][o.e.c.m.MetaDataCreateIndexService] [oVT8f59] [.kibana] creating index, cause [api], templates [], shards [1]/[1], mappings [_default_, index-pattern, server, visualization, search, timelion-sheet, config, dashboard, url]
elk_1  |
elk_1  | ==> /var/log/kibana/kibana5.log <==
elk_1  | {"type":"log","@timestamp":"2017-08-22T12:10:49Z","tags":["error","elasticsearch","admin"],"pid":198,"message":"Request error, retrying\nPUT http://localhost:9200/.kibana => socket hang up"}
elk_1  | {"type":"log","@timestamp":"2017-08-22T12:10:49Z","tags":["warning","elasticsearch","admin"],"pid":198,"message":"Unable to revive connection: http://localhost:9200/"}
elk_1  | {"type":"log","@timestamp":"2017-08-22T12:10:49Z","tags":["warning","elasticsearch","admin"],"pid":198,"message":"No living connections"}

It appears that Elasticsearch is starting just fine, but the connection is refused.

elk_1  |  * Starting Elasticsearch Server
elk_1  |    ...done.
elk_1  | waiting for Elasticsearch to be up (1/30)
elk_1  | waiting for Elasticsearch to be up (2/30)
elk_1  | waiting for Elasticsearch to be up (3/30)
elk_1  | waiting for Elasticsearch to be up (4/30)
elk_1  | waiting for Elasticsearch to be up (5/30)
elk_1  | waiting for Elasticsearch to be up (6/30)
elk_1  | waiting for Elasticsearch to be up (7/30)
elk_1  | waiting for Elasticsearch to be up (8/30)
elk_1  | waiting for Elasticsearch to be up (9/30)
elk_1  | waiting for Elasticsearch to be up (10/30)
elk_1  | waiting for Elasticsearch to be up (11/30)
elk_1  | Waiting for Elasticsearch cluster to respond (1/30)
elk_1  | logstash started.
elk_1  |  * Starting Kibana5
elk_1  |    ...done.
elk_1  | ==> /var/log/elasticsearch/elasticsearch.log <==
elk_1  | [2017-08-22T12:22:47,133][INFO ][o.e.p.PluginsService     ] [oVT8f59] no plugins loaded
elk_1  | [2017-08-22T12:22:51,384][INFO ][o.e.d.DiscoveryModule    ] [oVT8f59] using discovery type [zen]
elk_1  | [2017-08-22T12:22:52,664][INFO ][o.e.n.Node               ] initialized
elk_1  | [2017-08-22T12:22:52,665][INFO ][o.e.n.Node               ] [oVT8f59] starting ...
elk_1  | [2017-08-22T12:22:52,970][INFO ][o.e.t.TransportService   ] [oVT8f59] publish_address {172.17.0.2:9300}, bound_addresses {0.0.0.0:9300}
elk_1  | [2017-08-22T12:22:52,991][INFO ][o.e.b.BootstrapChecks    ] [oVT8f59] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
elk_1  | [2017-08-22T12:22:56,089][INFO ][o.e.c.s.ClusterService   ] [oVT8f59] new_master {oVT8f59}{oVT8f59SSUWvb1v3oadmeA}{EYv5PG93SLGFD6YjoEnHIg}{172.17.0.2}{172.17.0.2:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
elk_1  | [2017-08-22T12:22:56,143][INFO ][o.e.h.n.Netty4HttpServerTransport] [oVT8f59] publish_address {172.17.0.2:9200}, bound_addresses {0.0.0.0:9200}
elk_1  | [2017-08-22T12:22:56,144][INFO ][o.e.n.Node               ] [oVT8f59] started
elk_1  | [2017-08-22T12:22:56,323][INFO ][o.e.g.GatewayService     ] [oVT8f59] recovered [0] indices into cluster_state
elk_1  |

A curl -XGET localhost:9200 results in:

curl: (52) Empty reply from server

Here is my docker-compose.yml:

elk:
  image: sebp/elk
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5044:5044"

I also set MAX_MAP_COUNT to 262144.
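
(For reference, vm.max_map_count is typically raised like this on a Linux host; on Docker for Mac the setting lives inside Docker's own VM, so it may already be handled there. A sketch:)

sudo sysctl -w vm.max_map_count=262144    # one-off, root required
# to persist across reboots, add this line to /etc/sysctl.conf:
# vm.max_map_count=262144
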

Any ideas?

spujadas (Owner) commented

@ahpearce I'm afraid that, given the lack of logs, I'm going to assume that your container is running out of memory and ES is being killed; see e.g. #57.

creditsoftware commented

Hello everybody, here is the solution, please try it.
If you run Elasticsearch in Docker with security enabled, you need to authenticate:
curl -u <UserName>:<YOUR-PASSWORD> https://localhost:9200 -k
In my case:
curl -u admin:admin https://localhost:9200 -k
