Elasticsearch fails for memory related causes #19

Closed
vasyugan opened this issue Oct 13, 2018 · 12 comments
Comments

@vasyugan
Contributor

vasyugan commented Oct 13, 2018

$ node run.js ./database/reindex_elastic.js
Deleting index... uwazi_development
{ FetchError: request to http://elasticsearch:9200/uwazi_development failed, reason: getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200
    at ClientRequest.<anonymous> (/home/node/uwazi/node_modules/node-fetch/index.js:133:11)
    at emitOne (events.js:96:13)
    at ClientRequest.emit (events.js:188:7)
    at Socket.socketErrorListener (_http_client.js:314:9)
    at emitOne (events.js:96:13)
    at Socket.emit (events.js:188:7)
    at connectErrorNT (net.js:1034:8)
    at _combinedTickCallback (internal/process/next_tick.js:80:11)
    at process._tickCallback (internal/process/next_tick.js:104:9)
  name: 'FetchError',
  message: 'request to http://elasticsearch:9200/uwazi_development failed, reason: getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200',
  type: 'system',
  errno: 'ENOTFOUND',
  code: 'ENOTFOUND' }
Creating index... uwazi_development
{ FetchError: request to http://elasticsearch:9200/uwazi_development failed, reason: getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200
    at ClientRequest.<anonymous> (/home/node/uwazi/node_modules/node-fetch/index.js:133:11)
    at emitOne (events.js:96:13)
    at ClientRequest.emit (events.js:188:7)
    at Socket.socketErrorListener (_http_client.js:314:9)
    at emitOne (events.js:96:13)
    at Socket.emit (events.js:188:7)
    at connectErrorNT (net.js:1034:8)
    at _combinedTickCallback (internal/process/next_tick.js:80:11)
    at process._tickCallback (internal/process/next_tick.js:104:9)
  name: 'FetchError',
  message: 'request to http://elasticsearch:9200/uwazi_development failed, reason: getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200',
  type: 'system',
  errno: 'ENOTFOUND',
  code: 'ENOTFOUND' }
Indexing documents and entities... - 0 indexed

@vasyugan
Contributor Author

On the next try I got this:

$ node run.js ./database/reindex_elastic.js
Deleting index... uwazi_development
{ FetchError: request to http://elasticsearch:9200/uwazi_development failed, reason: connect ECONNREFUSED 172.22.0.2:9200
    at ClientRequest.<anonymous> (/home/node/uwazi/node_modules/node-fetch/index.js:133:11)
    at emitOne (events.js:96:13)
    at ClientRequest.emit (events.js:188:7)
    at Socket.socketErrorListener (_http_client.js:314:9)
    at emitOne (events.js:96:13)
    at Socket.emit (events.js:188:7)
    at emitErrorNT (net.js:1290:8)
    at _combinedTickCallback (internal/process/next_tick.js:80:11)
    at process._tickCallback (internal/process/next_tick.js:104:9)
  name: 'FetchError',
  message: 'request to http://elasticsearch:9200/uwazi_development failed, reason: connect ECONNREFUSED 172.22.0.2:9200',
  type: 'system',
  errno: 'ECONNREFUSED',
  code: 'ECONNREFUSED' }
Creating index... uwazi_development
{ FetchError: request to http://elasticsearch:9200/uwazi_development failed, reason: connect ECONNREFUSED 172.22.0.2:9200
    at ClientRequest.<anonymous> (/home/node/uwazi/node_modules/node-fetch/index.js:133:11)
    at emitOne (events.js:96:13)
    at ClientRequest.emit (events.js:188:7)
    at Socket.socketErrorListener (_http_client.js:314:9)
    at emitOne (events.js:96:13)
    at Socket.emit (events.js:188:7)
    at emitErrorNT (net.js:1290:8)
    at _combinedTickCallback (internal/process/next_tick.js:80:11)
    at process._tickCallback (internal/process/next_tick.js:104:9)
  name: 'FetchError',
  message: 'request to http://elasticsearch:9200/uwazi_development failed, reason: connect ECONNREFUSED 172.22.0.2:9200',
  type: 'system',
  errno: 'ECONNREFUSED',
  code: 'ECONNREFUSED' }
Indexing documents and entities... - 0 indexed
Done, took 0.174 seconds

@fititnt
Owner

fititnt commented Oct 14, 2018

Ah, ok. Maybe this is more about how the Docker setup was done than about uwazi-docker itself, but let's dig a bit deeper into this.

  1. One thing that might help is doing a full uninstall and trying again. See https://github.com/fititnt/uwazi-docker/blob/master/uninstall.md.
  2. Each time you try something new, run docker ps to check whether both the mongo and elasticsearch containers are up and running; if for some reason they are not, this issue can happen. If they are running, it may be a Docker networking problem, but let's start with the simplest case.
  3. (optional) If you do not have other important applications running on the machine where you use Docker, then in the worst case a full reinstall of Docker may be simpler than debugging the networking problems in depth.

For reference, an example of docker ps output:

$ docker ps
CONTAINER ID        IMAGE                                                 COMMAND                  CREATED             STATUS              PORTS                              NAMES
9a9f7463ad0e        uwazi-docker_uwazi                                    "/docker-entrypoint.…"   25 hours ago        Up 25 hours         0.0.0.0:3000->3000/tcp             uwazi-docker_uwazi_1
f23f72e4688b        docker.elastic.co/elasticsearch/elasticsearch:5.5.3   "elasticsearch -Expa…"   26 hours ago        Up 26 hours         0.0.0.0:9200->9200/tcp, 9300/tcp   uwazi-docker_elasticsearch_1
c4db72421df0        mongo:3.4                                             "docker-entrypoint.s…"   26 hours ago        Up 26 hours         0.0.0.0:27017->27017/tcp           uwazi-docker_mongo_1


@vasyugan
Contributor Author

Trying again, now halfway through a docker course on Udemy.
Now that I have some basic understanding of how docker works, it won't be complete fumbling in the dark...

@vasyugan vasyugan changed the title uwazi can't contact elasticsearch because of failed name resolution uwazi can't contact elasticsearch Oct 14, 2018
@vasyugan
Contributor Author

So the issue is still there, and docker ps shows that the elasticsearch container is somehow stuck in a restarting state. That's why the elasticsearch engine can't be contacted.
I have extracted a log with docker container logs and uploaded it to https://gist.github.com/vasyugan/8d4e8f3915c18387d79a9887359a3b1d if you want to have a look.

But first, here is the output of docker container ls:

jr@erwin:~$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d751cf1cf632 mongo:3.4 "docker-entrypoint.s…" 43 minutes ago Up 43 minutes 0.0.0.0:27017->27017/tcp uwazi-docker_mongo_1
4cb2e71fc7e1 docker.elastic.co/elasticsearch/elasticsearch:5.5.3 "elasticsearch -Expa…" 43 minutes ago Restarting (78) 11 seconds ago uwazi-docker_elasticsearch_1
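For reference, a minimal sketch of how such a log can be extracted (container name taken from the listing above; --tail just limits the output):

$ docker container logs --tail 200 uwazi-docker_elasticsearch_1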

@vasyugan vasyugan reopened this Oct 14, 2018
@vasyugan
Contributor Author

Actually, in the logs it looks like Java is complaining a lot about too little memory:

[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Not sure if any parameter can be added to the docker-compose.yaml to fix this, or if it would be possible to have an elasticsearch container with Oracle's Java instead. I know it is proprietary, but it behaves much better, especially with regard to memory.

@vasyugan
Contributor Author

Well, it seems this is the issue: docker-library/elasticsearch#111

Only, what to do about it? In my experience Oracle Java behaves much better than OpenJDK when it comes to memory, but that's proprietary.

@vasyugan vasyugan changed the title uwazi can't contact elasticsearch Elasticsearch fails for memory related causes Oct 15, 2018
@vasyugan
Contributor Author

vasyugan commented Oct 15, 2018

For now, how I got around it was by

  • adding vm.max_map_count=262144 to /etc/sysctl.conf
  • removing -Ebootstrap.memory_lock=true from the elasticsearch command line

Not sure what side effects this has, but it at least made elasticsearch run.
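A minimal sketch of the first change (the host-side sysctl setting; assuming a typical Linux host with sudo, and 262144 being the value Elasticsearch asks for in the log line above):

$ sudo sysctl -w vm.max_map_count=262144                         # applies the setting immediately, no reboot needed
$ echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf  # persists it across reboots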

@fititnt
Owner

fititnt commented Oct 16, 2018

@vasyugan can you please give more info about where you are running Docker? In particular, how much memory the machine has and which operating system it runs.

And yes, Elasticsearch is especially complicated. I'm not sure whether, when I tested it (outside of my own machine, which has 16GB of RAM), some of these configurations were in place to allow it to run with the minimum memory by default. But even if no change is needed, at least we could document this behavior.

@vasyugan
Contributor Author

vasyugan commented Oct 16, 2018 via email

@fititnt
Owner

fititnt commented Oct 16, 2018

I'm testing the removal of -Ebootstrap.memory_lock=true from docker-compose.yml. If it runs, I will push it to master.

About the "vm.max_map_count=262144" part, maybe we just document this somewhere. For reference, here is the documentation on this topic: https://www.elastic.co/guide/en/elasticsearch/reference/5.5/docker.html#docker-cli-run-prod-mode.

@fititnt
Owner

fititnt commented Oct 16, 2018

Ah, @vasyugan, since you have way more RAM (to the point of really deploying everything in one or more clusters), depending on the changes we could just make one additional file (instead of changing the default one), as specified here: https://docs.docker.com/compose/production/.

The final file does not need to be perfect, but it could at least be a better baseline for later; see the sketch below. For MongoDB and Elasticsearch at least, it is possible to find recommended configurations for running these DBs in a cluster in production.
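A minimal sketch of that approach (docker-compose.production.yml is a hypothetical name for the additional override file described in the linked Compose documentation):

$ docker-compose -f docker-compose.yml -f docker-compose.production.yml up -d   # later -f files override the earlier ones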

@fititnt
Owner

fititnt commented Dec 1, 2018

The removal of -Ebootstrap.memory_lock=true was added to the docker-compose file, and since the last updates we have not had new comments.

If problems persist, please either reopen this issue or open a new one and, if relevant, mention this one.

@fititnt fititnt closed this as completed Dec 1, 2018