Do not use elasticsearch-cloud-kubernetes plugin #9
Some discussion about this topic has been happening here as well: openshift/origin-aggregated-logging#986
I have done some testing with https://github.com/pires/kubernetes-elasticsearch-cluster, which does not use the plugin. This is what I have been observing:
http://localhost:9200/_cat/nodes and http://localhost:9200/_cluster/health?pretty always showed the correct number and list of nodes.
When the elected master node was deleted, the URLs http://localhost:9200/_cat/nodes and http://localhost:9200/_cluster/health?pretty were not responding until a new master was elected. The URL http://localhost:9200/_nodes/_local?pretty, however, always responded fine. @lukas-vlcek do you know of any other use cases that should be tested to verify that the plugin is not needed?
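As a side note, the node count and the elected master can be read straight off the plain-text `_cat/nodes` response. A small sketch (the `summarize_cat_nodes` helper and the sample response are illustrative, not taken from this thread):

```python
def summarize_cat_nodes(body: str):
    """Parse a plain-text GET /_cat/nodes response.

    With the default columns the line ends with: node.role,
    master marker ('*' for the elected master, '-' otherwise), name.
    Returns (node_count, elected_master_name_or_None).
    """
    nodes = [line.split() for line in body.strip().splitlines() if line.strip()]
    master = next((n[-1] for n in nodes if n[-2] == "*"), None)
    return len(nodes), master

# Hypothetical sample output for a healthy 3-node cluster:
sample = (
    "10.128.0.1 35 92 1 0.1 0.2 0.3 mdi - es-node-0\n"
    "10.128.0.2 41 90 2 0.2 0.2 0.2 mdi * es-node-1\n"
    "10.128.0.3 38 91 1 0.1 0.1 0.1 mdi - es-node-2\n"
)
count, master = summarize_cat_nodes(sample)
# count == 3, master == "es-node-1"
```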
I have done the same testing with #22 and the behavior was the same; the master election was even much quicker.
Interesting comments from pires/kubernetes-elasticsearch-cluster#69, implemented in pires/docker-elasticsearch-kubernetes@d1907da.
@pavolloffay can you forward-port this feature to master?
Yes, will do.
Documentation on headless services: I think we should also set
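For reference, a headless service is simply one with `clusterIP: None`, so DNS resolves directly to the pod IPs. A minimal sketch of what such a discovery service could look like (the name, labels, and port are illustrative, not taken from the repositories above):

```yaml
# Hypothetical headless discovery service: DNS for this name returns
# the pod IPs of the selected master-eligible nodes, not a virtual IP.
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-discovery
spec:
  clusterIP: None          # makes the service headless
  selector:
    role: master           # illustrative label on master-eligible pods
  ports:
  - name: transport
    port: 9300             # Elasticsearch transport port
```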
@wozniakjan hi, if you have some time your feedback would be appreciated. |
@pavolloffay it is definitely worth exploring. Some time ago I tried to compare the two approaches
and described why I think it may be more robust to keep the plugin in openshift/origin-aggregated-logging#986, as @lukas-vlcek already mentioned. I am not entirely sure how a headless service works with readiness and liveness probes, and even less sure whether we need the probes with ES at all. Maybe an init container could serve the purpose we need. It is usually broken clusters where we get to see the shortcomings of certain solutions. I think @portante has a lot of experience with debugging broken clusters; perhaps he has an idea how to stress test this.
@ruromero could you please reopen? I would like to keep it open; I am planning to do more tests.
Closing. The fabric8-es-plugin also advises using a headless service: https://github.com/fabric8io/elasticsearch-cloud-kubernetes#kubernetes-cloud-plugin-for-elasticsearch
Can we remove the https://github.com/fabric8io/elasticsearch-cloud-kubernetes plugin in favor of unicast discovery?
For example, https://github.com/pires/kubernetes-elasticsearch-cluster uses Zen discovery: https://www.elastic.co/guide/en/elasticsearch/reference/5.6/modules-discovery-zen.html
There is https://github.com/pires/kubernetes-elasticsearch-cluster/blob/master/es-discovery-svc.yaml which points to the master nodes (there is no es-master service). All nodes then define
discovery.zen.ping.unicast.hosts
pointing to this service. Some links:
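The setup described above can be sketched roughly like this in `elasticsearch.yml` (the service name and node counts are illustrative assumptions; `discovery.zen.minimum_master_nodes` is the standard ES 5.x quorum setting, not something this issue mandates):

```yaml
# elasticsearch.yml sketch: unicast Zen discovery seeded by the DNS
# name of a headless discovery service pointing at master nodes.
discovery.zen.ping.unicast.hosts: elasticsearch-discovery
# Quorum of master-eligible nodes, e.g. 2 for 3 masters,
# to avoid split-brain during master elections.
discovery.zen.minimum_master_nodes: 2
```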