"No ElasticSearch Node Available" #312
Does it work with sniffing disabled?
What does elastic.NewClient(elastic.SetURL("http://127.0.0.1:9200")) return?
Please respond to the issue if the question remains. Otherwise I'll close the issue. Thank you.
I apologize for the delay. I made a mistake in formatting the markdown here.
Can you please try what I wrote in the 2nd comment?
elastic.NewClient(elastic.SetURL("http://127.0.0.1:9200")) works
What does curl 'http://:9200/_nodes/http?pretty' return?
{
I ended up using SimpleClient and it worked. However, I just thought I should let you know that I was having some issues with NewClient. Are there any major differences between the two that could have been the root cause?
SimpleClient simply disables a few things, like automatically finding new nodes added to your cluster via the sniffing procedure I linked to above. You can disable sniffing via options in NewClient, as described in the Wiki. Not sure what's wrong with your setup; the cluster output looks good, and sniffing should pick up the setting in http_address.
Ah well, thanks for your help!
I have the same issue. Directly using cURL against my node on localhost works, including the _nodes call mentioned above. I used elastic.SetSniff(false) and it all started working. What information can I provide that would help with debugging?
This problem is most probably due to elastic picking up the address returned by the Nodes API, which simply isn't routable. This is e.g. the case for ES running inside a Docker container. Does it work with sniffing disabled? Refer to the wiki for details.
What is the output from the Nodes API call mentioned earlier?
Same issue here. I ran some tests with two machines on one LAN, and they can ping each other fine.
I notice that the http bound address is "[::]:9200".
But NewClient(elastic.SetURL("http://192.168.0.103:9200")) still does not work. Any suggestions?
@jairwen Which version of elastic are you using? v2, v3 or v5?
I am using v5.
@jairwen Yes, that's Elasticsearch 5.0. But are you importing the .v5 package? E.g.:
package main
import (
"log"
elastic "gopkg.in/olivere/elastic.v5"
)
func main() {
_, err := elastic.NewClient()
if err != nil {
log.Fatalf("Connect failed: %v", err)
}
log.Print("Connected")
}
Oh! My bad. Sorry for missing the release of .v5. I was still using the .v3 I got earlier.
No worries... you're welcome. It'll probably be FAQ issue no. 1 :-) The problem with the sniffing process (and hence the "no ElasticSearch node available" error) is that Elasticsearch has changed the return structure of the Nodes Info API at least 4 times. :-(
I have this problem with Elasticsearch 5 on Arch Linux. I'm using elastic.v5. Output of the Nodes API call:
{
"_nodes" : {
"total" : 1,
"successful" : 1,
"failed" : 0
},
"cluster_name" : "elasticsearch",
"nodes" : {
"pb-aU3TCRBKm6MgRqKLnoQ" : {
"name" : "pb-aU3T",
"transport_address" : "192.168.1.209:9300",
"host" : "192.168.1.209",
"ip" : "192.168.1.209",
"version" : "5.0.0",
"build_hash" : "253032b",
"roles" : [
"master",
"data",
"ingest"
],
"http" : {
"bound_address" : [
"[::]:9200"
],
"publish_address" : "192.168.1.209:9200",
"max_content_length_in_bytes" : 104857600
}
}
}
}
Edit: restarting the server seems to fix the problem.
@mahmoudhossam Do you have a reproducible test case for me?
Unfortunately, no. I'd be happy to supply more information when it happens again, but I'm not sure what causes that exactly.
This is the most frequently asked question so I've dedicated a Wiki page to
it:
https://github.com/olivere/elastic/wiki/Connection-Problems#how-to-figure-out-connection-problems
The node publishes at 172.18.0.7:9200. Can you connect to that from the
outside?
There are two ways out of this:
configure your nodes so that they're reachable from the outside,
or disable sniffing.
…On 4 December 2016 at 20:38, Stéphane Klein ***@***.***> wrote:
Same issue here:
curl http://localhost:9200/_nodes/http?pretty
{
"_nodes" : {
"total" : 1,
"successful" : 1,
"failed" : 0
},
"cluster_name" : "log",
"nodes" : {
"Bi1yAbQ0Tv60MwC9sVpfzw" : {
"name" : "Bi1yAbQ",
"transport_address" : "172.18.0.7:9300",
"host" : "172.18.0.7",
"ip" : "172.18.0.7",
"version" : "5.0.2",
"build_hash" : "f6b4951",
"roles" : [
"master",
"data",
"ingest"
],
"http" : {
"bound_address" : [
"[::]:9200"
],
"publish_address" : "172.18.0.7:9200",
"max_content_length_in_bytes" : 104857600
}
}
}
}
ES running in Docker container:
$ elktail --url http://localhost:9200
ERROR: Could not connect Elasticsearch client to http://localhost:9200: no Elasticsearch node available.
Just for the record: I think Amazon Elasticsearch Service does the load balancing on the node side, and node discovery does not return any transport addresses, despite the fact that several nodes are available. Disabling sniffing seems to be the right option in that case. I hope this helps.
Also affected by this issue. This is the nodes output:
@olivere I am using a URL like this: it's a DNS entry resolving to 3 addresses.
@gm42 Hmm... can't reproduce locally: your example code works fine here. What elastic does is try to find the cluster's nodes via the Nodes Info API. Elasticsearch has changed the return structure of that API several times. So if the executable performing the HTTP requests is able to connect to the 3 nodes, it should work. If you're using containers, this inability of the containers to talk to each other is typically the problem behind the "No Elasticsearch node available" error.
Oh well. I'm on vacation and just flying over the comments. I'm sorry. |
Please, no need to be sorry at all. The package works as expected. I posted something that turned out to be a red herring. My turn to say sorry and wish you a nice vacation!
Solution One:
Solution Two:
Solution Three: Configure
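The "Configure" solution above is cut off, but for Elasticsearch in Docker the usual fix of this kind is to set the publish address explicitly. A hedged sketch of the relevant elasticsearch.yml settings — the host IP 192.168.0.103 is an assumption; substitute the address your clients can actually reach:

```yaml
# elasticsearch.yml -- hypothetical values; adjust to your host.
# Bind inside the container on all interfaces...
network.host: 0.0.0.0
# ...but advertise an address the client can actually reach, so that
# sniffing gets a routable publish_address from the Nodes Info API.
network.publish_host: 192.168.0.103
http.port: 9200
```

With this in place, the publish_address returned by _nodes/http is routable and sniffing can stay enabled.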
I'm running in a large environment with many ElasticSearch clusters (all v5), all provided by https://www.elastic.co/cloud on AWS (in us-east). Most of our services are written in Java and have no trouble reaching all of our clusters. Recently, I've introduced a small service written in Go using this library. It works most of the time, but consistently fails to connect to just a few target clusters. Again, these clusters are being actively used by Java services; the clusters are up and reachable. Our connection information is in the same form for the clusters that work and the ones that do not. Is there any way for me to get more diagnostic information? I've turned on Error, Info and Trace logging, but the connect failures do not log anything (successful connections do log activity, so my log setup appears to be correct; maybe the logging only takes effect after the connection is successful?). They just return the error:
Looking for differences between the clusters that fail and the ones that succeed, the failures appear to be only for our oldest clusters: clusters that were upgraded in place from Elastic v2 to v5. Maybe that is relevant? 🤷 Filling in some details: the call that returns the error is:
return elastic.NewClient(
elastic.SetURL(c.URLs),
elastic.SetBasicAuth(c.User, c.Pass),
elastic.SetSniff(false),
elastic.SetHttpClient(buildHTTPClient(c.Cert)),
elastic.SetRetrier(&elasticRetrier{}),
elastic.SetErrorLog(&elasticLogger{Level: "error"}),
elastic.SetInfoLog(&elasticLogger{Level: "info"}),
elastic.SetTraceLog(&elasticLogger{Level: "trace"}),
)
Given that this works sometimes, I'd give the following 2 a try:
@thoellrich I have tried increasing the health check timeouts (a lot) and can see (from the context of other log messages in my code) that the connection failure takes as long as whatever the timeout is set to. I have not tried completely disabling the health check; that is an interesting idea.
@bruceadams Hmm... I'm a bit concerned about the fact that the Java clients seem to work fine while this (unofficial) Go client has problems. May I ask if the Java clients still use the transport client?
Anyway, I've tried to build this library with the same logic that the official clients use, i.e. health checks and client-side sniffing. Sniffing, if enabled, will use the URL you provide to call into the Elasticsearch Cluster API at startup and find all nodes in the cluster, then use the HTTP(S) addresses returned from that API. Health checks, if enabled, will periodically do the same and mark nodes as healthy or dead. It should also log this via the provided loggers, so I'm not exactly sure why your logging doesn't seem to work. In our production clusters (also mostly v5 as of now), our log files clearly record nodes going up and down when e.g. updating a cluster.
We're using Elastic Cloud as well for a specific application, and we initially got a lot of connection errors. So, if you're sure that the problem is with health checks, you could simply disable them and see if it makes a difference. The driver should work fine even without them.
@thoellrich Thanks for jumping in and helping. I was a bit busy this week, so sorry for the delay.
@bruceadams What does the Nodes API call return?
Oh my. False alarm here. Sorry! What a mess. Manually doing various actions works fine; doing stuff in my real service (written in Go and using this library) fails. It turns out that we also changed how we deal with credentials for clusters many months ago. Older clusters have a different scheme, and our Go code (completely unrelated to our use of this library) was mishandling the credentials for our older clusters. The upgrade from Elastic v2 to v5 just happened to occur around the same time that we changed credential handling. The upgraded clusters are irrelevant to the troubles I've had, and this library is not misbehaving, other than that I couldn't get it to tell me that Elastic was rejecting my credentials. Maybe I'll open a separate issue for enhanced error messages and/or logging for connection failures?
Okay. If you found a bug, go ahead and file an issue. I'll close this for now. |
Reopening this issue because I had the same problem and found out how to fix it. Most likely I had a DNS issue with Docker containers and linking. If I used the IP address of the Docker container in both the library and cURL, it would work (regardless of whether sniffing was disabled or not). This is due to golang/go#22846. I hope it will help other peeps.
I fought with this problem for about 3 hours and finally tried the official package.
I got this error when I provided the bare URL, i.e. without the http:// prefix.
I am having the same problem, but my Elastic cluster is running on https://www.elastic.co/ (Elastic as a service). When I run my service in Docker, I get the error. I tried some of the things suggested above; my Elastic client configuration already sets sniffing and health checks to false: client, err := elastic.NewClient(
elastic.SetURL(url),
elastic.SetBasicAuth(c.Username, c.Password),
elastic.SetSniff(false),
elastic.SetHealthcheck(false),
)
It works for me, thanks.
I have the same issue, but I deploy ES with Docker. I get the error "curl: (52) Empty reply from server" when I exec curl 'http://localhost:9200/_nodes/http\?pretty'.
Please use the following questions as a guideline to help me answer
your issue/question without further inquiry. Thank you.
Which version of Elastic are you using?
[ ] elastic.v2 (for Elasticsearch 1.x)
[x] elastic.v3 (for Elasticsearch 2.x)
Please describe the expected behavior
elastic.NewClient(elastic.SetURL("http://:9200"))
would correctly generate a new Client object connecting to the node
Please describe the actual behavior
"no ElasticSearch node available"
Any steps to reproduce the behavior?
elastic.NewClient(elastic.SetURL("http://:9200"))