
Nodes without http_address cause exceptions #210

Closed
costin opened this Issue May 29, 2014 · 6 comments

@costin (Member) commented May 29, 2014

Nodes with HTTP disabled cause a parsing error even when using the "_nodes/http" API.

see #99

@costin costin added the bug label May 29, 2014

costin added a commit that referenced this issue May 29, 2014

Filter out nodes w/o http_address
The _nodes/http API returns all the nodes, whether or not they have
HTTP enabled. The fix filters out such nodes instead of raising a
parsing exception.

fix #210
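The fix described above can be sketched as follows. This is a minimal illustration, not the actual es-hadoop code: the `NodeFilter` class and the map-based representation of the "_nodes/http" response are hypothetical. The idea is simply to keep only the node entries that expose an `http_address` field.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class NodeFilter {

    // Keep only nodes whose info map contains an "http_address" entry;
    // nodes with HTTP disabled omit that field in the _nodes/http response.
    public static Map<String, Map<String, String>> httpNodes(
            Map<String, Map<String, String>> nodes) {
        return nodes.entrySet().stream()
                .filter(e -> e.getValue().containsKey("http_address"))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    public static void main(String[] args) {
        Map<String, Map<String, String>> nodes = new LinkedHashMap<>();
        nodes.put("nodeA", Map.of("http_address", "inet[/10.0.0.1:9200]"));
        nodes.put("nodeB", Map.of()); // HTTP disabled: no http_address field
        System.out.println(httpNodes(nodes).keySet()); // prints [nodeA]
    }
}
```

Before the fix, code downstream would attempt to parse the (missing) address of nodeB and fail; after it, such nodes are silently dropped from the candidate list.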
@costin (Member, Author) commented May 29, 2014

Fixed in master and the 2.x branch.

@costin costin closed this in 7057f35 May 29, 2014

@costin (Member, Author) commented May 29, 2014

@ccrivelli The nightly builds for 2.0.x (and 2.1.x) should be complete in about an hour. If you don't want to wait, you can build everything yourself.
As a side note, you can enable HTTP on your cluster nodes to prevent the exception from occurring.
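In the Elasticsearch 1.x line that this issue dates from, HTTP could be toggled per node in the node's configuration file; a sketch of the setting the workaround refers to (file location depends on your install):

```yaml
# elasticsearch.yml — enable the HTTP transport on this node
# so that clients such as es-hadoop can reach it (default port 9200)
http.enabled: true
```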

@ccrivelli commented May 30, 2014

Thanks man, you're awesome.
I've downloaded the following version:

elasticsearch-hadoop-2.0.1.BUILD-20140529.094752-1.jar

Then I enabled HTTP on the Elasticsearch node (it's the only one in my case) and everything started working again like a charm!

Regards,
Carmelo

@costin (Member, Author) commented May 30, 2014

Glad to hear it works. By the way, with the latest snapshot you can still have nodes with HTTP disabled; note, however, that they will not be used by es-hadoop.

@marcelopaesrech commented Jun 18, 2015

Hi Costin, I'm having this problem in 2.1.0-BUILD 444 and 2.1.0 RC1. My cluster has two nodes with data disabled but HTTP enabled, and two others with the inverse configuration.
I did some debugging and, in the end, only the nodes holding shards were selected to query ES, resulting in a NullPointerException (nodeIp = null, nodePort = 0). The error follows:
java.lang.NullPointerException
at java.io.DataOutputStream.writeUTF(DataOutputStream.java:347)
at java.io.DataOutputStream.writeUTF(DataOutputStream.java:323)
at org.elasticsearch.hadoop.mr.EsInputFormat$ShardInputSplit.write(EsInputFormat.java:108)
at org.elasticsearch.hadoop.hive.EsHiveInputFormat$EsHiveSplit.write(EsHiveInputFormat.java:77)
at org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.write(HiveInputFormat.java:177)
at org.apache.hadoop.mapreduce.split.JobSplitWriter.writeOldSplits(JobSplitWriter.java:164)
at org.apache.hadoop.mapreduce.split.JobSplitWriter.createSplitFiles(JobSplitWriter.java:92)
at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:353)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:323)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:199)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
...
Regards.
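For context, the NullPointerException at the top of that trace is the standard behavior of `DataOutputStream.writeUTF` when handed a null string (here, the node IP left unresolved at nodeIp = null). A minimal reproduction, independent of es-hadoop itself:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class WriteUtfNull {
    public static void main(String[] args) throws IOException {
        DataOutputStream out = new DataOutputStream(new ByteArrayOutputStream());
        try {
            // Simulates serializing a split whose node IP was never resolved.
            out.writeUTF(null);
        } catch (NullPointerException e) {
            System.out.println("NPE from writeUTF(null)");
        }
    }
}
```

This matches the top frames of the trace: the split's `write` method passes the null nodeIp straight into `writeUTF`.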

@costin (Member, Author) commented Jun 18, 2015

@marcelopaesrech This is a closed issue; please open a new one with more information about your environment and in particular your Hive script/configuration and the connector logs (turn on logging to TRACE level). Thanks!
