
NetworkClient: Invalid target URI POST@null/pharmadata/test/_search #748

Closed
anand-singh opened this issue Apr 15, 2016 · 2 comments

Comments


anand-singh commented Apr 15, 2016

"org.elasticsearch" % "elasticsearch-spark_2.11" % "2.3.0"

2016-04-15 12:50:20 +0530 [ERROR] from org.elasticsearch.hadoop.rest.NetworkClient in Executor task launch worker-0 - Node [host:port] failed (Invalid target URI POST@null/pharmadata/test/_search?search_type=scan&scroll=5m&size=50&_source=Case ID&preference=_shards:0;_only_node:PWFN3SkoRKee0LyYQvfvRQ); no other nodes left - aborting...
2016-04-15 12:50:20 +0530 [ERROR] from org.apache.spark.TaskContextImpl in Executor task launch worker-0 - Error in TaskCompletionListener
org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[host:port]] 
    at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:142) ~[elasticsearch-spark_2.11-2.3.0.jar:2.3.0]
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:434) ~[elasticsearch-spark_2.11-2.3.0.jar:2.3.0]
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:426) ~[elasticsearch-spark_2.11-2.3.0.jar:2.3.0]
    at org.elasticsearch.hadoop.rest.RestClient.scan(RestClient.java:483) ~[elasticsearch-spark_2.11-2.3.0.jar:2.3.0]
    at org.elasticsearch.hadoop.rest.RestRepository.scanLimit(RestRepository.java:144) ~[elasticsearch-spark_2.11-2.3.0.jar:2.3.0]
    at org.elasticsearch.hadoop.rest.QueryBuilder.build(QueryBuilder.java:181) ~[elasticsearch-spark_2.11-2.3.0.jar:2.3.0]
    at org.elasticsearch.hadoop.rest.RestService$PartitionReader.scrollQuery(RestService.java:104) ~[elasticsearch-spark_2.11-2.3.0.jar:2.3.0]
    at org.elasticsearch.spark.rdd.AbstractEsRDDIterator.reader$lzycompute(AbstractEsRDDIterator.scala:32) ~[elasticsearch-spark_2.11-2.3.0.jar:2.3.0]
    at org.elasticsearch.spark.rdd.AbstractEsRDDIterator.reader(AbstractEsRDDIterator.scala:24) ~[elasticsearch-spark_2.11-2.3.0.jar:2.3.0]
    at org.elasticsearch.spark.rdd.AbstractEsRDDIterator.close(AbstractEsRDDIterator.scala:63) ~[elasticsearch-spark_2.11-2.3.0.jar:2.3.0]
    at org.elasticsearch.spark.rdd.AbstractEsRDDIterator.closeIfNeeded(AbstractEsRDDIterator.scala:56) ~[elasticsearch-spark_2.11-2.3.0.jar:2.3.0]
    at org.elasticsearch.spark.rdd.AbstractEsRDDIterator$$anonfun$1.apply$mcV$sp(AbstractEsRDDIterator.scala:36) ~[elasticsearch-spark_2.11-2.3.0.jar:2.3.0]
    at org.elasticsearch.spark.rdd.AbstractEsRDDIterator$$anonfun$1.apply(AbstractEsRDDIterator.scala:36) ~[elasticsearch-spark_2.11-2.3.0.jar:2.3.0]
    at org.elasticsearch.spark.rdd.AbstractEsRDDIterator$$anonfun$1.apply(AbstractEsRDDIterator.scala:36) ~[elasticsearch-spark_2.11-2.3.0.jar:2.3.0]
    at org.elasticsearch.spark.rdd.CompatUtils$Spark11TaskContext$1.onTaskCompletion(CompatUtils.java:103) ~[elasticsearch-spark_2.11-2.3.0.jar:2.3.0]
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:79) [spark-core_2.11-1.6.1.jar:1.6.1]
    at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:77) [spark-core_2.11-1.6.1.jar:1.6.1]
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) [scala-library-2.11.7.jar:0.13.8]
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) [scala-library-2.11.7.jar:0.13.8]
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:77) [spark-core_2.11-1.6.1.jar:1.6.1]
    at org.apache.spark.scheduler.Task.run(Task.scala:91) [spark-core_2.11-1.6.1.jar:1.6.1]
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) [spark-core_2.11-1.6.1.jar:1.6.1]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_77]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_77]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
2016-04-15 12:50:20 +0530 [ERROR] from org.apache.spark.executor.Executor in Executor task launch worker-0 - Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.util.TaskCompletionListenerException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[host:port]] 
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:87) ~[spark-core_2.11-1.6.1.jar:1.6.1]
    at org.apache.spark.scheduler.Task.run(Task.scala:91) ~[spark-core_2.11-1.6.1.jar:1.6.1]
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) ~[spark-core_2.11-1.6.1.jar:1.6.1]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_77]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_77]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
2016-04-15 12:50:20 +0530 [WARN] from org.apache.spark.scheduler.TaskSetManager in task-result-getter-0 - Lost task 0.0 in stage 0.0 (TID 0, localhost): org.apache.spark.util.TaskCompletionListenerException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[host:port]] 
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:91)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

2016-04-15 12:50:20 +0530 [ERROR] from org.apache.spark.scheduler.TaskSetManager in task-result-getter-0 - Task 0 in stage 0.0 failed 1 times; aborting job
[error] (run-main-0) org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): org.apache.spark.util.TaskCompletionListenerException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[host:port]] 
[error]     at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:87)
[error]     at org.apache.spark.scheduler.Task.run(Task.scala:91)
[error]     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
[error]     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[error]     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[error]     at java.lang.Thread.run(Thread.java:745)
[error] 
[error] Driver stacktrace:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): org.apache.spark.util.TaskCompletionListenerException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[host:port]] 
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:91)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:212)
    at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:165)
    at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174)
    at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
    at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
    at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2086)
    at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1498)
    at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1505)
    at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1375)
    at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1374)
    at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2099)
    at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1374)
    at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1456)
    at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:170)
    at org.apache.spark.sql.DataFrame.show(DataFrame.scala:350)
    at org.apache.spark.sql.DataFrame.show(DataFrame.scala:311)
    at org.apache.spark.sql.DataFrame.show(DataFrame.scala:319)
    at com.rklick.engine.example.ESDataTest$.main(ESDataTest.scala:44)
    at com.rklick.engine.example.ESDataTest.main(ESDataTest.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
Caused by: org.apache.spark.util.TaskCompletionListenerException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[host:port]] 
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:91)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
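Editor's note for readers hitting the same error: the `null` host in `POST@null/...` typically means the connector could not resolve the configured Elasticsearch nodes. A minimal sketch of how the connection settings are usually supplied with elasticsearch-spark 2.3 on Spark 1.6 — the host, port, and index/type names below are placeholders, not values from this report:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object EsReadSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("es-read-sketch")
      .setMaster("local[*]")
      // These must point at a reachable Elasticsearch node;
      // "127.0.0.1" and "9200" here are illustrative defaults.
      .set("es.nodes", "127.0.0.1")
      .set("es.port", "9200")

    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    // Index/type taken from the failing request in the log above.
    val df = sqlContext.read
      .format("org.elasticsearch.spark.sql")
      .load("pharmadata/test")

    df.show()
  }
}
```

This is a configuration sketch, not a verified fix for this report — the issue below was closed precisely because the reporter's actual settings were never shared.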


costin commented Apr 15, 2016

@anand-singh Thank you for the report, but without any information about what causes the exception, the report is not really useful.
There is a reason we provide a template for raising issues, as mentioned here.


costin commented May 2, 2016

Since there hasn't been any update on this issue, I'm closing it. If the issue still persists, please open a new one (and link back to this one).

Cheers
