Add friendlier diagnostics to EsHadoopInvalidRequest #217

Closed

costin opened this issue Jun 17, 2014 · 1 comment

Comments

costin (Member) commented Jun 17, 2014

When a request fails (typically because of invalid data), it would be useful to 'decrypt' the offending snippet and show it to the user in readable form.
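
Something along these lines, as a rough sketch (the names are made up, not actual es-hadoop code): take the raw bulk payload, decode it around the offset reported by the JSON parser, and put that window into the exception message instead of an opaque byte-array reference.

    import java.nio.charset.StandardCharsets

    // Hypothetical helper, not part of es-hadoop: given the raw bulk body and
    // the offset reported by JsonParseException, return a readable window of
    // the payload around the error.
    object FriendlyDiagnostics {
      def snippetAround(payload: Array[Byte], errorOffset: Int, window: Int = 60): String = {
        val text  = new String(payload, StandardCharsets.UTF_8)
        val start = math.max(0, errorOffset - window)
        val end   = math.min(text.length, errorOffset + window)
        s"...${text.substring(start, end)}... (parse error near offset $errorOffset)"
      }
    }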

bobrik commented Oct 26, 2014

14/10/26 12:27:35 WARN scheduler.TaskSetManager: Lost task 3.0 in stage 0.0 (TID 8, web169): org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: JsonParseException[Unexpected character ('+' (code 43)): was expecting comma to separate OBJECT entries
 at [Source: [B@2ab220; line: 1, column: 112]]; fragment[4-10-02-unique_rotat]
        org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:322)
        org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:299)
        org.elasticsearch.hadoop.rest.RestClient.bulk(RestClient.java:149)
        org.elasticsearch.hadoop.rest.RestRepository.tryFlush(RestRepository.java:199)
        org.elasticsearch.hadoop.rest.RestRepository.flush(RestRepository.java:223)
        org.elasticsearch.hadoop.rest.RestRepository.doWriteToIndex(RestRepository.java:175)
        org.elasticsearch.hadoop.rest.RestRepository.writeToIndex(RestRepository.java:138)
        org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:36)
        org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:34)
        org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:34)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
        org.apache.spark.scheduler.Task.run(Task.scala:54)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:745)

Bigger fragments could help; with the current short ones I can't figure out what is wrong.

@costin, would you mind making the fragments bigger?
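
In the meantime, a rough client-side workaround (just a sketch; it assumes the documents are JSON strings held in an RDD[String], and keepParseable is a made-up name): pre-validate each record with Jackson before saveToEs and log the full offender, so the short fragment in the exception is not the only clue.

    import com.fasterxml.jackson.databind.ObjectMapper
    import org.apache.spark.rdd.RDD

    // Drop (and log) any record Jackson cannot parse before handing
    // the RDD to saveToEs.
    def keepParseable(docs: RDD[String]): RDD[String] =
      docs.mapPartitions { it =>
        val mapper = new ObjectMapper() // one mapper per partition
        it.filter { json =>
          try { mapper.readTree(json); true }
          catch {
            case e: Exception =>
              System.err.println(s"Malformed doc, skipping: $json -> ${e.getMessage}")
              false
          }
        }
      }

The surviving RDD can then be saved with EsSpark.saveToEs as before.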
