Exception in SparkSQL when es.read.metadata=true #408
Comments
I'm not sure what causes the exception, so that one looks like a bug. As for the mailing list, you can find it on the resources page in the docs; I'll update the project README so it can be found more easily.
Thanks for your response. You mentioned that the document id is read anyway since the returned RDD is a PairRDD, but the result of a SELECT statement is a SchemaRDD[Row]. Could you please clarify what you mean by PairRDD? Thanks,
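For context, a minimal sketch of the low-level RDD API the earlier comment refers to (this assumes elasticsearch-spark's esRDD helper; the index name myindex/mytype is a placeholder, not from this thread):

```scala
import org.apache.spark.SparkContext
import org.elasticsearch.spark._ // adds esRDD to SparkContext

// esRDD returns an RDD of (String, Map[String, AnyRef]) pairs:
// the key is the document _id, which the connector reads regardless
// of the es.read.metadata setting.
def printIds(sc: SparkContext): Unit = {
  val docs = sc.esRDD("myindex/mytype") // placeholder index/type
  docs.take(5).foreach { case (id, fields) =>
    println(s"$id -> $fields")
  }
}
```

This is why the id is "read anyway" at the RDD level; the Spark SQL path, by contrast, surfaces Rows rather than key/value pairs, which is the gap the question above is asking about.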
Hi, Fixed this in master for Spark 1.3. Spark 1.2 will follow shortly.
Addressed 1.2 as well. Just published a nightly build - please try them out.
Marking this as closed. Please open a new issue if the problem persists.
This problem ("java.util.NoSuchElementException: key not found: 1") still occurs in Spark 1.5.1. I encountered it when using graphs.
Hi,
elasticsearch-hadoop 1.2.0.BUILD_SNAPSHOT
spark 1.2.0
I would like to include document metadata (specifically the _id field) in the output of a Spark SQL SELECT query. I couldn't find in the Elasticsearch documentation exactly how to do that when using Spark SQL, but I've read that in order to include document metadata fields in the response, one needs to set es.read.metadata=true. After doing that, the following exception is thrown:
The code:
Can you please advise whether this is a bug, or whether I am going about it the wrong way to get the documents' metadata fields in the Spark SQL SELECT query's output? What is the correct way to do that?
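For reference, a minimal sketch of the intended setup once the fix described below landed (this assumes the Spark 1.3+ DataFrame API with elasticsearch-spark's esDF helper; the app name and index name are placeholders, and the metadata column name _metadata is the default set by es.read.metadata.field):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.elasticsearch.spark.sql._ // adds esDF to SQLContext

val conf = new SparkConf()
  .setAppName("es-read-metadata")        // placeholder app name
  .set("es.read.metadata", "true")       // expose _id, _index, _type, ...
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)

// With es.read.metadata=true the connector adds a _metadata column
// (a map of the document's metadata, including _id) to each row.
val df = sqlContext.esDF("myindex/mytype") // placeholder index/type
df.select("_metadata").show()
```

This is a sketch under the stated assumptions, not the reporter's original (elided) code.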
Also, can you please advise where to find the Mailing List for asking questions?
Thanks,
Dmitriy Fingerman