Spark not able to query S3 after adding the jar spark-3.3-spline-agent-bundle_2.12-1.0.1.jar #592
What are the versions of Spark, Scala, and Java that you are using?
Spark 3.3.1
Could you check whether the issue also happens on Java 11 and Java 17? It looks like a common issue with modern JVMs.
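For reference, JVM module-access options of this kind are usually passed to Spark through the driver and executor `extraJavaOptions` settings in spark-defaults.conf. This is only a sketch; the specific `--add-opens` flag shown is the one mentioned later in this thread, and the exact set of flags needed depends on the agent version and JVM:

```
spark.driver.extraJavaOptions   --add-opens=java.base/sun.net.www.protocol.jar=ALL-UNNAMED
spark.executor.extraJavaOptions --add-opens=java.base/sun.net.www.protocol.jar=ALL-UNNAMED
```

Note that as the follow-up below shows, this flag alone did not resolve the issue in this case.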
Got a new exception.
Getting the same exception with this JVM option: --add-opens=java.base/sun.net.www.protocol.jar=ALL-UNNAMED
This was fixed in #579.
@wajda Thanks for the information. It's working now.
I am using a single-node Spark to test with Spline. The Spline servers are running on Docker, set up per https://absaoss.github.io/spline/#step-by-step
Spark config (spark-defaults.conf):
spark.sql.queryExecutionListeners za.co.absa.spline.harvester.listener.SplineQueryExecutionListener
spark.spline.producer.url http://spline-server:9090/producer
Added spark-3.3-spline-agent-bundle_2.12-1.0.1.jar to the $SPARK_HOME/jars path.
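For completeness, the same configuration can also be supplied on the command line instead of spark-defaults.conf. This is a sketch assuming the same producer URL and jar location as above:

```
spark-shell \
  --jars spark-3.3-spline-agent-bundle_2.12-1.0.1.jar \
  --conf spark.sql.queryExecutionListeners=za.co.absa.spline.harvester.listener.SplineQueryExecutionListener \
  --conf spark.spline.producer.url=http://spline-server:9090/producer
```

Passing the settings per invocation like this can help isolate whether the problem is specific to the agent jar or to the cluster-wide defaults.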
Executing:
val df = spark.read.format("csv").option("inferSchema", "True").option("header", "True").option("sep", ",").load("s3a://{Bucket}/{file_name}.csv")
Spark throws an exception.