Unable to use S3 File as mainApplicationFile #996

Closed
batCoder95 opened this issue Aug 11, 2020 · 5 comments

@batCoder95

Hi,

I am trying to install this operator (using the gcr.io/spark-operator/spark-py:v3.0.0 image) on my EKS cluster and then run a simple PySpark file that resides in my S3 bucket. I went through some of the documentation and found out that we need to set Spark configurations in the YAML file in order to enable the S3 file system. Therefore, I configured them, and this is what my YAML file looks like now:

apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: pyspark-pi
  namespace: default
spec:
  type: Python
  pythonVersion: "3"
  mode: cluster
  image: "gcr.io/spark-operator/spark-py:v3.0.0"
  imagePullPolicy: Always
  mainApplicationFile: s3a://myBucket/input/appFile.py
  sparkVersion: "3.0.0"
  sparkConf:
    "spark.jars.packages": "com.amazonaws:aws-java-sdk-pom:1.11.271,org.apache.hadoop:hadoop-aws:3.1.0"
    "spark.hadoop.fs.s3a.impl": "org.apache.hadoop.fs.s3a.S3AFileSystem"
    "spark.hadoop.fs.s3a.access.key": "<access-key>"
    "spark.hadoop.fs.s3a.secret.key": "<secret-key>"

But now that I am deploying this YAML file, I am running into the following issue in the driver pod:

Exception in thread "main" java.io.FileNotFoundException: /opt/spark/.ivy2/cache/resolved-org.apache.spark-spark-submit-parent-d3d506ae-d79f-45f6-b459-cfa5dc649610-1.0.xml (No such file or directory)
    at java.io.FileOutputStream.open0(Native Method)
    at java.io.FileOutputStream.open(FileOutputStream.java:270)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
    at org.apache.ivy.plugins.parser.xml.XmlModuleDescriptorWriter.write(XmlModuleDescriptorWriter.java:70)
    at org.apache.ivy.plugins.parser.xml.XmlModuleDescriptorWriter.write(XmlModuleDescriptorWriter.java:62)
    at org.apache.ivy.core.module.descriptor.DefaultModuleDescriptor.toIvyFile(DefaultModuleDescriptor.java:563)
    at org.apache.ivy.core.cache.DefaultResolutionCacheManager.saveResolvedModuleDescriptor(DefaultResolutionCacheManager.java:176)
    at org.apache.ivy.core.resolve.ResolveEngine.resolve(ResolveEngine.java:245)
    at org.apache.ivy.Ivy.resolve(Ivy.java:523)
    at org.apache.spark.deploy.SparkSubmitUtils$.resolveMavenCoordinates(SparkSubmit.scala:1387)
    at org.apache.spark.deploy.DependencyUtils$.resolveMavenDependencies(DependencyUtils.scala:54)
    at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:308)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:871)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

I would really appreciate it if someone could suggest how to fix this issue.

Thanks in advance :)

@batCoder95
Author

Hi all,

I was able to get this working by adding an extra sparkConf entry: "spark.driver.extraJavaOptions": "-Divy.cache.dir=/tmp -Divy.home=/tmp"
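
For context, that entry just goes alongside the other keys in the sparkConf block of the manifest above, roughly like this (other keys unchanged):

  sparkConf:
    "spark.jars.packages": "com.amazonaws:aws-java-sdk-pom:1.11.271,org.apache.hadoop:hadoop-aws:3.1.0"
    "spark.hadoop.fs.s3a.impl": "org.apache.hadoop.fs.s3a.S3AFileSystem"
    "spark.driver.extraJavaOptions": "-Divy.cache.dir=/tmp -Divy.home=/tmp"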

Now my PySpark file is being submitted. But in the PySpark file, I am reading data from S3 and writing it back to S3 at a different path. That part is now failing with the error: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found.

In the PySpark file, I am setting the following configuration before reading the data:

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("abcd") \
    .config("spark.hadoop.fs.s3a.aws.credentials.provider", "com.amazonaws.auth.InstanceProfileCredentialsProvider") \
    .config("spark.hadoop.fs.s3a.path.style.access", "true") \
    .getOrCreate()

# Hadoop-level S3A settings applied on the underlying JVM configuration
spark._jsc.hadoopConfiguration().set("com.amazonaws.services.s3.enableV4", "true")
spark._jsc.hadoopConfiguration().set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
spark._jsc.hadoopConfiguration().set("fs.s3a.aws.credentials.provider", "com.amazonaws.auth.InstanceProfileCredentialsProvider,com.amazonaws.auth.DefaultAWSCredentialsProviderChain")
spark._jsc.hadoopConfiguration().set("fs.AbstractFileSystem.s3a.impl", "org.apache.hadoop.fs.s3a.S3A")

Can somebody please tell me if I'm missing something? I appreciate all the help :)

@batCoder95
Author

Closing this as the issue is solved

@dukkune1 commented Oct 3, 2020

@batCoder95 how did you get this issue solved? I am facing the same problem. Can you post the final YAML settings you used?

@zhaohc10 commented Oct 8, 2020

I am facing the same issue; I would very much appreciate it if you could share your YAML settings.

@vvavepacket

@dukkune1 @zhaohc10

Add this to your YAML:

  sparkConf:
    spark.driver.extraJavaOptions: "-Divy.cache.dir=/tmp -Divy.home=/tmp"
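
For completeness, here is roughly how the full manifest from the original post looks with that entry added (the bucket, keys, and application file path are just the placeholders from above; I haven't verified this exact file end to end):

apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: pyspark-pi
  namespace: default
spec:
  type: Python
  pythonVersion: "3"
  mode: cluster
  image: "gcr.io/spark-operator/spark-py:v3.0.0"
  imagePullPolicy: Always
  mainApplicationFile: s3a://myBucket/input/appFile.py
  sparkVersion: "3.0.0"
  sparkConf:
    "spark.jars.packages": "com.amazonaws:aws-java-sdk-pom:1.11.271,org.apache.hadoop:hadoop-aws:3.1.0"
    "spark.hadoop.fs.s3a.impl": "org.apache.hadoop.fs.s3a.S3AFileSystem"
    "spark.hadoop.fs.s3a.access.key": "<access-key>"
    "spark.hadoop.fs.s3a.secret.key": "<secret-key>"
    "spark.driver.extraJavaOptions": "-Divy.cache.dir=/tmp -Divy.home=/tmp"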
