Job aborted due to stage failure #11

Closed
witwall opened this issue Jul 25, 2016 · 3 comments

witwall commented Jul 25, 2016

I tried to load random.sas7bdat (from the resources) with both Scala and SparkR, and got the same error when I tried to print the row count of the DataFrame (df.count in Scala, nrow(df) in SparkR).

The Spark version is 1.6.2.

> sqlContext <- sparkRSQL.init(sc)
> df <- loadDF(sqlContext,"e:/temp/random.sas7bdat", "com.github.saurfang.sas.spark")
> cache(df)
DataFrame[x:double, f:double]
> printSchema(df)
root
 |-- x: double (nullable = true)
 |-- f: double (nullable = true)
> nrow(df)
Error in invokeJava(isStatic = FALSE, objId$id, methodName, ...) : 
  org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.NullPointerException
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:482)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
    at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:873)
    at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:853)
    at org.apache.spark.util.Utils$.fetchFile(Utils.scala:407)
    at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:430)
    at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:422)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLi
@saurfang (Owner)

I could be wrong, but from the limited stack trace you were able to provide I am not sure the problem lies with this library. Have you been able to run any SparkSQL example in SparkR already? (i.e., a SparkR command that actually triggers SparkSQL execution)
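For reference, a minimal sanity check along those lines (a sketch, assuming the Spark 1.6 SparkR API and using R's built-in faithful data set) could be:

# A SparkSQL round trip that does not involve spark-sas7bdat at all.
# If this also fails, the problem is in the Spark/Hadoop setup rather than in this package.
df <- createDataFrame(sqlContext, faithful)
registerTempTable(df, "faithful")
res <- sql(sqlContext, "SELECT count(1) FROM faithful")
collect(res)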

witwall commented Jul 27, 2016

It works now, but I ran into another issue:

Sys.setenv(SPARK_HOME="E:/jobs/spark/spark")
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
library(SparkR)
packages=c("saurfang:spark-sas7bdat:1.1.4-s_2.10","com.databricks:spark-csv_2.10:1.4.0")
sc <- sparkR.init(master="local[*]",sparkPackages=packages)
sqlContext <- sparkRSQL.init(sc)
df <- loadDF(sqlContext,"Y:/of_trans.sas7bdat", "com.github.saurfang.sas.spark")
registerTempTable(df,"trans")
cnt=sql(sqlContext,"select count(1) from trans")
collect(cnt)

If the SAS file is somewhat bigger, for example 152 MB, collect(cnt) hangs (sometimes the first run is fine, but the second run hangs).

It makes no difference whether I run it on a PC or server with 8 GB, 16 GB, or 128 GB of memory.
Spark is 1.6.2, and spark-sas7bdat is the latest version.

I also tried spark-shell, with the same result.
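One thing that may be worth ruling out (my own assumption, not something established in this thread): the machine's total RAM does not matter unless Spark is told to use it, since spark.driver.memory defaults to 1g and in local mode the work runs inside the driver JVM. If I remember the 1.6 API right, it can only be set before the context is created, roughly like this:

# Sketch: start SparkR with more driver memory; the 4g value is only an example.
# sparkEnvir passes driver properties when the JVM is launched from R (Spark 1.6).
sc <- sparkR.init(master = "local[*]",
                  sparkEnvir = list(spark.driver.memory = "4g"),
                  sparkPackages = packages)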

witwall commented Jul 27, 2016

I notice that it may not be a problem with your spark-sas7bdat but with Spark itself, because I tried to load a big table from PostgreSQL and got the same issue.
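For reference, a JDBC load of that kind in SparkR 1.6 looks roughly like the following (a sketch with placeholder connection details, not the exact code I ran; the PostgreSQL JDBC driver also needs to be on the classpath, e.g. via sparkJars):

# Sketch with made-up connection details; only to show the shape of the test.
pg <- read.df(sqlContext, source = "jdbc",
              url = "jdbc:postgresql://dbhost:5432/mydb?user=me&password=secret",
              dbtable = "big_table")
registerTempTable(pg, "big_table")
collect(sql(sqlContext, "select count(1) from big_table"))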
