Compiling 10 source files to /Users/eddie/Projects/Software/geni/target/classes
Note: /Users/eddie/Projects/Software/geni/src/java/geni/rdd/function/PairFn.java uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/09/21 00:34:36 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/09/21 00:34:37 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
20/09/21 00:34:48 WARN JdbcUtils: Requested isolation level 1 is not supported; falling back to default isolation level 8
20/09/21 00:34:52 WARN LibSVMFileFormat: 'numFeatures' option not specified, determining the number of features by going though the input. If you know the number in advance, please specify it via 'numFeatures' option to avoid the extra scan.
20/09/21 00:34:52 WARN JdbcUtils: Requested isolation level 1 is not supported; falling back to default isolation level 8
WARNING: cat already refers to: #'clojure.core/cat in namespace: net.cgrand.parsley.fold, being replaced by: #'net.cgrand.parsley.fold/cat
geni-repl (user) λ exit
geni-repl (user) λ exit
20/09/21 00:34:57 WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
20/09/21 00:34:57 WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
20/09/21 00:34:58 WARN LAPACK: Failed to load implementation from: com.github.fommil.netlib.NativeSystemLAPACK
20/09/21 00:34:58 WARN LAPACK: Failed to load implementation from: com.github.fommil.netlib.NativeRefLAPACK
20/09/21 00:35:07 WARN TaskSetManager: Stage 326 contains a task of very large size (2393 KiB). The maximum recommended task size is 1000 KiB.
20/09/21 00:35:07 WARN TaskSetManager: Stage 328 contains a task of very large size (2393 KiB). The maximum recommended task size is 1000 KiB.
20/09/21 00:35:07 WARN DAGScheduler: Broadcasting large task binary with size 9.3 MiB
20/09/21 00:35:08 WARN DAGScheduler: Broadcasting large task binary with size 9.3 MiB
20/09/21 00:35:09 WARN PrefixSpan: Input data is not cached.
20/09/21 00:35:10 WARN FPGrowth: Input data is not cached.

FAIL On JavaSparkContext methods - expected static fields at (rdd_test.clj:56)
Actual result did not agree with the checking function.
Actual result: "/Applications/spark/spark-3.0.0-bin-hadoop2.7"
Checking function: nil?
20/09/21 00:36:12 ERROR Executor: Exception in task 1.0 in stage 492.0 (TID 1407)
java.io.FileNotFoundException: /var/folders/zl/hlg5bvnj1y9753gvjsf0_86r0000gn/T/torch-shm-file-R1D8c (Operation not supported on socket)
    at java.io.FileInputStream.open0(Native Method)
    at java.io.FileInputStream.open(FileInputStream.java:195)
    at java.io.FileInputStream.<init>(FileInputStream.java:138)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileInputStream.<init>(RawLocalFileSystem.java:106)
    at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:202)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:143)
    at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:346)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769)
    at org.apache.spark.input.WholeTextFileRecordReader.nextKeyValue(WholeTextFileRecordReader.scala:75)
    at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.nextKeyValue(CombineFileRecordReader.java:65)
    at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:247)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
    at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1227)
    at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1227)
    at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2139)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:127)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/09/21 00:36:12 WARN TaskSetManager: Lost task 1.0 in stage 492.0 (TID 1407, 192.168.1.10, executor driver): java.io.FileNotFoundException: /var/folders/zl/hlg5bvnj1y9753gvjsf0_86r0000gn/T/torch-shm-file-R1D8c (Operation not supported on socket)
    at java.io.FileInputStream.open0(Native Method)
    at java.io.FileInputStream.open(FileInputStream.java:195)
    at java.io.FileInputStream.<init>(FileInputStream.java:138)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileInputStream.<init>(RawLocalFileSystem.java:106)
    at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:202)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:143)
    at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:346)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769)
    at org.apache.spark.input.WholeTextFileRecordReader.nextKeyValue(WholeTextFileRecordReader.scala:75)
    at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.nextKeyValue(CombineFileRecordReader.java:65)
    at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:247)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
    at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1227)
    at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1227)
    at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2139)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:127)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/09/21 00:36:12 ERROR TaskSetManager: Task 1 in stage 492.0 failed 1 times; aborting job
20/09/21 00:36:12 WARN TaskSetManager: Lost task 0.0 in stage 492.0 (TID 1406, 192.168.1.10, executor driver): TaskKilled (Stage cancelled)

FAIL On basic RDD saving and loading - save-as-text-file works at (rdd_test.clj:250)
The checking function `pos?` threw the exception:
#<java.lang.ClassCastException@57e89125 java.lang.ClassCastException: midje.util.exceptions.CapturedThrowable cannot be cast to java.lang.Number>
When checked against the actual result:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 492.0 failed 1 times, most recent failure: Lost task 1.0 in stage 492.0 (TID 1407, 192.168.1.10, executor driver): java.io.FileNotFoundException: /var/folders/zl/hlg5bvnj1y9753gvjsf0_86r0000gn/T/torch-shm-file-R1D8c (Operation not supported on socket)
    at java.io.FileInputStream.open0(Native Method)
    at java.io.FileInputStream.open(FileInputStream.java:195)
    at java.io.FileInputStream.<init>(FileInputStream.java:138)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileInputStream.<init>(RawLocalFileSystem.java:106)
    at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:202)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:143)
    at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:346)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769)
    at org.apache.spark.input.WholeTextFileRecordReader.nextKeyValue(WholeTextFileRecordReader.scala:75)
    at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.nextKeyValue(CombineFileRecordReader.java:65)
    at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:247)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
    at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1227)
    at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1227)
    at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2139)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:127)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
    org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2059)
    org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2008)
    org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2007)
    scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2007)
    org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:973)
    org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:973)
    scala.Option.foreach(Option.scala:407)
    org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:973)
    org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2239)
    org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2188)
    org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2177)
    org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:775)
    org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
    org.apache.spark.SparkContext.runJob(SparkContext.scala:2120)
    org.apache.spark.SparkContext.runJob(SparkContext.scala:2139)
    org.apache.spark.SparkContext.runJob(SparkContext.scala:2164)
    org.apache.spark.rdd.RDD.count(RDD.scala:1227)
    org.apache.spark.api.java.JavaRDDLike.count(JavaRDDLike.scala:455)
    org.apache.spark.api.java.JavaRDDLike.count$(JavaRDDLike.scala:455)
    org.apache.spark.api.java.AbstractJavaRDDLike.count(JavaRDDLike.scala:45)
    zero_one.geni.rdd$count.invokeStatic(rdd.clj:278)
    zero_one.geni.rdd$count.invoke(rdd.clj:277)
    zero_one.geni.rdd_test$eval14403$fn__14404$fn__14405$fn__14414$fn__14415$fn__14420.invoke(rdd_test.clj:250)
    zero_one.geni.rdd_test$eval14403$fn__14404$fn__14405$fn__14414$fn__14415.invoke(rdd_test.clj:234)
    zero_one.geni.rdd_test$eval14403$fn__14404$fn__14405$fn__14414.invoke(rdd_test.clj:234)
    zero_one.geni.rdd_test$eval14403$fn__14404$fn__14405.invoke(rdd_test.clj:234)
    zero_one.geni.rdd_test$eval14403$fn__14404.invoke(rdd_test.clj:234)
    zero_one.geni.rdd_test$eval14403.invokeStatic(rdd_test.clj:234)
    zero_one.geni.rdd_test$eval14403.invoke(rdd_test.clj:234)
Caused by: java.io.FileNotFoundException: /var/folders/zl/hlg5bvnj1y9753gvjsf0_86r0000gn/T/torch-shm-file-R1D8c (Operation not supported on socket)
    org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileInputStream.<init>(RawLocalFileSystem.java:106)
    org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:202)
    org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:143)
    org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:346)
    org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769)
    org.apache.spark.input.WholeTextFileRecordReader.nextKeyValue(WholeTextFileRecordReader.scala:75)
    org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.nextKeyValue(CombineFileRecordReader.java:65)
    org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:247)
    org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
    org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1227)
    org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1227)
    org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2139)
    org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    org.apache.spark.scheduler.Task.run(Task.scala:127)
    org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
    org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)

20/09/21 00:37:10 ERROR Executor: Exception in task 1.0 in stage 493.0 (TID 1409)
java.io.FileNotFoundException: /var/folders/zl/hlg5bvnj1y9753gvjsf0_86r0000gn/T/torch-shm-file-R1D8c (Operation not supported on socket)
    at java.io.FileInputStream.open0(Native Method)
    at java.io.FileInputStream.open(FileInputStream.java:195)
    at java.io.FileInputStream.<init>(FileInputStream.java:138)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileInputStream.<init>(RawLocalFileSystem.java:106)
    at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:202)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:143)
    at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:346)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769)
    at org.apache.spark.input.WholeTextFileRecordReader.nextKeyValue(WholeTextFileRecordReader.scala:75)
    at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.nextKeyValue(CombineFileRecordReader.java:65)
    at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:247)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
    at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1227)
    at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1227)
    at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2139)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:127)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/09/21 00:37:10 WARN TaskSetManager: Lost task 1.0 in stage 493.0 (TID 1409, 192.168.1.10, executor driver): java.io.FileNotFoundException: /var/folders/zl/hlg5bvnj1y9753gvjsf0_86r0000gn/T/torch-shm-file-R1D8c (Operation not supported on socket)
    at java.io.FileInputStream.open0(Native Method)
    at java.io.FileInputStream.open(FileInputStream.java:195)
    at java.io.FileInputStream.<init>(FileInputStream.java:138)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileInputStream.<init>(RawLocalFileSystem.java:106)
    at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:202)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:143)
    at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:346)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769)
    at org.apache.spark.input.WholeTextFileRecordReader.nextKeyValue(WholeTextFileRecordReader.scala:75)
    at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.nextKeyValue(CombineFileRecordReader.java:65)
    at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:247)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
    at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1227)
    at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1227)
    at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2139)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:127)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
20/09/21 00:37:10 ERROR TaskSetManager: Task 1 in stage 493.0 failed 1 times; aborting job

FAIL On basic RDD saving and loading - save-as-text-file works at (rdd_test.clj:251)
The checking function `(fn* [p1__14402#] (< 1 p1__14402#))` threw the exception:
#<java.lang.ClassCastException@13305ac2 java.lang.ClassCastException: midje.util.exceptions.CapturedThrowable cannot be cast to java.lang.Number>
When checked against the actual result:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 493.0 failed 1 times, most recent failure: Lost task 1.0 in stage 493.0 (TID 1409, 192.168.1.10, executor driver): java.io.FileNotFoundException: /var/folders/zl/hlg5bvnj1y9753gvjsf0_86r0000gn/T/torch-shm-file-R1D8c (Operation not supported on socket)
    at java.io.FileInputStream.open0(Native Method)
    at java.io.FileInputStream.open(FileInputStream.java:195)
    at java.io.FileInputStream.<init>(FileInputStream.java:138)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileInputStream.<init>(RawLocalFileSystem.java:106)
    at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:202)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:143)
    at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:346)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769)
    at org.apache.spark.input.WholeTextFileRecordReader.nextKeyValue(WholeTextFileRecordReader.scala:75)
    at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.nextKeyValue(CombineFileRecordReader.java:65)
    at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:247)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
    at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1227)
    at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1227)
    at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2139)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:127)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
    org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2059)
    org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2008)
    org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2007)
    scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2007)
    org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:973)
    org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:973)
    scala.Option.foreach(Option.scala:407)
    org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:973)
    org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2239)
    org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2188)
    org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2177)
    org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:775)
    org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
    org.apache.spark.SparkContext.runJob(SparkContext.scala:2120)
    org.apache.spark.SparkContext.runJob(SparkContext.scala:2139)
    org.apache.spark.SparkContext.runJob(SparkContext.scala:2164)
    org.apache.spark.rdd.RDD.count(RDD.scala:1227)
    org.apache.spark.api.java.JavaRDDLike.count(JavaRDDLike.scala:455)
    org.apache.spark.api.java.JavaRDDLike.count$(JavaRDDLike.scala:455)
    org.apache.spark.api.java.AbstractJavaRDDLike.count(JavaRDDLike.scala:45)
    zero_one.geni.rdd$count.invokeStatic(rdd.clj:278)
    zero_one.geni.rdd$count.invoke(rdd.clj:277)
    zero_one.geni.rdd_test$eval14403$fn__14404$fn__14405$fn__14414$fn__14415$fn__14422.invoke(rdd_test.clj:251)
    zero_one.geni.rdd_test$eval14403$fn__14404$fn__14405$fn__14414$fn__14415.invoke(rdd_test.clj:234)
    zero_one.geni.rdd_test$eval14403$fn__14404$fn__14405$fn__14414.invoke(rdd_test.clj:234)
    zero_one.geni.rdd_test$eval14403$fn__14404$fn__14405.invoke(rdd_test.clj:234)
    zero_one.geni.rdd_test$eval14403$fn__14404.invoke(rdd_test.clj:234)
    zero_one.geni.rdd_test$eval14403.invokeStatic(rdd_test.clj:234)
    zero_one.geni.rdd_test$eval14403.invoke(rdd_test.clj:234)
Caused by: java.io.FileNotFoundException: /var/folders/zl/hlg5bvnj1y9753gvjsf0_86r0000gn/T/torch-shm-file-R1D8c (Operation not supported on socket)
    org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileInputStream.<init>(RawLocalFileSystem.java:106)
    org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:202)
    org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:143)
    org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:346)
    org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769)
    org.apache.spark.input.WholeTextFileRecordReader.nextKeyValue(WholeTextFileRecordReader.scala:75)
    org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.nextKeyValue(CombineFileRecordReader.java:65)
    org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:247)
    org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1804)
    org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1227)
    org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1227)
    org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2139)
    org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    org.apache.spark.scheduler.Task.run(Task.scala:127)
    org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
    org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
20/09/21 00:37:10 WARN TaskSetManager: Lost task 0.0 in stage 493.0 (TID 1408, 192.168.1.10, executor driver): TaskKilled (Stage cancelled)
20/09/21 00:37:13 WARN package: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.sql.debug.maxToStringFields'.
20/09/21 00:37:13 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
20/09/21 00:37:13 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
20/09/21 00:37:13 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
20/09/21 00:37:13 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
20/09/21 00:37:14 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
20/09/21 00:37:14 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
20/09/21 00:37:14 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
20/09/21 00:37:14 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
20/09/21 00:37:15 WARN Column: Constructing trivially true equals predicate, '1 <=> 1'. Perhaps you need to use aliases.
20/09/21 00:37:16 WARN Column: Constructing trivially true equals predicate, '1 = 1'. Perhaps you need to use aliases.

FAIL On time functions - correct time arithmetic at (sql_functions_test.clj:619)
Actual result did not agree with the checking function.
Actual result: "1969-12-31 19:00:01"
Checking function: (fn* [p1__15778#] (.contains p1__15778# "1970-01-01 "))

FAIL On time functions - correct time arithmetic at (sql_functions_test.clj:623)
Actual result did not agree with the checking function.
Actual result: "1969/12/31 19:00"
Checking function: (fn* [p1__15779#] (.contains p1__15779# "1970/01/1 "))

20/09/21 00:37:34 WARN HintErrorLogger: Unrecognized hint: myHint(100, true)
20/09/21 00:37:40 WARN CacheManager: Asked to cache already cached data.
20/09/21 00:37:40 WARN CacheManager: Asked to cache already cached data.
20/09/21 00:37:40 WARN CacheManager: Asked to cache already cached data.
20/09/21 00:37:40 WARN RandomBlockReplicationPolicy: Expecting 1 replicas with only 0 peer/s.
20/09/21 00:37:40 WARN BlockManager: Block rdd_3477_0 replicated to only 0 peer(s) instead of 1 peers
nil
FAILURE: 5 checks failed. (But 823 succeeded.)
Subprocess failed (exit code: 5)