[Bug] [Spark Translation] scala.collection.immutable.Map$Map1 cannot be cast to scala.collection.immutable.HashMap$HashTrieMap #6504

liunaijie opened this issue Mar 14, 2024 · Fixed by #6552

Search before asking

  • I had searched in the issues and found no similar issues.

What happened

While running an e2e test for a new feature, I hit a row conversion failure:
scala.collection.immutable.Map$Map1 cannot be cast to scala.collection.immutable.HashMap$HashTrieMap
Note that the transform's output row type does not even contain a map field.
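
For context, a minimal sketch of why the cast fails, assuming Scala 2.12 (the version Spark 3.3 is built against): `Map(...)` with four or fewer entries returns one of the compact `Map.Map1`..`Map.Map4` classes, which do not extend `HashMap`, so casting a small map to the `HashMap$HashTrieMap` node class throws at runtime:

```scala
import scala.collection.immutable.HashMap

object MapCastRepro extends App {
  // Scala 2.12: literals with <= 4 entries use the compact Map.MapN
  // classes; only larger maps are backed by HashMap trie nodes.
  val small: Map[String, String] = Map("innerQuery" -> "v")
  println(small.getClass.getName)            // scala.collection.immutable.Map$Map1
  println(small.isInstanceOf[HashMap[_, _]]) // false

  val large = (1 to 10).map(i => s"k$i" -> "v").toMap
  println(large.getClass.getName)            // scala.collection.immutable.HashMap$HashTrieMap

  // The same unchecked pattern as the converter: ClassCastException here
  small.asInstanceOf[HashMap[String, String]]
}
```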

SeaTunnel Version

dev

SeaTunnel Config

env {
  job.mode = "BATCH"
}

source {
  FakeSource {
    result_table_name = "fake"
    row.num = 100
    string.template = ["innerQuery"]
    schema = {
      fields {
        name = "string"
        c_date = "date"
        c_row = {
          c_inner_row = {
            c_inner_int = "int"
            c_inner_string = "string"
            c_inner_timestamp = "timestamp"
            c_map = "map<string, string>"
          }
          c_string = "string"
        }
      }
    }
  }
}

transform {
    Sql {
        source_table_name = "fake"
        result_table_name = "tmp1"
        query = """select c_date,
        c_row.c_string c_string,
        c_row.c_inner_row.c_inner_string c_inner_string,
        c_row.c_inner_row.c_inner_timestamp c_inner_timestamp,
        c_row.c_inner_row.c_map.innerQuery map_val,
        c_row.c_inner_row.c_map.notExistKey map_not_exist_val
        from fake"""
    }
}

sink {
  Console {
    source_table_name = "tmp1"
  }
  Assert {
    source_table_name = "tmp1"
    rules = {
      field_rules = [{
        field_name = "c_date"
        field_type = "date"
        field_value = [
            {rule_type = NOT_NULL}
          ]
        },
        {
          field_name = "c_string"
          field_type = "string"
          field_value = [
            {equals_to = "innerQuery"}
          ]
        },
        {
          field_name = "c_inner_string"
          field_type = "string"
          field_value = [
            {equals_to = "innerQuery"}
          ]
        },
        {
          field_name = "c_inner_timestamp"
          field_type = "timestamp"
          field_value = [
            {rule_type = NOT_NULL}
          ]
        },
        {
          field_name = "map_val"
          field_type = "string"
          field_value = [
            {rule_type = NOT_NULL}
          ]
        },
        {
          field_name = "map_not_exist_val"
          field_type = "null"
          field_value = [
            {rule_type = NULL}
          ]
        }
      ]
    }
  }
}


Running Command

```shell
e2e
```

Error Exception

2024-03-12T00:52:17.8475040Z 2024-03-12 00:52:17,845 ERROR org.apache.seatunnel.e2e.common.container.AbstractTestContainer - Container[tyrantlucifer/spark:3.3.0] command /tmp/seatunnel/bin/start-seatunnel-spark-3-connector-v2.sh --config /tmp/sql_transform/inner_query.conf --master local --deploy-mode client STDERR:
2024-03-12T00:52:17.8476519Z ==================== STDERR start ====================
2024-03-12T00:52:17.8477148Z Warning: Ignoring non-Spark config property: job.mode
2024-03-12T00:52:17.8478832Z ##[error]Exception in thread "main" org.apache.seatunnel.core.starter.exception.CommandExecuteException: Run SeaTunnel on spark failed
2024-03-12T00:52:17.8483837Z 	at org.apache.seatunnel.core.starter.spark.command.SparkTaskExecuteCommand.execute(SparkTaskExecuteCommand.java:62)
2024-03-12T00:52:17.8484798Z 	at org.apache.seatunnel.core.starter.SeaTunnel.run(SeaTunnel.java:40)
2024-03-12T00:52:17.8485559Z 	at org.apache.seatunnel.core.starter.spark.SeaTunnelSpark.main(SeaTunnelSpark.java:35)
2024-03-12T00:52:17.8486278Z 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2024-03-12T00:52:17.8486953Z 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2024-03-12T00:52:17.8487753Z 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2024-03-12T00:52:17.8488410Z 	at java.lang.reflect.Method.invoke(Method.java:498)
2024-03-12T00:52:17.8489032Z 	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
2024-03-12T00:52:17.8489898Z 	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
2024-03-12T00:52:17.8490725Z 	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
2024-03-12T00:52:17.8491394Z 	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
2024-03-12T00:52:17.8492044Z 	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
2024-03-12T00:52:17.8492738Z 	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
2024-03-12T00:52:17.8493649Z 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
2024-03-12T00:52:17.8494269Z 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2024-03-12T00:52:17.8494858Z Caused by: org.apache.spark.SparkException: Writing job aborted
2024-03-12T00:52:17.8495662Z 	at org.apache.spark.sql.errors.QueryExecutionErrors$.writingJobAbortedError(QueryExecutionErrors.scala:749)
2024-03-12T00:52:17.8496729Z 	at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:409)
2024-03-12T00:52:17.8497841Z 	at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2$(WriteToDataSourceV2Exec.scala:353)
2024-03-12T00:52:17.8499098Z 	at org.apache.spark.sql.execution.datasources.v2.AppendDataExec.writeWithV2(WriteToDataSourceV2Exec.scala:244)
2024-03-12T00:52:17.8500204Z 	at org.apache.spark.sql.execution.datasources.v2.V2ExistingTableWriteExec.run(WriteToDataSourceV2Exec.scala:332)
2024-03-12T00:52:17.8501315Z 	at org.apache.spark.sql.execution.datasources.v2.V2ExistingTableWriteExec.run$(WriteToDataSourceV2Exec.scala:331)
2024-03-12T00:52:17.8502368Z 	at org.apache.spark.sql.execution.datasources.v2.AppendDataExec.run(WriteToDataSourceV2Exec.scala:244)
2024-03-12T00:52:17.8503376Z 	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
2024-03-12T00:52:17.8504332Z 	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
2024-03-12T00:52:17.8505293Z 	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
2024-03-12T00:52:17.8506392Z 	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)
2024-03-12T00:52:17.8507521Z 	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:109)
2024-03-12T00:52:17.8508412Z 	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:169)
2024-03-12T00:52:17.8509290Z 	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95)
2024-03-12T00:52:17.8510081Z 	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
2024-03-12T00:52:17.8510842Z 	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
2024-03-12T00:52:17.8511865Z 	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
2024-03-12T00:52:17.8513019Z 	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:94)
2024-03-12T00:52:17.8514030Z 	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:584)
2024-03-12T00:52:17.8514897Z 	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:176)
2024-03-12T00:52:17.8515749Z 	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:584)
2024-03-12T00:52:17.8517007Z 	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
2024-03-12T00:52:17.8518354Z 	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
2024-03-12T00:52:17.8519511Z 	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
2024-03-12T00:52:17.8520640Z 	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
2024-03-12T00:52:17.8521716Z 	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
2024-03-12T00:52:17.8522636Z 	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:560)
2024-03-12T00:52:17.8523506Z 	at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:94)
2024-03-12T00:52:17.8524464Z 	at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:81)
2024-03-12T00:52:17.8525378Z 	at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:79)
2024-03-12T00:52:17.8526288Z 	at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:116)
2024-03-12T00:52:17.8527129Z 	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:860)
2024-03-12T00:52:17.8527872Z 	at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:311)
2024-03-12T00:52:17.8528662Z 	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:247)
2024-03-12T00:52:17.8529570Z 	at org.apache.seatunnel.core.starter.spark.execution.SinkExecuteProcessor.execute(SinkExecuteProcessor.java:155)
2024-03-12T00:52:17.8530637Z 	at org.apache.seatunnel.core.starter.spark.execution.SparkExecution.execute(SparkExecution.java:71)
2024-03-12T00:52:17.8531721Z 	at org.apache.seatunnel.core.starter.spark.command.SparkTaskExecuteCommand.execute(SparkTaskExecuteCommand.java:60)
2024-03-12T00:52:17.8532452Z 	... 14 more
2024-03-12T00:52:17.8535066Z Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (1dbc91db0016 executor driver): org.apache.seatunnel.core.starter.exception.TaskExecuteException: Row convert failed, caused: scala.collection.immutable.Map$Map1 cannot be cast to scala.collection.immutable.HashMap$HashTrieMap
2024-03-12T00:52:17.8537953Z 	at org.apache.seatunnel.core.starter.spark.execution.TransformExecuteProcessor$TransformIterator.next(TransformExecuteProcessor.java:191)
2024-03-12T00:52:17.8539415Z 	at org.apache.seatunnel.core.starter.spark.execution.TransformExecuteProcessor$TransformIterator.next(TransformExecuteProcessor.java:152)
2024-03-12T00:52:17.8540447Z 	at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:46)
2024-03-12T00:52:17.8541096Z 	at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
2024-03-12T00:52:17.8541974Z 	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.processNext(Unknown Source)
2024-03-12T00:52:17.8543000Z 	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
2024-03-12T00:52:17.8543998Z 	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760)
2024-03-12T00:52:17.8545126Z 	at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$1(WriteToDataSourceV2Exec.scala:435)
2024-03-12T00:52:17.8546109Z 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1538)
2024-03-12T00:52:17.8547053Z 	at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:480)
2024-03-12T00:52:17.8548165Z 	at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:381)
2024-03-12T00:52:17.8549070Z 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
2024-03-12T00:52:17.8549652Z 	at org.apache.spark.scheduler.Task.run(Task.scala:136)
2024-03-12T00:52:17.8550274Z 	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
2024-03-12T00:52:17.8550968Z 	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
2024-03-12T00:52:17.8551604Z 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
2024-03-12T00:52:17.8552332Z 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
2024-03-12T00:52:17.8553110Z 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
2024-03-12T00:52:17.8553680Z 	at java.lang.Thread.run(Thread.java:750)
2024-03-12T00:52:17.8554518Z Caused by: java.lang.ClassCastException: scala.collection.immutable.Map$Map1 cannot be cast to scala.collection.immutable.HashMap$HashTrieMap
2024-03-12T00:52:17.8555801Z 	at org.apache.seatunnel.translation.spark.serialization.SeaTunnelRowConverter.reconvert(SeaTunnelRowConverter.java:182)
2024-03-12T00:52:17.8557098Z 	at org.apache.seatunnel.translation.spark.serialization.SeaTunnelRowConverter.createFromGenericRow(SeaTunnelRowConverter.java:194)
2024-03-12T00:52:17.8558379Z 	at org.apache.seatunnel.translation.spark.serialization.SeaTunnelRowConverter.reconvert(SeaTunnelRowConverter.java:166)
2024-03-12T00:52:17.8559665Z 	at org.apache.seatunnel.translation.spark.serialization.SeaTunnelRowConverter.createFromGenericRow(SeaTunnelRowConverter.java:194)
2024-03-12T00:52:17.8561069Z 	at org.apache.seatunnel.translation.spark.serialization.SeaTunnelRowConverter.reconvert(SeaTunnelRowConverter.java:166)
2024-03-12T00:52:17.8562278Z 	at org.apache.seatunnel.translation.spark.serialization.SeaTunnelRowConverter.reconvert(SeaTunnelRowConverter.java:203)
2024-03-12T00:52:17.8563467Z 	at org.apache.seatunnel.translation.spark.serialization.SeaTunnelRowConverter.reconvert(SeaTunnelRowConverter.java:169)
2024-03-12T00:52:17.8564670Z 	at org.apache.seatunnel.translation.spark.serialization.SeaTunnelRowConverter.reconvert(SeaTunnelRowConverter.java:156)
2024-03-12T00:52:17.8565943Z 	at org.apache.seatunnel.core.starter.spark.execution.TransformExecuteProcessor$TransformIterator.next(TransformExecuteProcessor.java:182)
2024-03-12T00:52:17.8566771Z 	... 18 more
2024-03-12T00:52:17.8566913Z 
2024-03-12T00:52:17.8567018Z Driver stacktrace:
2024-03-12T00:52:17.8567585Z 	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2672)
2024-03-12T00:52:17.8568469Z 	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2608)
2024-03-12T00:52:17.8569650Z 	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2607)
2024-03-12T00:52:17.8570955Z 	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
2024-03-12T00:52:17.8572197Z 	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
2024-03-12T00:52:17.8573532Z 	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
2024-03-12T00:52:17.8574731Z 	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2607)
2024-03-12T00:52:17.8575857Z 	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1182)
2024-03-12T00:52:17.8577120Z 	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1182)
2024-03-12T00:52:17.8577959Z 	at scala.Option.foreach(Option.scala:407)
2024-03-12T00:52:17.8578693Z 	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1182)
2024-03-12T00:52:17.8579609Z 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2860)
2024-03-12T00:52:17.8580535Z 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2802)
2024-03-12T00:52:17.8581448Z 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2791)
2024-03-12T00:52:17.8582303Z 	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
2024-03-12T00:52:17.8583425Z 	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:952)
2024-03-12T00:52:17.8584476Z 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2228)
2024-03-12T00:52:17.8585942Z 	at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:377)
2024-03-12T00:52:17.8587182Z 	... 49 more
2024-03-12T00:52:17.8588752Z Caused by: org.apache.seatunnel.core.starter.exception.TaskExecuteException: Row convert failed, caused: scala.collection.immutable.Map$Map1 cannot be cast to scala.collection.immutable.HashMap$HashTrieMap
2024-03-12T00:52:17.8590799Z 	at org.apache.seatunnel.core.starter.spark.execution.TransformExecuteProcessor$TransformIterator.next(TransformExecuteProcessor.java:191)
2024-03-12T00:52:17.8592347Z 	at org.apache.seatunnel.core.starter.spark.execution.TransformExecuteProcessor$TransformIterator.next(TransformExecuteProcessor.java:152)
2024-03-12T00:52:17.8593766Z 	at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:46)
2024-03-12T00:52:17.8594641Z 	at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
2024-03-12T00:52:17.8596343Z 	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.processNext(Unknown Source)
2024-03-12T00:52:17.8597408Z 	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
2024-03-12T00:52:17.8598347Z 	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760)
2024-03-12T00:52:17.8600309Z 	at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$1(WriteToDataSourceV2Exec.scala:435)
2024-03-12T00:52:17.8602296Z 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1538)
2024-03-12T00:52:17.8603397Z 	at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:480)
2024-03-12T00:52:17.8605222Z 	at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:381)
2024-03-12T00:52:17.8606741Z 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
2024-03-12T00:52:17.8607721Z 	at org.apache.spark.scheduler.Task.run(Task.scala:136)
2024-03-12T00:52:17.8608508Z 	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
2024-03-12T00:52:17.8609393Z 	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
2024-03-12T00:52:17.8610166Z 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
2024-03-12T00:52:17.8611078Z 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
2024-03-12T00:52:17.8611853Z 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
2024-03-12T00:52:17.8612420Z 	at java.lang.Thread.run(Thread.java:750)
2024-03-12T00:52:17.8614270Z Caused by: java.lang.ClassCastException: scala.collection.immutable.Map$Map1 cannot be cast to scala.collection.immutable.HashMap$HashTrieMap
2024-03-12T00:52:17.8615594Z 	at org.apache.seatunnel.translation.spark.serialization.SeaTunnelRowConverter.reconvert(SeaTunnelRowConverter.java:182)
2024-03-12T00:52:17.8617025Z 	at org.apache.seatunnel.translation.spark.serialization.SeaTunnelRowConverter.createFromGenericRow(SeaTunnelRowConverter.java:194)
2024-03-12T00:52:17.8618409Z 	at org.apache.seatunnel.translation.spark.serialization.SeaTunnelRowConverter.reconvert(SeaTunnelRowConverter.java:166)
2024-03-12T00:52:17.8619708Z 	at org.apache.seatunnel.translation.spark.serialization.SeaTunnelRowConverter.createFromGenericRow(SeaTunnelRowConverter.java:194)
2024-03-12T00:52:17.8620997Z 	at org.apache.seatunnel.translation.spark.serialization.SeaTunnelRowConverter.reconvert(SeaTunnelRowConverter.java:166)
2024-03-12T00:52:17.8622207Z 	at org.apache.seatunnel.translation.spark.serialization.SeaTunnelRowConverter.reconvert(SeaTunnelRowConverter.java:203)
2024-03-12T00:52:17.8623423Z 	at org.apache.seatunnel.translation.spark.serialization.SeaTunnelRowConverter.reconvert(SeaTunnelRowConverter.java:169)
2024-03-12T00:52:17.8624651Z 	at org.apache.seatunnel.translation.spark.serialization.SeaTunnelRowConverter.reconvert(SeaTunnelRowConverter.java:156)
2024-03-12T00:52:17.8625933Z 	at org.apache.seatunnel.core.starter.spark.execution.TransformExecuteProcessor$TransformIterator.next(TransformExecuteProcessor.java:182)
2024-03-12T00:52:17.8626761Z 	... 18 more
2024-03-12T00:52:17.8626922Z 
2024-03-12T00:52:17.8627151Z ==================== STDERR end   ====================
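
The nested reconvert/createFromGenericRow frames above mirror the schema's nesting (row → inner row → map), and the failure happens when the innermost map is reconverted at SeaTunnelRowConverter.java:182. A minimal sketch of the direction a fix could take (my illustration only, not necessarily what #6552 does): match on the scala.collection.Map trait and copy entries, instead of casting to the concrete HashMap$HashTrieMap node class:

```scala
import java.util.{LinkedHashMap => JLinkedHashMap}

// Hypothetical helper: accepts any Scala map implementation
// (Map.Map1, HashMap.HashTrieMap, ...) rather than one concrete class.
def reconvertMap(value: AnyRef): JLinkedHashMap[AnyRef, AnyRef] = {
  val out = new JLinkedHashMap[AnyRef, AnyRef]()
  value.asInstanceOf[scala.collection.Map[AnyRef, AnyRef]].foreach {
    case (k, v) => out.put(k, v)
  }
  out
}
```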

Zeta or Flink or Spark Version

No response

Java or Scala Version

No response

Screenshots

https://productionresultssa14.blob.core.windows.net/actions-results/f4b0ae05-af44-4261-9d26-5c580da07424/workflow-job-run-8ea7adc3-2928-51c7-be22-ae070f910ac1/logs/job/job-logs.txt?rsct=text%2Fplain&se=2024-03-14T04%3A08%3A17Z&sig=l03NcLlmGAPq0MGARzURtW7BP2thnPwOjWZ543qA%2FNE%3D&sp=r&spr=https&sr=b&st=2024-03-14T03%3A58%3A12Z&sv=2021-12-02

Are you willing to submit PR?

  • Yes I am willing to submit a PR!

Code of Conduct

  • I agree to follow this project's Code of Conduct