[Bug]: Querying a timestamp-type column with Spark fails #978

Closed · 1 task done
shendanfengg opened this issue Jan 5, 2023 · 3 comments
Labels
module:core Core module module:mixed-spark Spark module for Mixed Format type:bug Something isn't working
Milestone
Release 0.4.1
Comments

@shendanfengg (Contributor)

What happened?

When Flink writes a timestamp column to the change store, Spark throws an exception when reading the data with merge-on-read (MOR). Judging from the stack trace, the Parquet files contain an INT96-encoded timestamp dictionary, which Iceberg's ParquetDictionaryRowGroupFilter cannot decode.

Affects Versions

0.3.x

What engines are you seeing the problem on?

Spark

How to reproduce

  1. Create a new table with a timestamp column.
  2. Write data to the table with Flink.
  3. Read the table with Spark, filtering on the timestamp column (see the sketch after this list).
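
A minimal SQL sketch of these steps, assuming a hypothetical primary-key table db.sample with a timestamp column op_time (the names and DDL details are illustrative, not taken from the report):

    -- Step 1 (Spark SQL): create a table with a timestamp column.
    -- Table and column names are hypothetical.
    CREATE TABLE db.sample (
      id BIGINT,
      op_time TIMESTAMP,
      PRIMARY KEY (id)
    ) USING arctic;

    -- Step 2 (Flink SQL): write a row into the table's change store.
    INSERT INTO db.sample VALUES (1, TIMESTAMP '2023-01-04 19:00:00');

    -- Step 3 (Spark SQL): read with a '>' predicate on the timestamp
    -- column; this is the query shape the stack trace below points at.
    SELECT * FROM db.sample WHERE op_time > TIMESTAMP '2023-01-01 00:00:00';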

Relevant log output

2023-01-04 19:37:21,480 java.sql.SQLException: org.apache.kyuubi.KyuubiSQLException: org.apache.kyuubi.KyuubiSQLException: Error operating ExecuteStatement: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 6.0 failed 4 times, most recent failure: Lost task 0.3 in stage 6.0 (TID 18) (10.113.224.42 executor 17): java.lang.IllegalArgumentException: Cannot decode dictionary of type: INT96
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetDictionaryRowGroupFilter$EvalVisitor.dict(ParquetDictionaryRowGroupFilter.java:423)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetDictionaryRowGroupFilter$EvalVisitor.gt(ParquetDictionaryRowGroupFilter.java:243)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetDictionaryRowGroupFilter$EvalVisitor.gt(ParquetDictionaryRowGroupFilter.java:81)
	at com.netease.arctic.shade.org.apache.iceberg.expressions.ExpressionVisitors$BoundExpressionVisitor.predicate(ExpressionVisitors.java:135)
	at com.netease.arctic.shade.org.apache.iceberg.expressions.ExpressionVisitors.visitEvaluator(ExpressionVisitors.java:346)
	at com.netease.arctic.shade.org.apache.iceberg.expressions.ExpressionVisitors.visitEvaluator(ExpressionVisitors.java:361)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetDictionaryRowGroupFilter$EvalVisitor.eval(ParquetDictionaryRowGroupFilter.java:117)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetDictionaryRowGroupFilter$EvalVisitor.access$100(ParquetDictionaryRowGroupFilter.java:81)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetDictionaryRowGroupFilter.shouldRead(ParquetDictionaryRowGroupFilter.java:75)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ReadConf.<init>(ReadConf.java:109)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetReader.init(ParquetReader.java:66)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetReader.iterator(ParquetReader.java:77)
	at com.netease.arctic.shade.org.apache.iceberg.io.CloseableIterable$ConcatCloseableIterable$ConcatCloseableIterator.<init>(CloseableIterable.java:152)
	at com.netease.arctic.shade.org.apache.iceberg.io.CloseableIterable$ConcatCloseableIterable$ConcatCloseableIterator.<init>(CloseableIterable.java:143)
	at com.netease.arctic.shade.org.apache.iceberg.io.CloseableIterable$ConcatCloseableIterable.iterator(CloseableIterable.java:138)
	at com.netease.arctic.table.TableMetaStore.lambda$doAs$0(TableMetaStore.java:315)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:360)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873)
	at com.netease.arctic.table.TableMetaStore.doAs(TableMetaStore.java:313)
	at com.netease.arctic.io.ArcticHadoopFileIO.doAs(ArcticHadoopFileIO.java:177)
	at com.netease.arctic.io.reader.BaseArcticDataReader.readData(BaseArcticDataReader.java:107)
	at com.netease.arctic.spark.reader.KeyedSparkBatchScan$RowReader.next(KeyedSparkBatchScan.java:243)
	at org.apache.spark.sql.execution.datasources.v2.PartitionIterator.hasNext(DataSourceRDD.scala:79)
	at org.apache.spark.sql.execution.datasources.v2.MetricsIterator.hasNext(DataSourceRDD.scala:112)
	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:346)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1469)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2264)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2213)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2212)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2212)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1085)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1085)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1085)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2451)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2393)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2382)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:874)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2196)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2217)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2236)
	at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:488)
	at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:441)
	at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:47)
	at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3696)
	at org.apache.spark.sql.Dataset.$anonfun$head$1(Dataset.scala:2722)
	at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3687)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:109)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:169)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3685)
	at org.apache.spark.sql.Dataset.head(Dataset.scala:2722)
	at org.apache.spark.sql.Dataset.take(Dataset.scala:2929)
	at org.apache.kyuubi.engine.spark.operation.ExecuteStatement.$anonfun$executeStatement$1(ExecuteStatement.scala:107)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.kyuubi.engine.spark.operation.SparkOperation.withLocalProperties(SparkOperation.scala:87)
	at org.apache.kyuubi.engine.spark.operation.ExecuteStatement.org$apache$kyuubi$engine$spark$operation$ExecuteStatement$$executeStatement(ExecuteStatement.scala:89)
	at org.apache.kyuubi.engine.spark.operation.ExecuteStatement$$anon$1.run(ExecuteStatement.scala:125)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: Cannot decode dictionary of type: INT96
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetDictionaryRowGroupFilter$EvalVisitor.dict(ParquetDictionaryRowGroupFilter.java:423)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetDictionaryRowGroupFilter$EvalVisitor.gt(ParquetDictionaryRowGroupFilter.java:243)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetDictionaryRowGroupFilter$EvalVisitor.gt(ParquetDictionaryRowGroupFilter.java:81)
	at com.netease.arctic.shade.org.apache.iceberg.expressions.ExpressionVisitors$BoundExpressionVisitor.predicate(ExpressionVisitors.java:135)
	at com.netease.arctic.shade.org.apache.iceberg.expressions.ExpressionVisitors.visitEvaluator(ExpressionVisitors.java:346)
	at com.netease.arctic.shade.org.apache.iceberg.expressions.ExpressionVisitors.visitEvaluator(ExpressionVisitors.java:361)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetDictionaryRowGroupFilter$EvalVisitor.eval(ParquetDictionaryRowGroupFilter.java:117)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetDictionaryRowGroupFilter$EvalVisitor.access$100(ParquetDictionaryRowGroupFilter.java:81)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetDictionaryRowGroupFilter.shouldRead(ParquetDictionaryRowGroupFilter.java:75)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ReadConf.<init>(ReadConf.java:109)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetReader.init(ParquetReader.java:66)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetReader.iterator(ParquetReader.java:77)
	at com.netease.arctic.shade.org.apache.iceberg.io.CloseableIterable$ConcatCloseableIterable$ConcatCloseableIterator.<init>(CloseableIterable.java:152)
	at com.netease.arctic.shade.org.apache.iceberg.io.CloseableIterable$ConcatCloseableIterable$ConcatCloseableIterator.<init>(CloseableIterable.java:143)
	at com.netease.arctic.shade.org.apache.iceberg.io.CloseableIterable$ConcatCloseableIterable.iterator(CloseableIterable.java:138)
	at com.netease.arctic.table.TableMetaStore.lambda$doAs$0(TableMetaStore.java:315)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:360)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873)
	at com.netease.arctic.table.TableMetaStore.doAs(TableMetaStore.java:313)
	at com.netease.arctic.io.ArcticHadoopFileIO.doAs(ArcticHadoopFileIO.java:177)
	at com.netease.arctic.io.reader.BaseArcticDataReader.readData(BaseArcticDataReader.java:107)
	at com.netease.arctic.spark.reader.KeyedSparkBatchScan$RowReader.next(KeyedSparkBatchScan.java:243)
	at org.apache.spark.sql.execution.datasources.v2.PartitionIterator.hasNext(DataSourceRDD.scala:79)
	at org.apache.spark.sql.execution.datasources.v2.MetricsIterator.hasNext(DataSourceRDD.scala:112)
	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:346)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1469)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
	... 3 more

	at org.apache.kyuubi.KyuubiSQLException$.apply(KyuubiSQLException.scala:69)
	at org.apache.kyuubi.engine.spark.operation.SparkOperation$$anonfun$onError$1.applyOrElse(SparkOperation.scala:112)
	at org.apache.kyuubi.engine.spark.operation.SparkOperation$$anonfun$onError$1.applyOrElse(SparkOperation.scala:96)
	at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
	at org.apache.kyuubi.engine.spark.operation.ExecuteStatement.$anonfun$executeStatement$1(ExecuteStatement.scala:113)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.kyuubi.engine.spark.operation.SparkOperation.withLocalProperties(SparkOperation.scala:87)
	at org.apache.kyuubi.engine.spark.operation.ExecuteStatement.org$apache$kyuubi$engine$spark$operation$ExecuteStatement$$executeStatement(ExecuteStatement.scala:89)
	at org.apache.kyuubi.engine.spark.operation.ExecuteStatement$$anon$1.run(ExecuteStatement.scala:125)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 6.0 failed 4 times, most recent failure: Lost task 0.3 in stage 6.0 (TID 18) (10.113.224.42 executor 17): java.lang.IllegalArgumentException: Cannot decode dictionary of type: INT96
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetDictionaryRowGroupFilter$EvalVisitor.dict(ParquetDictionaryRowGroupFilter.java:423)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetDictionaryRowGroupFilter$EvalVisitor.gt(ParquetDictionaryRowGroupFilter.java:243)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetDictionaryRowGroupFilter$EvalVisitor.gt(ParquetDictionaryRowGroupFilter.java:81)
	at com.netease.arctic.shade.org.apache.iceberg.expressions.ExpressionVisitors$BoundExpressionVisitor.predicate(ExpressionVisitors.java:135)
	at com.netease.arctic.shade.org.apache.iceberg.expressions.ExpressionVisitors.visitEvaluator(ExpressionVisitors.java:346)
	at com.netease.arctic.shade.org.apache.iceberg.expressions.ExpressionVisitors.visitEvaluator(ExpressionVisitors.java:361)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetDictionaryRowGroupFilter$EvalVisitor.eval(ParquetDictionaryRowGroupFilter.java:117)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetDictionaryRowGroupFilter$EvalVisitor.access$100(ParquetDictionaryRowGroupFilter.java:81)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetDictionaryRowGroupFilter.shouldRead(ParquetDictionaryRowGroupFilter.java:75)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ReadConf.<init>(ReadConf.java:109)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetReader.init(ParquetReader.java:66)
	at com.netease.arctic.shade.org.apache.iceberg.parquet.ParquetReader.iterator(ParquetReader.java:77)
	at com.netease.arctic.shade.org.apache.iceberg.io.CloseableIterable$ConcatCloseableIterable$ConcatCloseableIterator.<init>(CloseableIterable.java:152)
	at com.netease.arctic.shade.org.apache.iceberg.io.CloseableIterable$ConcatCloseableIterable$ConcatCloseableIterator.<init>(CloseableIterable.java:143)
	at com.netease.arctic.shade.org.apache.iceberg.io.CloseableIterable$ConcatCloseableIterable.iterator(CloseableIterable.java:138)
	at com.netease.arctic.table.TableMetaStore.lambda$doAs$0(TableMetaStore.java:315)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:360)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873)
	at com.netease.arctic.table.TableMetaStore.doAs(TableMetaStore.java:313)
	at com.netease.arctic.io.ArcticHadoopFileIO.doAs(ArcticHadoopFileIO.java:177)
	at com.netease.arctic.io.reader.BaseArcticDataReader.readData(BaseArcticDataReader.java:107)
	at com.netease.arctic.spark.reader.KeyedSparkBatchScan$RowReader.next(KeyedSparkBatchScan.java:243)
	at org.apache.spark.sql.execution.datasources.v2.PartitionIterator.hasNext(DataSourceRDD.scala:79)
	at org.apache.spark.sql.execution.datasources.v2.MetricsIterator.hasNext(DataSourceRDD.scala:112)
	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:346)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1469)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

Anything else

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct
@shendanfengg shendanfengg added type:bug Something isn't working module:mixed-spark Spark module for Mixed Format module:core Core module labels Jan 5, 2023
@baiyangtx (Contributor)

@hellojinsilei CC

@baiyangtx baiyangtx added this to the Release 0.4.1 milestone Jan 5, 2023
@hellojinsilei (Contributor)

As a workaround, you can convert the field type inside the filter:
where FROM_UNIXTIME(CAST(col AS BIGINT), 'yyyy-MM-dd') > '2023-01-06'
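
Applied to a full query, with the hypothetical table db.sample and timestamp column op_time standing in for col, the rewritten filter looks like this:

    -- Spark SQL; table and column names are placeholders.
    SELECT *
    FROM db.sample
    WHERE FROM_UNIXTIME(CAST(op_time AS BIGINT), 'yyyy-MM-dd') > '2023-01-06';

Wrapping the column in an expression presumably keeps the predicate from being pushed down to the Parquet dictionary row-group filter (where the INT96 decode fails), at the cost of losing row-group skipping on that column.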

@baiyangtx (Contributor)

Not reproduced.
