[Bug]: Playground demo failure with PartitionExpressionForMetastore class not found #3933

@turboFei

Description

What happened?

2025/11/13 17:41:27 new sql script submit, current thread pool state. [Active: 0, PoolSize: 1]
2025/11/13 17:41:27 terminal session dose not exists. create session first
2025/11/13 17:41:27 create a new terminal session.
2025/11/13 17:41:27 fetch terminal session: node0auuos8xa1fsukm1cbts4z5go0.node0-null-null-demo_catalog
setup session, session factory: org.apache.amoro.server.terminal.local.LocalSessionFactory
spark.sql.catalog.demo_catalog.s3.secret-access-key  password
spark.sql.catalog.demo_catalog.s3.access-key-id  admin
spark.sql.catalog.demo_catalog.table.self-optimizing.group  local
spark.sql.catalog.demo_catalog.s3.endpoint  http://minio:9000
spark.sql.catalog.demo_catalog.table-formats  ICEBERG
spark.sql.catalog.demo_catalog.warehouse  demo_catalog
spark.sql.catalog.demo_catalog.client.region  us-east-1
spark.sql.catalog.demo_catalog  org.apache.iceberg.spark.SparkCatalog
spark.sql.mixed-format.refresh-catalog-before-usage  true
2025/11/13 17:41:27 session configuration: catalog-url-base => thrift://127.0.0.1:1260
2025/11/13 17:41:27 session configuration: spark.sql.catalog.demo_catalog.s3.secret-access-key => password
2025/11/13 17:41:27 session configuration: session.catalog.demo_catalog.connector => iceberg
2025/11/13 17:41:27 session configuration: spark.sql.catalog.demo_catalog.warehouse => demo_catalog
2025/11/13 17:41:27 session configuration: catalog.demo_catalog.warehouse => demo_catalog
2025/11/13 17:41:27 session configuration: spark.sql.catalog.demo_catalog.s3.access-key-id => admin
2025/11/13 17:41:27 session configuration: catalog.demo_catalog.table-formats => ICEBERG
2025/11/13 17:41:27 session configuration: spark.sql.catalog.demo_catalog => org.apache.iceberg.spark.SparkCatalog
2025/11/13 17:41:27 session configuration: spark.sql.mixed-format.refresh-catalog-before-usage => true
2025/11/13 17:41:27 session configuration: catalog.demo_catalog.client.region => us-east-1
2025/11/13 17:41:27 session configuration: catalog.demo_catalog.table.self-optimizing.group => local
2025/11/13 17:41:27 session configuration: catalog.demo_catalog.s3.endpoint => http://minio:9000
2025/11/13 17:41:27 session configuration: terminal.sensitive-conf-keys => 
2025/11/13 17:41:27 session configuration: session.catalogs => demo_catalog
2025/11/13 17:41:27 session configuration: spark.sql.catalog.demo_catalog.table.self-optimizing.group => local
2025/11/13 17:41:27 session configuration: spark.sql.catalog.demo_catalog.table-formats => ICEBERG
2025/11/13 17:41:27 session configuration: catalog.demo_catalog.s3.secret-access-key => password
2025/11/13 17:41:27 session configuration: session.fetch-size => 1000
2025/11/13 17:41:27 session configuration: catalog.demo_catalog.s3.access-key-id => admin
2025/11/13 17:41:27 session configuration: spark.sql.catalog.demo_catalog.client.region => us-east-1
2025/11/13 17:41:27 session configuration: spark.sql.catalog.demo_catalog.s3.endpoint => http://minio:9000
2025/11/13 17:41:27  
2025/11/13 17:41:27 prepare execute statement, line:1
2025/11/13 17:41:27 CREATE TABLE IF NOT EXISTS db.user ( id INT, name string, ts TIMESTAMP ) USING iceberg PARTITIONED BY (days(ts))
switch to new catalog via: use demo_catalog
2025/11/13 17:41:28 meet exception during execution.
2025/11/13 17:41:28 org.apache.iceberg.hive.RuntimeMetaException: Failed to connect to Hive Metastore
	at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:85)
	at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:34)
	at org.apache.iceberg.ClientPoolImpl.get(ClientPoolImpl.java:143)
	at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:70)
	at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:65)
	at org.apache.iceberg.hive.CachedClientPool.run(CachedClientPool.java:122)
	at org.apache.iceberg.hive.HiveTableOperations.doRefresh(HiveTableOperations.java:147)
	at org.apache.iceberg.BaseMetastoreTableOperations.refresh(BaseMetastoreTableOperations.java:90)
	at org.apache.iceberg.BaseMetastoreTableOperations.current(BaseMetastoreTableOperations.java:73)
	at org.apache.iceberg.BaseMetastoreCatalog.loadTable(BaseMetastoreCatalog.java:49)
	at com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2406)
	at java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1853)
	at com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2404)
	at com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2387)
	at com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:108)
	at com.github.benmanes.caffeine.cache.LocalManualCache.get(LocalManualCache.java:62)
	at org.apache.iceberg.CachingCatalog.loadTable(CachingCatalog.java:167)
	at org.apache.iceberg.spark.SparkCatalog.load(SparkCatalog.java:642)
	at org.apache.iceberg.spark.SparkCatalog.loadTable(SparkCatalog.java:160)
	at org.apache.spark.sql.connector.catalog.TableCatalog.tableExists(TableCatalog.java:156)
	at org.apache.spark.sql.execution.datasources.v2.CreateTableExec.run(CreateTableExec.scala:43)
	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:109)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:169)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:94)
	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:584)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:176)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:584)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:560)
	at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:94)
	at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:81)
	at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:79)
	at org.apache.spark.sql.Dataset.&lt;init&gt;(Dataset.scala:220)
	at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
	at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:622)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:617)
	at org.apache.amoro.server.terminal.local.LocalTerminalSession.executeStatement(LocalTerminalSession.java:78)
	at org.apache.amoro.server.terminal.TerminalSessionContext$ExecutionTask.executeStatement(TerminalSessionContext.java:302)
	at org.apache.amoro.server.terminal.TerminalSessionContext$ExecutionTask.execute(TerminalSessionContext.java:266)
	at org.apache.amoro.server.terminal.TerminalSessionContext$ExecutionTask.lambda$get$0(TerminalSessionContext.java:227)
	at org.apache.amoro.table.TableMetaStore.call(TableMetaStore.java:268)
	at org.apache.amoro.table.TableMetaStore.doAs(TableMetaStore.java:241)
	at org.apache.amoro.server.terminal.TerminalSessionContext$ExecutionTask.get(TerminalSessionContext.java:209)
	at org.apache.amoro.server.terminal.TerminalSessionContext$ExecutionTask.get(TerminalSessionContext.java:184)
	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
	at org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:86)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.&lt;init&gt;(RetryingMetaStoreClient.java:95)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:148)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:119)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.iceberg.common.DynMethods$UnboundMethod.invokeChecked(DynMethods.java:62)
	at org.apache.iceberg.common.DynMethods$UnboundMethod.invoke(DynMethods.java:74)
	at org.apache.iceberg.common.DynMethods$StaticMethod.invoke(DynMethods.java:187)
	at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:63)
	... 62 more
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:84)
	... 74 more
Caused by: MetaException(message:Error loading PartitionExpressionProxy: org.apache.hadoop.hive.ql.optimizer.ppr.PartitionExpressionForMetastore class not found)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.&lt;init&gt;(RetryingHMSHandler.java:84)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:8667)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.&lt;init&gt;(HiveMetaStoreClient.java:169)
	... 79 more
Caused by: java.lang.RuntimeException: Error loading PartitionExpressionProxy: org.apache.hadoop.hive.ql.optimizer.ppr.PartitionExpressionForMetastore class not found
	at org.apache.hadoop.hive.metastore.ObjectStore.createExpressionProxy(ObjectStore.java:541)
	at org.apache.hadoop.hive.metastore.ObjectStore.initializeHelper(ObjectStore.java:494)
	at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:420)
	at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:375)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:77)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:159)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:126)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.&lt;init&gt;(RawStoreProxy.java:59)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStoreForConf(HiveMetaStore.java:718)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:696)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:690)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:767)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:538)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.&lt;init&gt;(RetryingHMSHandler.java:80)
	... 82 more
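
The innermost cause is the final `MetaException`: the embedded Hive metastore tries to instantiate its default `PartitionExpressionProxy` implementation, `org.apache.hadoop.hive.ql.optimizer.ppr.PartitionExpressionForMetastore`, a class that ships in the `hive-exec` jar and is evidently missing from the AMS terminal's classpath. A possible workaround (untested against this playground, and hedged on the bundled Hive version) is either to put `hive-exec` on the classpath, or, if the bundled metastore is Hive 3+, to point the expression-proxy property at a class that lives in the metastore jar itself, e.g. in `hive-site.xml`:

```xml
<!-- Sketch of a possible workaround, assuming a Hive 3+ metastore where
     DefaultPartitionExpressionProxy is available. On older Hive versions
     the usual fix is instead to add hive-exec to the classpath. -->
<property>
  <name>hive.metastore.expression.proxy</name>
  <value>org.apache.hadoop.hive.metastore.DefaultPartitionExpressionProxy</value>
</property>
```

Note that switching the proxy class may limit server-side partition-expression pruning; it only sidesteps the class-loading failure at client construction time.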

Affects Versions

0.8.1

What table formats are you seeing the problem on?

No response

What engines are you seeing the problem on?

No response

How to reproduce

No response

Relevant log output

Anything else

No response

Are you willing to submit a PR?

  • Yes I am willing to submit a PR!

Code of Conduct

  • I agree to follow this project's Code of Conduct

Metadata

Assignees

No one assigned

    Labels

    type:bug (Something isn't working)
