
[Bug] yarn session fail to submit job(s3 and hive udf) #2796

@www2388258980

Description


Search before asking

  • I had searched in the issues and found no similar issues.

Java Version

OpenJDK 64-Bit Server VM - Amazon.com Inc. - 1.8/25.372-b07

Scala Version

2.12.x

StreamPark Version

2.12-2.1.0

Flink Version

flink1.16

deploy mode

yarn-session

What happened

s3

# flink s3 configuration
I configured S3 according to this. When I submit the job with the command-line client, it works fine:

bin/yarn-session.sh --detached \
-Dtaskmanager.memory.process.size=4000m \
-Dtaskmanager.memory.managed.size=0m \
-Dtaskmanager.memory.network.min=80m \
-Dtaskmanager.memory.network.max=80m \
-Dtaskmanager.numberOfTaskSlots=4 \
--name flink_test

bin/sql-client.sh -f  xxx.sql

But when I submit the job through StreamPark (yarn-session mode), it throws the following error. Application mode has no such problem.
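For reference, my flink-conf.yaml S3 entries follow the standard Flink S3 filesystem options; the values below are placeholders, not my real credentials or endpoint:

```yaml
# Standard Flink S3 filesystem options (placeholder values)
s3.access-key: <ACCESS_KEY>
s3.secret-key: <SECRET_KEY>
# Endpoint/region settings are only needed for non-default S3 endpoints
s3.endpoint: s3.us-east-1.amazonaws.com
```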

java.util.concurrent.CompletionException: java.lang.reflect.InvocationTargetException
	at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
	at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1606)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.GeneratedMethodAccessor622.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.streampark.flink.client.FlinkClient$.$anonfun$proxy$1(FlinkClient.scala:80)
	at org.apache.streampark.flink.proxy.FlinkShimsProxy$.$anonfun$proxy$1(FlinkShimsProxy.scala:60)
	at org.apache.streampark.common.util.ClassLoaderUtils$.runAsClassLoader(ClassLoaderUtils.scala:38)
	at org.apache.streampark.flink.proxy.FlinkShimsProxy$.proxy(FlinkShimsProxy.scala:60)
	at org.apache.streampark.flink.client.FlinkClient$.proxy(FlinkClient.scala:75)
	at org.apache.streampark.flink.client.FlinkClient$.submit(FlinkClient.scala:49)
	at org.apache.streampark.flink.client.FlinkClient.submit(FlinkClient.scala)
	at org.apache.streampark.console.core.service.impl.ApplicationServiceImpl.lambda$start$10(ApplicationServiceImpl.java:1544)
	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
	... 3 more
Caused by: org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Unable to create catalog 'catalog_paimon'.

Catalog options are:
'fs.s3a.connection.maximum'='10'
'metastore'='hive'
'type'='paimon'
'uri'='thrift://xxxxxxxxxxxxx:9083'
'warehouse'='s3://xxxxxxxxxxxx/hadoop/warehouse/'
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372)
	at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
	at org.apache.flink.client.program.PackagedProgramUtils.getPipelineFromProgram(PackagedProgramUtils.java:158)
	at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:82)
	at org.apache.streampark.flink.client.trait.FlinkClientTrait.getJobGraph(FlinkClientTrait.scala:242)
	at org.apache.streampark.flink.client.trait.FlinkClientTrait.getJobGraph$(FlinkClientTrait.scala:222)
	at org.apache.streampark.flink.client.impl.YarnSessionClient$.doSubmit(YarnSessionClient.scala:109)
	at org.apache.streampark.flink.client.trait.FlinkClientTrait.submit(FlinkClientTrait.scala:125)
	at org.apache.streampark.flink.client.trait.FlinkClientTrait.submit$(FlinkClientTrait.scala:62)
	at org.apache.streampark.flink.client.impl.YarnSessionClient$.submit(YarnSessionClient.scala:43)
	at org.apache.streampark.flink.client.FlinkClientHandler$.submit(FlinkClientHandler.scala:40)
	at org.apache.streampark.flink.client.FlinkClientHandler.submit(FlinkClientHandler.scala)
	... 15 more
Caused by: org.apache.flink.table.api.ValidationException: Unable to create catalog 'catalog_paimon'.

Catalog options are:
'fs.s3a.connection.maximum'='10'
'metastore'='hive'
'type'='paimon'
'uri'='thrift://xxxxxxxxxxxxx:9083'
'warehouse'='s3://xxxxxxxxxxxx/hadoop/warehouse/'
	at org.apache.flink.table.factories.FactoryUtil.createCatalog(FactoryUtil.java:438)
	at org.apache.flink.table.api.internal.TableEnvironmentImpl.createCatalog(TableEnvironmentImpl.java:1426)
	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1172)
	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:730)
	at org.apache.streampark.flink.core.FlinkStreamTableTrait.executeSql(FlinkStreamTableTrait.scala:389)
	at org.apache.streampark.flink.core.FlinkSqlExecutor$.$anonfun$executeSql$3(FlinkSqlExecutor.scala:126)
	at org.apache.streampark.flink.core.FlinkSqlExecutor$.$anonfun$executeSql$3$adapted(FlinkSqlExecutor.scala:57)
	at scala.collection.immutable.List.foreach(List.scala:388)
	at org.apache.streampark.flink.core.FlinkSqlExecutor$.executeSql(FlinkSqlExecutor.scala:57)
	at org.apache.streampark.flink.core.FlinkStreamTableTrait.sql(FlinkStreamTableTrait.scala:87)
	at org.apache.streampark.flink.cli.SqlClient$StreamSqlApp$.handle(SqlClient.scala:69)
	at org.apache.streampark.flink.core.scala.FlinkStreamTable.main(FlinkStreamTable.scala:48)
	at org.apache.streampark.flink.core.scala.FlinkStreamTable.main$(FlinkStreamTable.scala:45)
	at org.apache.streampark.flink.cli.SqlClient$StreamSqlApp$.main(SqlClient.scala:68)
	at org.apache.streampark.flink.cli.SqlClient$.delayedEndpoint$org$apache$streampark$flink$cli$SqlClient$1(SqlClient.scala:58)
	at org.apache.streampark.flink.cli.SqlClient$delayedInit$body.apply(SqlClient.scala:31)
	at scala.Function0.apply$mcV$sp(Function0.scala:34)
	at scala.Function0.apply$mcV$sp$(Function0.scala:34)
	at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
	at scala.App.$anonfun$main$1$adapted(App.scala:76)
	at scala.collection.immutable.List.foreach(List.scala:388)
	at scala.App.main(App.scala:76)
	at scala.App.main$(App.scala:74)
	at org.apache.streampark.flink.cli.SqlClient$.main(SqlClient.scala:31)
	at org.apache.streampark.flink.cli.SqlClient.main(SqlClient.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355)
	... 26 more
Caused by: java.io.UncheckedIOException: org.apache.hadoop.fs.s3a.AWSBadRequestException: getFileStatus on s3://xxxxxxxxxxxx/hadoop/warehouse: com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: GQPNCWMD2078D4V0; S3 Extended Request ID: FRK2DRQwaQj71/kBHpefu5KPV5e/2ceyha5+b1xzZwCBevVzQQS6ZON2TKz2W2d4e+MYq6e/zQI=; Proxy: null), S3 Extended Request ID: FRK2DRQwaQj71/kBHpefu5KPV5e/2ceyha5+b1xzZwCBevVzQQS6ZON2TKz2W2d4e+MYq6e/zQI=:400 Bad Request: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: GQPNCWMD2078D4V0; S3 Extended Request ID: FRK2DRQwaQj71/kBHpefu5KPV5e/2ceyha5+b1xzZwCBevVzQQS6ZON2TKz2W2d4e+MYq6e/zQI=; Proxy: null)
	at org.apache.paimon.catalog.CatalogFactory.createCatalog(CatalogFactory.java:110)
	at org.apache.paimon.flink.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:68)
	at org.apache.paimon.flink.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:58)
	at org.apache.paimon.flink.FlinkCatalogFactory.createCatalog(FlinkCatalogFactory.java:32)
	at org.apache.flink.table.factories.FactoryUtil.createCatalog(FactoryUtil.java:435)
	... 55 more
Caused by: org.apache.hadoop.fs.s3a.AWSBadRequestException: getFileStatus on s3://xxxxxxxxxxxx/hadoop/warehouse: com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: GQPNCWMD2078D4V0; S3 Extended Request ID: FRK2DRQwaQj71/kBHpefu5KPV5e/2ceyha5+b1xzZwCBevVzQQS6ZON2TKz2W2d4e+MYq6e/zQI=; Proxy: null), S3 Extended Request ID: FRK2DRQwaQj71/kBHpefu5KPV5e/2ceyha5+b1xzZwCBevVzQQS6ZON2TKz2W2d4e+MYq6e/zQI=:400 Bad Request: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: GQPNCWMD2078D4V0; S3 Extended Request ID: FRK2DRQwaQj71/kBHpefu5KPV5e/2ceyha5+b1xzZwCBevVzQQS6ZON2TKz2W2d4e+MYq6e/zQI=; Proxy: null)
	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:249)
	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:175)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3796)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3688)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$exists$34(S3AFileSystem.java:4703)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:499)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:444)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2337)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2356)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.exists(S3AFileSystem.java:4701)
	at org.apache.paimon.s3.HadoopCompliantFileIO.exists(HadoopCompliantFileIO.java:81)
	at org.apache.paimon.fs.PluginFileIO.lambda$exists$4(PluginFileIO.java:67)
	at org.apache.paimon.fs.PluginFileIO.wrap(PluginFileIO.java:104)
	at org.apache.paimon.fs.PluginFileIO.exists(PluginFileIO.java:67)
	at org.apache.paimon.catalog.CatalogFactory.createCatalog(CatalogFactory.java:100)
	... 59 more
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: GQPNCWMD2078D4V0; S3 Extended Request ID: FRK2DRQwaQj71/kBHpefu5KPV5e/2ceyha5+b1xzZwCBevVzQQS6ZON2TKz2W2d4e+MYq6e/zQI=; Proxy: null), S3 Extended Request ID: FRK2DRQwaQj71/kBHpefu5KPV5e/2ceyha5+b1xzZwCBevVzQQS6ZON2TKz2W2d4e+MYq6e/zQI=
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1879)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1418)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1387)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1157)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:814)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:781)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:755)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:715)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:697)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:561)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:541)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5456)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5403)
	at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1372)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getObjectMetadata$10(S3AFileSystem.java:2545)
	at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:414)
	at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:377)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:2533)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:2513)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3776)
	... 71 more

hive udf

As above, running bin/sql-client.sh -l hive-udf/ -f xxx.sql works fine, but submitting the job through StreamPark (yarn-session mode) throws the following error. Application mode has no such problem.

java.util.concurrent.CompletionException: java.lang.reflect.InvocationTargetException
	at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
	at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1606)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.GeneratedMethodAccessor622.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.streampark.flink.client.FlinkClient$.$anonfun$proxy$1(FlinkClient.scala:80)
	at org.apache.streampark.flink.proxy.FlinkShimsProxy$.$anonfun$proxy$1(FlinkShimsProxy.scala:60)
	at org.apache.streampark.common.util.ClassLoaderUtils$.runAsClassLoader(ClassLoaderUtils.scala:38)
	at org.apache.streampark.flink.proxy.FlinkShimsProxy$.proxy(FlinkShimsProxy.scala:60)
	at org.apache.streampark.flink.client.FlinkClient$.proxy(FlinkClient.scala:75)
	at org.apache.streampark.flink.client.FlinkClient$.submit(FlinkClient.scala:49)
	at org.apache.streampark.flink.client.FlinkClient.submit(FlinkClient.scala)
	at org.apache.streampark.console.core.service.impl.ApplicationServiceImpl.lambda$start$10(ApplicationServiceImpl.java:1544)
	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
	... 3 more
Caused by: org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: SQL validation failed. Failed to register jar resource '[ResourceUri{resourceType=JAR, uri='s3://xxxxxxxxxxxxx/hadoop/jar/participle/hive-udf-1.0-SNAPSHOT.jar'}]' of function 'emr_hive.default.participle'.
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372)
	at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
	at org.apache.flink.client.program.PackagedProgramUtils.getPipelineFromProgram(PackagedProgramUtils.java:158)
	at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:82)
	at org.apache.streampark.flink.client.trait.FlinkClientTrait.getJobGraph(FlinkClientTrait.scala:242)
	at org.apache.streampark.flink.client.trait.FlinkClientTrait.getJobGraph$(FlinkClientTrait.scala:222)
	at org.apache.streampark.flink.client.impl.YarnSessionClient$.doSubmit(YarnSessionClient.scala:109)
	at org.apache.streampark.flink.client.trait.FlinkClientTrait.submit(FlinkClientTrait.scala:125)
	at org.apache.streampark.flink.client.trait.FlinkClientTrait.submit$(FlinkClientTrait.scala:62)
	at org.apache.streampark.flink.client.impl.YarnSessionClient$.submit(YarnSessionClient.scala:43)
	at org.apache.streampark.flink.client.FlinkClientHandler$.submit(FlinkClientHandler.scala:40)
	at org.apache.streampark.flink.client.FlinkClientHandler.submit(FlinkClientHandler.scala)
	... 15 more
Caused by: org.apache.flink.table.api.ValidationException: SQL validation failed. Failed to register jar resource '[ResourceUri{resourceType=JAR, uri='s3://xxxxxxxxxxxxx/hadoop/jar/participle/hive-udf-1.0-SNAPSHOT.jar'}]' of function 'emr_hive.default.participle'.
	at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:186)
	at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:113)
	at org.apache.flink.table.planner.operations.SqlToOperationConverter.convertViewQuery(SqlToOperationConverter.java:1118)
	at org.apache.flink.table.planner.operations.SqlToOperationConverter.convertCreateView(SqlToOperationConverter.java:1095)
	at org.apache.flink.table.planner.operations.SqlToOperationConverter.convertValidatedSqlNode(SqlToOperationConverter.java:319)
	at org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:262)
	at org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:106)
	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:723)
	at org.apache.streampark.flink.core.FlinkStreamTableTrait.executeSql(FlinkStreamTableTrait.scala:389)
	at org.apache.streampark.flink.core.FlinkSqlExecutor$.$anonfun$executeSql$3(FlinkSqlExecutor.scala:126)
	at org.apache.streampark.flink.core.FlinkSqlExecutor$.$anonfun$executeSql$3$adapted(FlinkSqlExecutor.scala:57)
	at scala.collection.immutable.List.foreach(List.scala:388)
	at org.apache.streampark.flink.core.FlinkSqlExecutor$.executeSql(FlinkSqlExecutor.scala:57)
	at org.apache.streampark.flink.core.FlinkStreamTableTrait.sql(FlinkStreamTableTrait.scala:87)
	at org.apache.streampark.flink.cli.SqlClient$StreamSqlApp$.handle(SqlClient.scala:69)
	at org.apache.streampark.flink.core.scala.FlinkStreamTable.main(FlinkStreamTable.scala:48)
	at org.apache.streampark.flink.core.scala.FlinkStreamTable.main$(FlinkStreamTable.scala:45)
	at org.apache.streampark.flink.cli.SqlClient$StreamSqlApp$.main(SqlClient.scala:68)
	at org.apache.streampark.flink.cli.SqlClient$.delayedEndpoint$org$apache$streampark$flink$cli$SqlClient$1(SqlClient.scala:58)
	at org.apache.streampark.flink.cli.SqlClient$delayedInit$body.apply(SqlClient.scala:31)
	at scala.Function0.apply$mcV$sp(Function0.scala:34)
	at scala.Function0.apply$mcV$sp$(Function0.scala:34)
	at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
	at scala.App.$anonfun$main$1$adapted(App.scala:76)
	at scala.collection.immutable.List.foreach(List.scala:388)
	at scala.App.main(App.scala:76)
	at scala.App.main$(App.scala:74)
	at org.apache.streampark.flink.cli.SqlClient$.main(SqlClient.scala:31)
	at org.apache.streampark.flink.cli.SqlClient.main(SqlClient.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355)
	... 26 more
Caused by: org.apache.flink.table.api.TableException: Failed to register jar resource '[ResourceUri{resourceType=JAR, uri='s3://xxxxxxxxxxxxx/hadoop/jar/participle/hive-udf-1.0-SNAPSHOT.jar'}]' of function 'emr_hive.default.participle'.
	at org.apache.flink.table.catalog.FunctionCatalog.registerFunctionJarResources(FunctionCatalog.java:710)
	at org.apache.flink.table.catalog.FunctionCatalog.resolvePreciseFunctionReference(FunctionCatalog.java:600)
	at org.apache.flink.table.catalog.FunctionCatalog.lambda$resolveAmbiguousFunctionReference$5(FunctionCatalog.java:653)
	at java.util.Optional.orElseGet(Optional.java:267)
	at org.apache.flink.table.catalog.FunctionCatalog.resolveAmbiguousFunctionReference(FunctionCatalog.java:653)
	at org.apache.flink.table.catalog.FunctionCatalog.lookupFunction(FunctionCatalog.java:414)
	at org.apache.flink.table.planner.catalog.FunctionCatalogOperatorTable.lookupOperatorOverloads(FunctionCatalogOperatorTable.java:91)
	at org.apache.calcite.sql.util.ChainedSqlOperatorTable.lookupOperatorOverloads(ChainedSqlOperatorTable.java:67)
	at org.apache.calcite.sql.validate.SqlValidatorImpl.performUnconditionalRewrites(SqlValidatorImpl.java:1183)
	at org.apache.calcite.sql.validate.SqlValidatorImpl.performUnconditionalRewrites(SqlValidatorImpl.java:1169)
	at org.apache.calcite.sql.validate.SqlValidatorImpl.performUnconditionalRewrites(SqlValidatorImpl.java:1200)
	at org.apache.calcite.sql.validate.SqlValidatorImpl.performUnconditionalRewrites(SqlValidatorImpl.java:1169)
	at org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression(SqlValidatorImpl.java:945)
	at org.apache.calcite.sql.validate.SqlValidatorImpl.validate(SqlValidatorImpl.java:704)
	at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:182)
	... 59 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 's3'. The scheme is directly supported by Flink through the following plugin(s): flink-s3-fs-hadoop, flink-s3-fs-presto. Please ensure that each plugin resides within its own subfolder within the plugins directory. See https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/filesystems/plugins/ for more information. If you want to use a Hadoop file system for that scheme, please add the scheme to the configuration fs.allowed-fallback-filesystems. For a full list of supported file systems, please see https://nightlies.apache.org/flink/flink-docs-stable/ops/filesystems/.
	at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:515)
	at org.apache.flink.table.resource.ResourceManager.checkJarPath(ResourceManager.java:233)
	at org.apache.flink.table.resource.ResourceManager.checkJarResources(ResourceManager.java:219)
	at org.apache.flink.table.resource.ResourceManager.registerJarResources(ResourceManager.java:93)
	at org.apache.flink.table.catalog.FunctionCatalog.registerFunctionJarResources(FunctionCatalog.java:706)
	... 73 more
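The UnsupportedFileSystemSchemeException above suggests two possible remedies itself; here is a sketch of both, based only on the message text (I have not confirmed either fixes the StreamPark yarn-session path):

```yaml
# Option 1: install the bundled S3 plugin; each plugin must live in its own
# subfolder under $FLINK_HOME/plugins, e.g.:
#   mkdir -p $FLINK_HOME/plugins/s3-fs-hadoop
#   cp $FLINK_HOME/opt/flink-s3-fs-hadoop-1.16.*.jar $FLINK_HOME/plugins/s3-fs-hadoop/

# Option 2: allow falling back to the Hadoop file system for the s3 scheme
# (add to flink-conf.yaml)
fs.allowed-fallback-filesystems: s3
```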

Error Exception

No response

Screenshots

No response

Are you willing to submit PR?

  • Yes, I am willing to submit a PR!
