java.io.IOException: Mkdirs failed to create file:/user/hive/warehouse/bench/metadata #3079
There were changes around catalog configuration in 0.12. Maybe this affects the FlinkCatalog as well. I would check how the HiveCatalog should be parameterized in Flink use cases. Thanks, Peter
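For reference, a minimal sketch of how a Hive-backed Iceberg catalog is usually registered from Flink SQL (the catalog name, thrift URI, and warehouse path below are placeholders, not values taken from this issue):

-- Sketch: registering an Iceberg Hive catalog from Flink SQL.
-- Catalog name, metastore URI, and warehouse path are placeholders.
CREATE CATALOG hive_prod WITH (
  'type'='iceberg',
  'catalog-type'='hive',
  'uri'='thrift://localhost:9083',
  'warehouse'='hdfs://namenode:8020/user/hive/warehouse'
);
USE CATALOG hive_prod;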
@shengkui, it's insecure to expose your S3 accessKey & accessSecret in this open issue, so I have masked them. As for the issue itself, I will also try to reproduce it on my local host.
Thanks
@shengkui I think you need to use the master branch to build the latest flink-iceberg-runtime jar, because PR #2666 has only been merged to master so far (it is not released in 0.12.0). I tried using the connector to execute the following SQL:
./bin/sql-client.sh embedded -j /Users/openinx/software/apache-iceberg/flink-runtime/build/libs/iceberg-flink-runtime-5f90476.jar shell
Flink SQL> CREATE TABLE iceberg_table (
> id BIGINT,
> data STRING
> ) WITH (
> 'connector'='iceberg',
> 'catalog-name'='hive_prod',
> 'uri'='thrift://localhost:9083',
> 'warehouse'='file:///Users/openinx/test/iceberg-warehouse'
> );
[INFO] Table has been created.
Flink SQL> INSERT INTO iceberg_table values (1, 'AAA'), (2, 'BBB'), (3, 'CCC');
[INFO] Submitting SQL update statement to the cluster...
[INFO] Table update statement has been successfully submitted to the cluster:
Job ID: c9742d48cbd35502f9a3093d0d668543
Flink SQL> select * from iceberg_table ;
+----+------+
| id | data |
+----+------+
| 1 | AAA |
| 2 | BBB |
| 3 | CCC |
+----+------+
3 rows in set
All seems OK.
@openinx Thanks for your help, I'll try it.
@openinx I've tried the latest flink-iceberg-runtime jar (built from the master branch). It works when I use "file://" as the warehouse, but it doesn't work with "s3a://". I've put the following JARs into Flink's lib/ directory:
Is there any other JAR that should be placed under Flink's lib/ directory? Could you please give me some advice on configuring "s3a"? I'm not sure whether I've put the right parameters in the right place.
@shengkui, I don't have a proper AWS S3 environment, but I've configured this Flink connector correctly against Alibaba's public object storage before (just using the open Hadoop distribution with the aliyun-oss HDFS implementation). The first thing you need to do is configure the Hadoop filesystem correctly by setting the key-values in core-site.xml, and verify that this works. We don't need to add any S3 configuration to the Flink table properties. There's a document (in Chinese) describing how to write data into Aliyun OSS; you may need to replace all the OSS configurations with their S3 equivalents according to that doc.
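If it helps, a rough sketch of what the DDL could look like once S3A is configured at the Hadoop level (the bucket name and thrift URI below are placeholders; the access key, secret key, and endpoint would live in core-site.xml under the fs.s3a.* keys rather than in the table properties):

-- Sketch only: S3A credentials and endpoint are assumed to be set in core-site.xml
-- (fs.s3a.access.key, fs.s3a.secret.key, fs.s3a.endpoint), not here.
-- The bucket name and metastore URI are placeholders.
CREATE TABLE iceberg_s3_table (
  id BIGINT,
  data STRING
) WITH (
  'connector'='iceberg',
  'catalog-name'='hive_prod',
  'uri'='thrift://localhost:9083',
  'warehouse'='s3a://my-bucket/iceberg-warehouse'
);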
@openinx Thanks, I'll give it a try.
@openinx I've tried following the document you provided and failed again. The following is taken from the flink sql-client log:
I've tried the hadoop catalog with S3 (without hive-metastore) and it works well. But someone said that Iceberg needs a hive-metastore for S3 storage (#1468). I know Iceberg has implemented a Glue Catalog, but that is for AWS. Is there any solution for using S3 without hive-metastore?
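For reference, a hadoop catalog pointing at S3 can be declared roughly like this (the catalog name and s3a path are placeholders); it keeps all table metadata on the object store and never talks to a hive-metastore:

-- Sketch: an Iceberg Hadoop catalog that stores metadata directly on S3.
-- Catalog name and warehouse path are placeholders.
CREATE CATALOG hadoop_prod WITH (
  'type'='iceberg',
  'catalog-type'='hadoop',
  'warehouse'='s3a://my-bucket/iceberg-warehouse'
);
USE CATALOG hadoop_prod;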
This issue was caused by the hive-metastore version. After I changed Hive to 2.3.9, creating tables and inserting from the Flink SQL client worked.
I met an exception "Mkdirs failed to create file" while using Iceberg (v0.12.0) + Flink (v1.12.5) + hive metastore (v3.0.0) + s3a (Ceph) storage.
The log of the flink sql-client:
The metadata path in the error message is "/user/hive/warehouse/bench/metadata", but it's not my real path. I've taken a look at the Iceberg source code; it's the default value of the warehouse. I haven't found any setting for this in the Iceberg documentation. Have I missed something?
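For context, /user/hive/warehouse is the stock default of Hive's hive.metastore.warehouse.dir, so it typically shows up when no warehouse location has been supplied anywhere. A sketch of setting it explicitly in the Flink DDL, following the same pattern as the working example earlier in this thread (the table name, columns, thrift URI, and s3a path are placeholders):

-- Sketch: supplying the warehouse explicitly so the metastore default
-- (/user/hive/warehouse) is not used. All values below are placeholders.
CREATE TABLE bench (
  id BIGINT,
  data STRING
) WITH (
  'connector'='iceberg',
  'catalog-name'='hive_prod',
  'uri'='thrift://localhost:9083',
  'warehouse'='s3a://my-bucket/iceberg-warehouse'
);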