Convert Hive errors in IcebergParquetFileWriter to Iceberg errors #21932

Closed · wants to merge 1 commit

IcebergParquetFileWriter.java

@@ -35,6 +35,12 @@
 import java.util.stream.Stream;

 import static io.trino.parquet.reader.MetadataReader.createParquetMetadata;
+import static io.trino.plugin.hive.HiveErrorCode.HIVE_WRITER_CLOSE_ERROR;
+import static io.trino.plugin.hive.HiveErrorCode.HIVE_WRITER_DATA_ERROR;
+import static io.trino.plugin.hive.HiveErrorCode.HIVE_WRITE_VALIDATION_FAILED;
+import static io.trino.plugin.iceberg.IcebergErrorCode.ICEBERG_WRITER_CLOSE_ERROR;
+import static io.trino.plugin.iceberg.IcebergErrorCode.ICEBERG_WRITER_DATA_ERROR;
+import static io.trino.plugin.iceberg.IcebergErrorCode.ICEBERG_WRITE_VALIDATION_FAILED;
 import static io.trino.plugin.iceberg.util.ParquetUtil.footerMetrics;
 import static io.trino.spi.StandardErrorCode.GENERIC_INTERNAL_ERROR;
 import static java.lang.String.format;
@@ -106,19 +112,46 @@ public long getMemoryUsage()
     @Override
     public void appendRows(Page dataPage)
     {
-        parquetFileWriter.appendRows(dataPage);
+        try {
+            parquetFileWriter.appendRows(dataPage);
+        }
+        catch (TrinoException e) {
+            if (e.getErrorCode() == HIVE_WRITER_DATA_ERROR.toErrorCode()) {
+                throw new TrinoException(ICEBERG_WRITER_DATA_ERROR, e);
+            }
+            throw e;
+        }
     }

Member:

Rather than having the Iceberg and Delta connectors remap error codes in a bunch of places, I would rather think we should:

  • consider it OK for Iceberg and Delta to reuse Hive error codes (no remapping), or
  • decouple the Parquet writer from Hive so that it doesn't use Hive error codes (e.g. if we provide connector-specific error codes in the constructor, there is no need for remapping); see the sketch below.

cc @raunaqmorarka @electrum

Member (author):

I see. Anyway, this fix doesn't seem to make sense.
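For context, a minimal sketch of the second option above, assuming a hypothetical ParquetWriterErrorCodes value handed to the shared Parquet writer's constructor; the record name and wiring are illustrative only, while ErrorCodeSupplier is the existing Trino SPI interface that both HiveErrorCode and IcebergErrorCode implement.

```java
// Hypothetical sketch only: this record and the constructor wiring shown in the
// comments are not existing Trino APIs.
import io.trino.spi.ErrorCodeSupplier;

public record ParquetWriterErrorCodes(
        ErrorCodeSupplier dataError,        // used for appendRows failures
        ErrorCodeSupplier closeError,       // used for commit/rollback failures
        ErrorCodeSupplier validationFailed) // used for write validation failures
{
    // Each connector would construct the shared writer with its own codes, e.g.
    //   new ParquetFileWriter(..., new ParquetWriterErrorCodes(
    //           ICEBERG_WRITER_DATA_ERROR, ICEBERG_WRITER_CLOSE_ERROR, ICEBERG_WRITE_VALIDATION_FAILED))
    // so IcebergParquetFileWriter would not need to catch and remap anything.
}
```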

     @Override
     public Closeable commit()
     {
-        return parquetFileWriter.commit();
+        try {
+            return parquetFileWriter.commit();
+        }
+        catch (TrinoException e) {
+            if (e.getErrorCode() == HIVE_WRITER_CLOSE_ERROR.toErrorCode()) {
+                throw new TrinoException(ICEBERG_WRITER_CLOSE_ERROR, "Error committing write parquet to Iceberg", e);
+            }
+            else if (e.getErrorCode() == HIVE_WRITE_VALIDATION_FAILED.toErrorCode()) {
+                throw new TrinoException(ICEBERG_WRITE_VALIDATION_FAILED, e);
+            }
+            throw e;
+        }
     }

     @Override
     public void rollback()
     {
-        parquetFileWriter.rollback();
+        try {
+            parquetFileWriter.rollback();
+        }
+        catch (TrinoException e) {
+            if (e.getErrorCode() == HIVE_WRITER_CLOSE_ERROR.toErrorCode()) {
+                throw new TrinoException(ICEBERG_WRITER_CLOSE_ERROR, "Error rolling back write parquet to Iceberg", e);
+            }
+            throw e;
+        }
     }

     @Override
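The three catch blocks in the diff repeat the same remap-and-rethrow pattern, so they could delegate to a single helper. The sketch below is not part of this PR; it assumes it lives inside IcebergParquetFileWriter (where the imports added by the diff are in scope) and, unlike the actual change, omits the extra messages used on the commit() and rollback() close-error paths.

```java
// Sketch only (not in this PR): a shared remapping helper for the catch blocks above.
private static TrinoException toIcebergException(TrinoException e)
{
    if (e.getErrorCode() == HIVE_WRITER_DATA_ERROR.toErrorCode()) {
        return new TrinoException(ICEBERG_WRITER_DATA_ERROR, e);
    }
    if (e.getErrorCode() == HIVE_WRITER_CLOSE_ERROR.toErrorCode()) {
        return new TrinoException(ICEBERG_WRITER_CLOSE_ERROR, e);
    }
    if (e.getErrorCode() == HIVE_WRITE_VALIDATION_FAILED.toErrorCode()) {
        return new TrinoException(ICEBERG_WRITE_VALIDATION_FAILED, e);
    }
    return e; // not a known Hive writer error; rethrow unchanged
}
```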