---
title: Import Apache Parquet Files from Amazon S3 or GCS into TiDB Cloud
summary: Learn how to import Apache Parquet files from Amazon S3 or GCS into TiDB Cloud.
---

# Import Apache Parquet Files from Amazon S3 or GCS into TiDB Cloud

You can import both uncompressed and Snappy compressed Apache Parquet format data files to TiDB Cloud. This document describes how to import Parquet files from Amazon Simple Storage Service (Amazon S3) or Google Cloud Storage (GCS) into TiDB Cloud.

> **Note:**
>
> - TiDB Cloud only supports importing Parquet files into empty tables. To import data into an existing table that already contains data, you can use TiDB Cloud to import the data into a temporary empty table by following this document, and then use the `INSERT INTO ... SELECT` statement to copy the data to the target existing table, as shown in the example after this note.
> - If there is a changefeed in a Dedicated Tier cluster, you cannot import data to the cluster (the **Import Data** button will be disabled), because the current import data feature uses the physical import mode. In this mode, the imported data does not generate change logs, so the changefeed cannot detect the imported data.
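For example, assuming the data has been imported into a temporary empty table named `mydb.mytable_temp` and the existing target table is `mydb.mytable` (both names are hypothetical placeholders), the copy might look like this:

{{< copyable "sql" >}}

```sql
-- Copy the imported rows into the existing target table.
INSERT INTO mydb.mytable SELECT * FROM mydb.mytable_temp;

-- Drop the temporary table after verifying the copied data.
DROP TABLE mydb.mytable_temp;
```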

## Step 1. Prepare the Parquet files

> **Note:**
>
> Currently, TiDB Cloud does not support importing Parquet files that contain any of the following data types. If the Parquet files to be imported contain such data types, you need to first regenerate the Parquet files using supported data types (for example, STRING). Alternatively, you can use a service such as AWS Glue to transform the data types.
>
> - `LIST`
> - `NEST STRUCT`
> - `BOOL`
> - `ARRAY`
> - `MAP`
1. If a Parquet file is larger than 256 MB, consider splitting it into smaller files, each around 256 MB in size.

    TiDB Cloud supports importing very large Parquet files but performs best with multiple input files around 256 MB in size. This is because TiDB Cloud can process multiple files in parallel, which can greatly improve the import speed.

2. Name the Parquet files as follows:

    - If a Parquet file contains all data of an entire table, name the file in the `${db_name}.${table_name}.parquet` format, which maps to the `${db_name}.${table_name}` table when you import the data.
    - If the data of one table is separated into multiple Parquet files, append a numeric suffix to these Parquet files. For example, `${db_name}.${table_name}.000001.parquet` and `${db_name}.${table_name}.000002.parquet`. The numeric suffixes can be inconsecutive but must be in ascending order. You also need to pad the numbers with leading zeros so that all suffixes have the same length.

    > **Note:**
    >
    > If you cannot update the Parquet filenames according to the preceding rules (for example, the Parquet file links are also used by your other programs), you can keep the filenames unchanged and use the **File Pattern** in Step 4 to import your source data to a single target table.

## Step 2. Create the target table schemas

Because Parquet files do not contain schema information, before importing data from Parquet files into TiDB Cloud, you need to create the table schemas using either of the following methods:

- **Method 1**: In TiDB Cloud, create the target databases and tables for your source data.

- **Method 2**: In the Amazon S3 or GCS directory where the Parquet files are located, create the target table schema files for your source data as follows (see the example layout after this list):

    1. Create database schema files for your source data.

        If your Parquet files follow the naming rules in Step 1, the database schema files are optional for the data import. Otherwise, they are mandatory.

        Each database schema file must be in the `${db_name}-schema-create.sql` format and contain a `CREATE DATABASE` DDL statement. With this file, TiDB Cloud will create the `${db_name}` database to store your data when you import the data.

        For example, if you create a `mydb-schema-create.sql` file that contains the following statement, TiDB Cloud will create the `mydb` database when you import the data.

        {{< copyable "sql" >}}

        ```sql
        CREATE DATABASE mydb;
        ```
    2. Create table schema files for your source data.

        If you do not include the table schema files in the Amazon S3 or GCS directory where the Parquet files are located, TiDB Cloud will not create the corresponding tables for you when you import the data.

        Each table schema file must be in the `${db_name}.${table_name}-schema.sql` format and contain a `CREATE TABLE` DDL statement. With this file, TiDB Cloud will create the `${table_name}` table in the `${db_name}` database when you import the data.

        For example, if you create a `mydb.mytable-schema.sql` file that contains the following statement, TiDB Cloud will create the `mytable` table in the `mydb` database when you import the data.

        {{< copyable "sql" >}}

        ```sql
        CREATE TABLE mytable (
            ID INT,
            REGION VARCHAR(20),
            COUNT INT
        );
        ```

        > **Note:**
        >
        > Each `${db_name}.${table_name}-schema.sql` file should only contain a single DDL statement. If the file contains multiple DDL statements, only the first one takes effect.
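For example, assuming a hypothetical bucket directory `s3://your-bucket/parquet-import/` and a `mydb.mytable` table split across two data files, a source directory that follows both the naming rules in Step 1 and the schema file rules in this step might look like this:

```
s3://your-bucket/parquet-import/
├── mydb-schema-create.sql         # CREATE DATABASE mydb;
├── mydb.mytable-schema.sql        # CREATE TABLE mytable (...);
├── mydb.mytable.000001.parquet
└── mydb.mytable.000002.parquet
```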

## Step 3. Configure cross-account access

To allow TiDB Cloud to access the Parquet files in the Amazon S3 or GCS bucket, do one of the following:

- If your Parquet files are located in Amazon S3, configure cross-account access to your Amazon S3 bucket.
- If your Parquet files are located in GCS, configure cross-account access to your GCS bucket.

## Step 4. Import Parquet files to TiDB Cloud

To import the Parquet files to TiDB Cloud, take the following steps:

1. Open the **Import** page for your target cluster.

    1. Log in to the TiDB Cloud console and navigate to the **Clusters** page of your project.

        > **Tip:**
        >
        > If you have multiple projects, you can switch to the target project in the left navigation pane of the **Clusters** page.

    2. Click the name of your target cluster to go to its overview page, and then click **Import** in the left navigation pane.

2. On the **Import** page:

    - For a Dedicated Tier cluster, click **Import Data** in the upper-right corner.
    - For a Serverless Tier cluster, click the **import data from S3** link above the upload area.

3. Provide the following information for the source Parquet files:

    - **Data format**: select **Parquet**.
    - **Bucket URI**: select the bucket URI where your Parquet files are located.
    - **Role ARN** (visible only for Amazon S3): enter the Role ARN value.

    If the bucket region differs from your cluster's region, confirm that the cross-region data transfer meets your compliance requirements. Then click **Next**.

    TiDB Cloud starts validating whether it can access your data in the specified bucket URI. After validation, TiDB Cloud tries to scan all the files in the data source using the default file naming pattern, and returns a scan summary result on the left side of the next page. If you get the AccessDenied error, see Troubleshoot Access Denied Errors during Data Import from S3.

4. Modify the file patterns and add the table filter rules if needed.

    - **File Pattern**: modify the file pattern if you want to import Parquet files whose filenames match a certain pattern to a single target table.

        > **Note:**
        >
        > When you use this feature, one import task can only import data to a single table at a time. If you want to use this feature to import data into different tables, you need to import several times, each time specifying a different target table.

        To modify the file pattern, click **Modify**, specify a custom mapping rule between Parquet files and a single target table in the following fields, and then click **Scan**. After that, the data source files will be re-scanned using the provided custom mapping rule.

        - **Source file name**: enter a pattern that matches the names of the Parquet files to be imported. If you have only one Parquet file, you can enter the filename here directly. Note that the names of the Parquet files must include the suffix `.parquet`.

            For example:

            - `my-data?.parquet`: all Parquet files whose names start with `my-data` followed by exactly one character (such as `my-data1.parquet` and `my-data2.parquet`) will be imported into the same target table.
            - `my-data*.parquet`: all Parquet files whose names start with `my-data` will be imported into the same target table.

        - **Target table name**: enter the name of the target table in TiDB Cloud, which must be in the `${db_name}.${table_name}` format. For example, `mydb.mytable`. Note that this field only accepts one specific table name, so wildcards are not supported.

    - **Table Filter**: if you want to filter which tables to import, you can specify table filter rules in this area.

        For example:

        - `db01.*`: all tables in the `db01` database will be imported.
        - `!db02.*`: all tables except those in the `db02` database will be imported. `!` is used to exclude tables from the import.
        - `*.*`: all tables will be imported.

        For more information, see table filter syntax.

5. Click **Next**.

6. On the **Preview** page, confirm the data to be imported and then click **Start Import**.

7. When the import progress shows **Finished**, check the number of imported tables.

    If the number is zero, it means no data files matched the value you entered in the **Source file name** field. In this case, check whether there are any typos in the **Source file name** field and try again.

8. After the import task is completed, you can click **Query Data** on the **Import** page to query your imported data. For more information about how to use Chat2Query, see Explore Your Data with AI-Powered Chat2Query.

When you run an import task, if any unsupported or invalid conversions are detected, TiDB Cloud terminates the import job automatically and reports an importing error.

If you get an importing error, do the following:

1. Drop the partially imported table, as shown in the example after these steps.

  2. Check the table schema file. If there are any errors, correct the table schema file.

  3. Check the data types in the Parquet files.

    If the Parquet files contain any unsupported data types (for example, NEST STRUCT, ARRAY, or MAP), you need to regenerate the Parquet files using supported data types (for example, STRING).

  4. Try the import task again.
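For example, if the partially imported table is the `mydb.mytable` table used in the earlier examples (a hypothetical name; substitute your own database and table), you can drop it before retrying:

{{< copyable "sql" >}}

```sql
-- Remove the partially imported table so that the retried import starts from an empty table.
DROP TABLE IF EXISTS mydb.mytable;
```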

## Supported data types

The following table lists the supported Parquet data types that can be imported to TiDB Cloud.

| Parquet Primitive Type | Parquet Logical Type | Types in TiDB or MySQL |
|---|---|---|
| DOUBLE | DOUBLE | DOUBLE<br/>FLOAT |
| FIXED_LEN_BYTE_ARRAY(9) | DECIMAL(20,0) | BIGINT UNSIGNED |
| FIXED_LEN_BYTE_ARRAY(N) | DECIMAL(p,s) | DECIMAL<br/>NUMERIC |
| INT32 | DECIMAL(p,s) | DECIMAL<br/>NUMERIC |
| INT32 | N/A | INT<br/>MEDIUMINT<br/>YEAR |
| INT64 | DECIMAL(p,s) | DECIMAL<br/>NUMERIC |
| INT64 | N/A | BIGINT<br/>INT UNSIGNED<br/>MEDIUMINT UNSIGNED |
| INT64 | TIMESTAMP_MICROS | DATETIME<br/>TIMESTAMP |
| BYTE_ARRAY | N/A | BINARY<br/>BIT<br/>BLOB<br/>CHAR<br/>LINESTRING<br/>LONGBLOB<br/>MEDIUMBLOB<br/>MULTILINESTRING<br/>TINYBLOB<br/>VARBINARY |
| BYTE_ARRAY | STRING | ENUM<br/>DATE<br/>DECIMAL<br/>GEOMETRY<br/>GEOMETRYCOLLECTION<br/>JSON<br/>LONGTEXT<br/>MEDIUMTEXT<br/>MULTIPOINT<br/>MULTIPOLYGON<br/>NUMERIC<br/>POINT<br/>POLYGON<br/>SET<br/>TEXT<br/>TIME<br/>TINYTEXT<br/>VARCHAR |
| SMALLINT | N/A | INT32 |
| SMALLINT UNSIGNED | N/A | INT32 |
| TINYINT | N/A | INT32 |
| TINYINT UNSIGNED | N/A | INT32 |
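As an illustration of this mapping, the following hypothetical table schema (the table and column names are made up) would accept a Parquet file whose columns use an INT32, an INT64 with the TIMESTAMP_MICROS logical type, a FIXED_LEN_BYTE_ARRAY(N) with the DECIMAL(p,s) logical type, and a BYTE_ARRAY with the STRING logical type:

{{< copyable "sql" >}}

```sql
CREATE TABLE metrics (
    id     INT,            -- Parquet INT32
    ts     DATETIME,       -- Parquet INT64 with TIMESTAMP_MICROS logical type
    amount DECIMAL(10,2),  -- Parquet FIXED_LEN_BYTE_ARRAY(N) with DECIMAL(p,s) logical type
    region VARCHAR(20)     -- Parquet BYTE_ARRAY with STRING logical type
);
```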