
[WIP][HUDI-3625][RFC-60] OSSStorageStrategy POC#12460

Open
zhangyue19921010 wants to merge 6 commits into master from rfc-60-ossstorage-poc

Conversation


zhangyue19921010 (Contributor) commented Dec 11, 2024

Change Logs

OSS Storage strategy POC

For local testing, data is written and queried with Spark, using a unit test (UT) as an example. Assume /tmp/bucketA/ is the user's S3 bucket. The final data distribution is as follows.

The UT passes directly:

hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/spark/sql/hudi/dml/TestInsertTable.scala
test("Test Insert Into with values and OSS storage")

The distribution of metadata

zhangyue61@ZBMac-C02FN40EMD6P ~ % cd /private/var/folders/ww/dc150vl50815wqgbsx_98w9w0000gp/T/spark-8212a35d-7784-47fc-a496-ce499753edf1
zhangyue61@ZBMac-C02FN40EMD6P spark-8212a35d-7784-47fc-a496-ce499753edf1 % ll
total 0
drwxr-xr-x    3 zhangyue61  staff     96 12 11 11:12 ./
drwx------@ 317 zhangyue61  staff  10144 12 11 11:12 ../
drwxr-xr-x    3 zhangyue61  staff     96 12 11 11:12 h1/
zhangyue61@ZBMac-C02FN40EMD6P spark-8212a35d-7784-47fc-a496-ce499753edf1 % tree -a
.
└── h1
    └── .hoodie
        ├── .aux
        │   └── .bootstrap
        │       ├── .fileids
        │       └── .partitions
        ├── .hoodie.properties.crc
        ├── .schema
        ├── .temp
        ├── hoodie.properties
        ├── metadata
        │   ├── .hoodie
        │   │   ├── .aux
        │   │   │   └── .bootstrap
        │   │   │       ├── .fileids
        │   │   │       └── .partitions
        │   │   ├── .hoodie.properties.crc
        │   │   ├── .schema
        │   │   ├── .temp
        │   │   ├── hoodie.properties
        │   │   └── timeline
        │   │       ├── .00000000000000000.deltacommit.inflight.crc
        │   │       ├── .00000000000000000.deltacommit.requested.crc
        │   │       ├── .00000000000000000_20241211031254712.deltacommit.crc
        │   │       ├── .20241211031249856.deltacommit.inflight.crc
        │   │       ├── .20241211031249856.deltacommit.requested.crc
        │   │       ├── .20241211031249856_20241211031302018.deltacommit.crc
        │   │       ├── 00000000000000000.deltacommit.inflight
        │   │       ├── 00000000000000000.deltacommit.requested
        │   │       ├── 00000000000000000_20241211031254712.deltacommit
        │   │       ├── 20241211031249856.deltacommit.inflight
        │   │       ├── 20241211031249856.deltacommit.requested
        │   │       ├── 20241211031249856_20241211031302018.deltacommit
        │   │       └── history
        │   └── files
        │       ├── ..files-0000-0_00000000000000000.log.1_0-0-0.crc
        │       ├── ..files-0000-0_20241211031249856.log.1_0-18-229.crc
        │       ├── ..hoodie_partition_metadata.crc
        │       ├── .files-0000-0_0-5-4_00000000000000000.hfile.crc
        │       ├── .files-0000-0_00000000000000000.log.1_0-0-0
        │       ├── .files-0000-0_20241211031249856.log.1_0-18-229
        │       ├── .hoodie_partition_metadata
        │       └── files-0000-0_0-5-4_00000000000000000.hfile
        └── timeline
            ├── .20241211031249856.commit.requested.crc
            ├── .20241211031249856.inflight.crc
            ├── .20241211031249856_20241211031302213.commit.crc
            ├── 20241211031249856.commit.requested
            ├── 20241211031249856.inflight
            ├── 20241211031249856_20241211031302213.commit
            └── history

22 directories, 30 files

The distribution of data

zhangyue61@ZBMac-C02FN40EMD6P spark-8212a35d-7784-47fc-a496-ce499753edf1 % cd /tmp/bucketA/
zhangyue61@ZBMac-C02FN40EMD6P bucketA % ll
total 0
drwxr-xr-x   5 zhangyue61  wheel   160 12 11 11:12 ./
drwxrwxrwt  70 root        wheel  2240 12 11 11:12 ../
drwxr-xr-x   3 zhangyue61  wheel    96 12 11 11:12 1103196347/
drwxr-xr-x   3 zhangyue61  wheel    96 12 11 11:12 1782571635/
drwxr-xr-x   3 zhangyue61  wheel    96 12 11 11:12 538548677/
zhangyue61@ZBMac-C02FN40EMD6P bucketA % tree -a
.
├── 1103196347
│   └── private
│       └── var
│           └── folders
│               └── ww
│                   └── dc150vl50815wqgbsx_98w9w0000gp
│                       └── T
│                           └── spark-8212a35d-7784-47fc-a496-ce499753edf1
│                               └── h1
│                                   └── dt=2021-01-06
│                                       ├── ..hoodie_partition_metadata.crc
│                                       ├── .977a6c38-41a9-4423-921b-b96ef57c853b-0_1-11-18_20241211031249856.parquet.crc
│                                       ├── .hoodie_partition_metadata
│                                       └── 977a6c38-41a9-4423-921b-b96ef57c853b-0_1-11-18_20241211031249856.parquet
├── 1782571635
│   └── private
│       └── var
│           └── folders
│               └── ww
│                   └── dc150vl50815wqgbsx_98w9w0000gp
│                       └── T
│                           └── spark-8212a35d-7784-47fc-a496-ce499753edf1
│                               └── h1
│                                   └── dt=2021-01-05
│                                       ├── ..hoodie_partition_metadata.crc
│                                       ├── .7d71eaa5-72e3-4ffd-8d90-ee09c27b0675-0_0-11-17_20241211031249856.parquet.crc
│                                       ├── .hoodie_partition_metadata
│                                       └── 7d71eaa5-72e3-4ffd-8d90-ee09c27b0675-0_0-11-17_20241211031249856.parquet
└── 538548677
    └── private
        └── var
            └── folders
                └── ww
                    └── dc150vl50815wqgbsx_98w9w0000gp
                        └── T
                            └── spark-8212a35d-7784-47fc-a496-ce499753edf1
                                └── h1
                                    └── dt=2021-01-07
                                        ├── ..hoodie_partition_metadata.crc
                                        ├── .1bd01552-e01e-4696-a7cb-6dba97dfe2b1-0_2-11-19_20241211031249856.parquet.crc
                                        ├── .hoodie_partition_metadata
                                        └── 1bd01552-e01e-4696-a7cb-6dba97dfe2b1-0_2-11-19_20241211031249856.parquet

31 directories, 12 files

Base Path

/private/var/folders/ww/dc150vl50815wqgbsx_98w9w0000gp/T/spark-8212a35d-7784-47fc-a496-ce499753edf1/h1/.hoodie

Data Path

/tmp/bucketA/1103196347/private/var/folders/ww/dc150vl50815wqgbsx_98w9w0000gp/T/spark-8212a35d-7784-47fc-a496-ce499753edf1/h1/dt=2021-01-06/977a6c38-41a9-4423-921b-b96ef57c853b-0_1-11-18_20241211031249856.parquet 
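Putting the two listings together: the .hoodie metadata stays under the logical base path, while each partition's data files land under a numeric prefix inside the bucket, with the full logical path preserved beneath it. A minimal sketch of that mapping, assuming a hypothetical toPhysicalPath helper and a placeholder hash (the actual OSSStorageStrategy hash function and API may differ):

```java
// Sketch only: BUCKET_ROOT, the hash choice, and toPhysicalPath are
// assumptions for illustration, not the PR's actual OSSStorageStrategy API.
public class StorageStrategySketch {
    static final String BUCKET_ROOT = "/tmp/bucketA";

    // Map a logical partition path to a physical one: a non-negative hash of
    // the logical path becomes a top-level prefix inside the bucket, and the
    // full logical path is preserved underneath it (as in the tree above).
    public static String toPhysicalPath(String basePath, String partitionPath) {
        String logical = basePath + "/" + partitionPath;
        int prefix = logical.hashCode() & Integer.MAX_VALUE; // placeholder hash
        return BUCKET_ROOT + "/" + prefix + logical;
    }

    public static void main(String[] args) {
        // Hypothetical base path; the UT above uses a temp Spark directory.
        System.out.println(toPhysicalPath("/data/h1", "dt=2021-01-06"));
    }
}
```

Each partition hashes to its own prefix, which would explain why the bucket listing shows three numeric directories for the three dt= partitions.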

Impact

None.

Risk level (write none, low medium or high below)

low

Documentation Update

Describe any necessary documentation update if there is any new feature, config, or user-facing change. If not, put "none".

  • The config description must be updated if new configs are added or the default values of the configs are changed
  • Any new feature or user-facing change requires updating the Hudi website. Please create a Jira ticket, attach the
    ticket number here and follow the instruction to make
    changes to the website.

Contributor's checklist

  • Read through contributor's guide
  • Change Logs and Impact were stated clearly
  • Adequate tests were added if applicable
  • CI passed

@github-actions github-actions bot added the size:M PR with lines of changes in (100, 300] label Dec 11, 2024
public static final String FILE_ID_KEY = "hoodie_file_id";
public static final String TABLE_BASE_PATH = "hoodie_table_base_path";
public static final String TABLE_NAME = "hoodie_table_name";
public static final String TABLE_STORAGE_PATH = "hoodie_storage_path";
Contributor
We should reuse existing config names

Contributor Author
Sure thing, this is just for a quick POC test.

return FSUtils.makeWriteToken(getPartitionId(), getStageId(), getAttemptId());
}

protected StoragePath getPartitionPath(String partitionPath) {
Contributor
nit: we can change the method name to toPhysicalPath

Contributor Author
Makes sense!

  HoodiePartitionMetadata partitionMetadata = new HoodiePartitionMetadata(storage, instantTime,
-     new StoragePath(config.getBasePath()),
-     FSUtils.constructAbsolutePath(config.getBasePath(), partitionPath),
+     new StoragePath(config.getBasePath()), getPartitionPath(partitionPath),
Contributor
Why do we need to store physical path in partition metadata?


  public StoragePath makeNewPath(String partitionPath) {
-     StoragePath path = FSUtils.constructAbsolutePath(config.getBasePath(), partitionPath);
+     StoragePath path = getPartitionPath(partitionPath);
Contributor
I assume this change is just for the POC? Ideally the conversion from logical path to physical path should happen within HoodieStorage at L134
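The suggestion above, doing the logical-to-physical conversion inside the storage layer rather than at each call site, can be sketched as follows. All names here are hypothetical; Hudi's real HoodieStorage interface differs:

```java
// Hypothetical sketch: callers keep working with logical paths, and the
// storage implementation applies the strategy internally, so writers like
// makeNewPath never see physical paths.
interface StorageStrategy {
    String toPhysicalPath(String logicalPath);
}

class StrategyAwareStorage {
    private final StorageStrategy strategy;

    StrategyAwareStorage(StorageStrategy strategy) {
        this.strategy = strategy;
    }

    // Every path-taking operation resolves through the strategy here.
    String resolve(String logicalPath) {
        return strategy.toPhysicalPath(logicalPath);
    }
}
```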

  List<Pair<String, StoragePath>> absolutePartitionPathList = partitionSet.stream()
      .map(partition -> Pair.of(
-         partition, FSUtils.constructAbsolutePath(metaClient.getBasePath(), partition)))
+         partition, storage.getAllLocations(partition, config).stream().findFirst().get()))
Contributor
Just curious, would findFirst() work here even in the production?
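To illustrate the concern: findFirst() is only safe if getAllLocations is guaranteed to return exactly one candidate per partition. A defensive sketch (hypothetical names, not Hudi's API) that prefers an existing location and fails loudly on an empty result:

```java
import java.util.List;
import java.util.function.Predicate;

// Hypothetical resolver illustrating the review question: if a strategy can
// return several candidate physical locations (e.g. after re-bucketing),
// blindly taking the first may point at a location that holds no data.
class LocationResolver {
    static String resolve(List<String> candidates, Predicate<String> exists) {
        if (candidates.isEmpty()) {
            throw new IllegalStateException("no physical location for partition");
        }
        // Prefer a candidate that actually exists; fall back to the first.
        return candidates.stream().filter(exists).findFirst().orElse(candidates.get(0));
    }
}
```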

HoodieStorage storage = dataMetaClient.getStorage();
String tableName = dataMetaClient.getTableConfig().getTableName();
StoragePath dataBasePath = dataMetaClient.getBasePath();
long blockSize = storage.getDefaultBlockSize(partitionPath);
Contributor
How would this line leverage storage strategy?

@vinothchandar vinothchandar added the rfc Request for comments label Feb 19, 2025
@yihua yihua self-assigned this Feb 28, 2025

Labels

rfc Request for comments size:M PR with lines of changes in (100, 300]


4 participants