Parquet support #1

Merged
idevasena merged 12 commits into mlcommons:main from wvaske:parquet_support
Mar 4, 2026

Conversation

@wvaske wvaske (Member) commented Mar 3, 2026

No description provided.

zhenghh04 and others added 12 commits May 15, 2025 11:33
* configs (argonne-lcf#284)

* docker: improve docker cache and remove sources (argonne-lcf#287)

* Fixes for v2.0 benchmark (argonne-lcf#289)

* Reorganized the code provided by YardenMa for O_DIRECT support with NPY and NPZ formats and pytorch (argonne-lcf#286)

* Reorganized the code provided by YardenMa from PR argonne-lcf#250 to follow the recommendations in argonne-lcf#250 (review)

* Move parse_npy and parse_npz back out of the NPZReaderODirect and NPYReaderODirect classes due to a copy error

Fix some spelling errors in config.rst

* Updated NPZReaderODirect to be a derived class of NPYReaderODirect to reduce code duplication, and moved the logic for accessing the dictionary key "x" for npz from ReaderFactory into NPZReaderODirect.

Updated NPZReaderODirect and NPYReaderODirect to remove the need to pass in a parser function, making references to them in ReaderFactory similar to the other formats.

Update ci.yml to add a test for the reader.odirect parameter under test-torch-loader-npz

* RAM optimisations for checkpointing (argonne-lcf#283)

* Add KSM optimization (checkpoint)

Reduce memory usage by adding the MADVISE flag to tensor pages and
pausing for an arbitrary duration during initialization.

If KSM is enabled, this allows it time to coalesce virtual pages into
shared physical pages, resulting in significant reductions in RAM
watermark usage for write checkpoint operations. This optimization
does not apply to read operations.

* Reduce RAM Usage for checkpointing (zero1)

With zero1, only the first DP saves the model parameters.
There is no need to allocate these parameters on other DP.

The original RAM formula was:
ram = ((model_size/(TP*PP))+(optimizer_size/(TP*PP*DP)))*(TP*PP*DP)

With the patch, this simplifies to:
ram = model_size + optimizer_size
Which is equal to zero3 RAM usage.

The reduction becomes more significant as DP increases. For example, a
70B model with 8TP, 4PP, and 4DP requires 1300GB without the patch,
but only 910GB with it.

This is still not equivalent to zero3, since only the first DP saves
the model parameters.
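The quoted 70B example can be checked arithmetically. The per-component sizes below are back-solved from the quoted totals (1300 GB and 910 GB), so treat them as illustrative rather than measured values.

```python
# Worked check of the two RAM formulas above for the 70B example
# (TP=8, PP=4, DP=4). model_size and optimizer_size are back-solved
# from the quoted totals, not measured.
TP, PP, DP = 8, 4, 4
model_size, optimizer_size = 130, 780  # GB, illustrative

# Original formula: every DP rank allocates the model parameters.
ram_before = ((model_size / (TP * PP))
              + (optimizer_size / (TP * PP * DP))) * (TP * PP * DP)

# Patched formula: only the first DP rank holds the model parameters.
ram_after = model_size + optimizer_size

assert ram_before == 1300.0  # GB without the patch
assert ram_after == 910      # GB with the patch
```

Note that the original formula reduces to `model_size * DP + optimizer_size`, which is why the saving grows linearly with DP.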

* Move KSM config under checkpoint.ksm subtree

* Update config.rst for nested KSM checkpoint options

* Correct nested KSM config parsing and add test

* Add KSM checkpoint test in CI

* Chore

---------

Co-authored-by: Boris Glimcher <36732377+glimchb@users.noreply.github.com>
Co-authored-by: Johnu George <johnugeorge109@gmail.com>
Co-authored-by: Timothy Chau <162626440+timohty-chau@users.noreply.github.com>
Co-authored-by: LouisDDN <77112282+LouisDDN@users.noreply.github.com>
- Add PARQUET to FormatType enum
- Add SNAPPY to Compression enum for parquet default compression
- Create ParquetReader using PyArrow for reading parquet files
- Create ParquetGenerator using PyArrow for generating synthetic parquet data
- Register parquet in reader and generator factories
- Default compression: snappy (most common for parquet)
- Add LZ4='lz4' and ZSTD='zstd' values to Compression enum
- Positioned before SNAPPY, after XZ
- Add parquet_columns, parquet_row_group_size, parquet_read_mode, parquet_partition_by fields
- Add LoadConfig parsing for dataset.parquet nested YAML section
- Uses OmegaConf.to_container() for DictConfig handling
…alidation

- Column-filtered reads via parquet_columns config
- Schema validation on open raises ValueError for missing columns
- Default and row_group read modes with memory_map=True
- Backward compatible when parquet_columns is empty
- Config-driven multi-dtype schema (float32, float64, string, binary, bool)
- Full compression support: None, Snappy, GZIP, LZ4, ZSTD
- Configurable row_group_size parameter
- Optional Hive-style partitioning via partition_by
- Backward compatible single 'data' column when config is empty
Cast column spec values to native Python types (str, int) to avoid
TypeError when Hydra's DictConfig objects are passed to PyArrow.
@wvaske wvaske requested a review from a team March 3, 2026 00:50
@wvaske wvaske force-pushed the parquet_support branch from 66513b5 to acde3f9 Compare March 3, 2026 15:52

@idevasena idevasena left a comment

I pulled Wes's PR branch into the DLIO_local_changes repository and picked one of the DLRM YAMLs for datagen from Wes's large storage-repo PR, i.e. this file: https://github.com/wvaske/mlperf-storage/blob/24192d19417ba42d1afdc6d0c0c1a2fd1da32d2d/configs/dlio/workload/dlrm_datagen.yaml.

The static and verification tests pass, as shown in the logs below, and the parquet format override works with the DLRM workload config (the YAML above). I used the following command to test:

(dlio-parquet-test) ssgroot@test82:~/PR_verify_mlperf/DLIO_local_changes$ mpirun -np 1 dlio_benchmark workload=dlrm_datagen

Generated data: (screenshot)

Static verification with resolution of imports: (screenshot)

ParquetGenerator standalone unit test: (screenshot)

Approving this PR based on the testing above.

@idevasena idevasena merged commit 7017ba2 into mlcommons:main Mar 4, 2026
FileSystemGuy pushed a commit that referenced this pull request Mar 23, 2026
FileSystemGuy pushed a commit that referenced this pull request Mar 25, 2026
FileSystemGuy pushed a commit that referenced this pull request Mar 25, 2026