Conversation
* configs (argonne-lcf#284)
* docker: improve docker cache and remove sources (argonne-lcf#287)
* Fixes for v2.0 benchmark (argonne-lcf#289)
* Reorganized the code provided by YardenMa for O_DIRECT support with NPY and NPZ formats and PyTorch (argonne-lcf#286)
  * Reorganized the code provided by YardenMa from PR argonne-lcf#250 to follow the recommendations in argonne-lcf#250 (review)
  * Moved parse_npy and parse_npz back out of the NPZReaderODIRECT and NPYReaderODirect classes due to a copy error; fixed some spelling errors in config.rst
  * Updated NPZReaderODirect to be a derived class of NPYReaderODirect to reduce code duplication, and moved the logic of accessing the dictionary key "x" for npz from ReaderFactory into NPZReaderODIRECT
  * Updated NPZReaderODirect and NPYReaderODirect to remove the need to pass in a parser function, and made referencing them in ReaderFactory similar to the other formats
  * Updated ci.yml to add a test for the reader.odirect parameter under test-torch-loader-npz
* RAM optimisations for checkpointing (argonne-lcf#283)
  * Add KSM optimization (checkpoint): reduce memory usage by adding the MADVISE flag to tensor pages and pausing for an arbitrary duration during initialization. If KSM is enabled, this gives it time to coalesce virtual pages into shared physical pages, significantly reducing the RAM watermark for write checkpoint operations. This optimization does not apply to read operations.
  * Reduce RAM usage for checkpointing (zero1): with zero1, only the first DP rank saves the model parameters, so there is no need to allocate these parameters on the other DP ranks. The original RAM formula was:

    ram = ((model_size/(TP*PP)) + (optimizer_size/(TP*PP*DP))) * (TP*PP*DP)

    With the patch, this simplifies to:

    ram = model_size + optimizer_size

    which equals zero3 RAM usage. The reduction becomes more significant as DP increases: for example, a 70B model with 8 TP, 4 PP, and 4 DP requires 1300 GB without the patch, but only 910 GB with it. This is still not equivalent to zero3, since only the first DP rank saves the model parameters.
  * Move KSM config under checkpoint.ksm subtree
  * Update config.rst for nested KSM checkpoint options
  * Correct nested KSM config parsing and add test
  * Add KSM checkpoint test in CI
  * Chore

---------

Co-authored-by: Boris Glimcher <36732377+glimchb@users.noreply.github.com>
Co-authored-by: Johnu George <johnugeorge109@gmail.com>
Co-authored-by: Timothy Chau <162626440+timohty-chau@users.noreply.github.com>
Co-authored-by: LouisDDN <77112282+LouisDDN@users.noreply.github.com>
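The zero1 formula simplification above can be checked numerically. A minimal sketch: the 130 GB / 780 GB split between parameters and optimizer state is an illustrative assumption chosen because it reproduces the quoted 1300 GB and 910 GB figures; the function names are hypothetical, not from the patch.

```python
def ram_without_patch(model_size, optimizer_size, tp, pp, dp):
    # Original formula: every DP rank allocates the model shard, so the
    # per-rank model term is effectively multiplied back up by DP.
    return ((model_size / (tp * pp)) + (optimizer_size / (tp * pp * dp))) * (tp * pp * dp)

def ram_with_patch(model_size, optimizer_size, tp, pp, dp):
    # Patched: only the first DP rank allocates model parameters, so total
    # RAM no longer depends on DP (same as zero3 RAM usage).
    return model_size + optimizer_size

# Illustrative 70B example (assumed split: ~130 GB parameters, ~780 GB
# optimizer state), which reproduces the 1300 GB / 910 GB figures above.
print(ram_without_patch(130, 780, tp=8, pp=4, dp=4))  # 1300.0
print(ram_with_patch(130, 780, tp=8, pp=4, dp=4))     # 910
```

Algebraically, the original expression reduces to `model_size*DP + optimizer_size`, which is why the savings grow with DP.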
- Add PARQUET to FormatType enum - Add SNAPPY to Compression enum for parquet default compression - Create ParquetReader using PyArrow for reading parquet files - Create ParquetGenerator using PyArrow for generating synthetic parquet data - Register parquet in reader and generator factories - Default compression: snappy (most common for parquet)
- Add LZ4='lz4' and ZSTD='zstd' values to Compression enum - Positioned before SNAPPY, after XZ
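The enum change might look like the following sketch. Only the LZ4 and ZSTD members (and their placement after XZ, before SNAPPY) come from the commit note; the surrounding members and the class shape are assumptions for illustration.

```python
from enum import Enum

class Compression(Enum):
    # Pre-existing members (assumed for illustration).
    NONE = 'none'
    GZIP = 'gzip'
    BZIP2 = 'bz2'
    XZ = 'xz'
    # New members for parquet support, positioned after XZ, before SNAPPY.
    LZ4 = 'lz4'
    ZSTD = 'zstd'
    SNAPPY = 'snappy'
```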
- Add parquet_columns, parquet_row_group_size, parquet_read_mode, parquet_partition_by fields - Add LoadConfig parsing for dataset.parquet nested YAML section - Uses OmegaConf.to_container() for DictConfig handling
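A hypothetical example of what the nested `dataset.parquet` YAML section might look like. The key names and values below are illustrative guesses based on the fields listed above, not taken from the actual config schema:

```yaml
dataset:
  format: parquet
  parquet:
    columns:            # maps to parquet_columns
      - name: label
        dtype: float32
    row_group_size: 100000   # maps to parquet_row_group_size
    read_mode: row_group     # maps to parquet_read_mode
    partition_by: []         # maps to parquet_partition_by
```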
…alidation - Column-filtered reads via parquet_columns config - Schema validation on open raises ValueError for missing columns - Default and row_group read modes with memory_map=True - Backward compatible when parquet_columns is empty
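The schema-validation behavior described above can be sketched as a small helper. The function name and signature are hypothetical; in the real reader the available names would come from the parquet file's schema on open.

```python
def validate_columns(schema_names, requested_columns):
    """Raise ValueError if a requested column is missing from the file schema.

    An empty request means "read all columns" (backward compatible with
    configs that leave parquet_columns unset).
    """
    available = set(schema_names)
    missing = [c for c in requested_columns if c not in available]
    if missing:
        raise ValueError(f"Columns not found in parquet schema: {missing}")
    return list(requested_columns) or list(schema_names)
```

The returned list is what would then be passed as the `columns=` selection for the actual read.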
- Config-driven multi-dtype schema (float32, float64, string, binary, bool) - Full compression support: None, Snappy, GZIP, LZ4, ZSTD - Configurable row_group_size parameter - Optional Hive-style partitioning via partition_by - Backward compatible single 'data' column when config is empty
Cast column spec values to native Python types (str, int) to avoid TypeError when Hydra's DictConfig objects are passed to PyArrow.
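The fix can be sketched as a small normalization step applied to the column spec before it reaches PyArrow. The key names (`name`, `size`, with `size` defaulting to 1, echoing the scalar-size note below) are hypothetical illustrations of the pattern, not the actual field names.

```python
def to_native_spec(columns):
    # Cast each column spec entry to native Python types so PyArrow
    # receives plain str/int rather than Hydra DictConfig wrapper values,
    # which otherwise trigger a TypeError inside PyArrow.
    return [{'name': str(c['name']), 'size': int(c.get('size', 1))} for c in columns]
```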
…and improve read performance.
… scalar size=1 path and default size=1
…into parquet_support
idevasena left a comment
I pulled Wes's PR branch of the DLIO_local_changes repository and picked one of the dlrm datagen YAMLs from Wes's large PR against the storage repo, i.e. this file: https://github.com/wvaske/mlperf-storage/blob/24192d19417ba42d1afdc6d0c0c1a2fd1da32d2d/configs/dlio/workload/dlrm_datagen.yaml.
The static and verification tests pass, as the logs below show, and the parquet format override also works well with the dlrm workload config (the YAML above). I used the following command to test:
(dlio-parquet-test) ssgroot@test82:~/PR_verify_mlperf/DLIO_local_changes$ mpirun -np 1 dlio_benchmark workload=dlrm_datagen
Generated data:
Static Verification with resolution of imports:
ParquetGenerator standalone unit test:
Approving this PR based on the testing above.