[SPARK-47036][SS][3.5] Cleanup RocksDB file tracking for previously uploaded files if files were deleted from local directory #45206

Closed

Conversation

@sahnib (Contributor) commented Feb 21, 2024

Backports PR #45092 to Spark 3.5

What changes were proposed in this pull request?

This change cleans up any dangling files that are tracked as previously uploaded if they have been deleted from the local filesystem. The deletion can happen when a RocksDB compaction races with a commit: the compaction completes after the commit, and an older version is then loaded on the same executor.
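To make the direction of the fix concrete, here is a minimal sketch in Scala. The names (`FileTracker`, `localToDfsFile`, `pruneDeletedLocalFiles`) are hypothetical and chosen only for illustration; this is not the actual `RocksDBFileManager` code. The idea: when a version is loaded, drop any tracking entry whose local file no longer exists, so that file is uploaded again at the next commit.

```scala
// Minimal sketch only; FileTracker and its members are hypothetical names,
// not the real RocksDBFileManager API.
import java.io.File
import scala.collection.mutable

class FileTracker(localDir: File) {
  // local SST file name -> DFS file name it was previously uploaded as
  private val localToDfsFile = mutable.Map.empty[String, String]

  def recordUpload(localName: String, dfsName: String): Unit =
    localToDfsFile.put(localName, dfsName)

  def alreadyUploaded(localName: String): Boolean =
    localToDfsFile.contains(localName)

  /** Drop tracking entries for files that are gone locally, e.g. deleted by a
   *  background compaction that finished after the last commit. Called when an
   *  older version is loaded, so the next commit re-uploads those files. */
  def pruneDeletedLocalFiles(): Unit =
    localToDfsFile.keys.toList
      .filterNot(name => new File(localDir, name).exists())
      .foreach(localToDfsFile.remove)
}
```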

Why are the changes needed?

The changes are needed to prevent RocksDB versionId mismatch errors (which require users to clean the checkpoint directory and retry the query).

A particular scenario where this can happen is provided below:

  1. Version V1 is loaded on executor A; the RocksDB state store has the table files 195.sst, 196.sst, 197.sst and 198.sst.
  2. State changes are made, which result in the creation of a new table file, 200.sst.
  3. The state store is committed as version V2. The SST file 200.sst (as 000200-8c80161a-bc23-4e3b-b175-cffe38e427c7.sst) is uploaded to DFS, and the previous 4 files are reused. A new metadata file is created to track the exact SST files with their unique IDs, and is uploaded together with the RocksDB Manifest as part of V2.zip.
  4. A RocksDB compaction is triggered at the same time. The compaction creates a new L1 file (201.sst) and deletes the existing 5 SST files.
  5. The Spark stage is retried.
  6. Version V1 is reloaded on the same executor. The local files are inspected, and 201.sst is deleted. The 4 SST files in version V1 are downloaded again to the local file system.
  7. Any local files that are deleted as part of the version load are also removed from the local → DFS file upload tracking. However, the files already deleted as a result of the compaction are not removed from tracking. This is the bug that caused the failure (a toy sketch of the resulting tracking state follows this list).
  8. The state store is committed as version V2 again. However, the local mapping of SST files to DFS file paths still has 200.sst in its tracking, so the SST file is not re-uploaded. A new metadata file is created to track the exact SST files with their unique IDs, and is uploaded with the new RocksDB Manifest as part of V2.zip. (The V2.zip file is overwritten here atomically.)
  9. A new executor tries to load version V2. However, the SST file uploaded in step 3 is now incompatible with the Manifest file uploaded in step 8, resulting in the versionId mismatch failure.
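The toy walkthrough below replays steps 2-8 against a bare map to show the stale entry that suppresses the re-upload of 200.sst. All names are illustrative, not the real state store internals.

```scala
// Toy reproduction of the stale-tracking state; all names are illustrative.
import java.io.File
import java.nio.file.Files
import scala.collection.mutable

object StaleTrackingWalkthrough extends App {
  val localDir = Files.createTempDirectory("rocksdb-local").toFile
  // local SST file name -> DFS file it was uploaded as
  val uploaded = mutable.Map.empty[String, String]

  def touch(name: String): Unit = new File(localDir, name).createNewFile()
  def delete(name: String): Unit = new File(localDir, name).delete()

  // Steps 2-3: 200.sst is created and uploaded; the upload is tracked.
  touch("200.sst")
  uploaded("200.sst") = "000200-8c80161a-bc23-4e3b-b175-cffe38e427c7.sst"

  // Step 4: compaction deletes 200.sst (and the other local SST files).
  delete("200.sst")

  // Steps 5-6: the stage retry reloads V1 and the replayed state changes
  // recreate a *different* 200.sst, but the old tracking entry is still there.
  touch("200.sst")

  // Step 8: the commit consults the tracking map, sees 200.sst as already
  // uploaded, and skips the upload -- the bug described in step 7.
  val skipped = uploaded.contains("200.sst")
  println(s"skips re-upload of 200.sst: $skipped") // prints true
}
```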

Does this PR introduce any user-facing change?

No

How was this patch tested?

Added unit test cases to cover the scenario where some files were deleted on the file system.

The test case fails on the existing master with the error `Mismatch in unique ID on table file 16`, and succeeds with the changes in this PR.
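As an illustration only, the following self-contained sketch mirrors the shape of such a test using plain asserts and hypothetical helpers (`loadVersion`, `commit`); the real tests exercise the actual RocksDB state store in Spark's test suite.

```scala
// Hedged sketch of the test shape; names and helpers are illustrative only.
import java.io.File
import java.nio.file.Files
import scala.collection.mutable

object DeletedLocalFilesTestSketch extends App {
  val localDir = Files.createTempDirectory("rocksdb-test").toFile
  val tracked = mutable.Map.empty[String, String] // local name -> DFS name

  // The fix under test: on version load, drop entries whose local file is gone.
  def loadVersion(): Unit =
    tracked.keys.toList
      .filterNot(f => new File(localDir, f).exists())
      .foreach(tracked.remove)

  // Commit "uploads" (here: just records) every local file not already tracked
  // and returns the list of files it uploaded.
  def commit(localFiles: Seq[String]): Seq[String] = {
    val toUpload = localFiles.filterNot(tracked.contains)
    toUpload.foreach(f => tracked(f) = s"dfs-$f")
    toUpload
  }

  // First commit uploads 16.sst.
  new File(localDir, "16.sst").createNewFile()
  assert(commit(Seq("16.sst")) == Seq("16.sst"))

  // 16.sst is deleted locally (compaction), the stage retries, the version is
  // reloaded, and the file is recreated by replaying the state changes.
  new File(localDir, "16.sst").delete()
  loadVersion()
  new File(localDir, "16.sst").createNewFile()

  // Without the pruning in loadVersion(), this commit would upload nothing and
  // a reader would later hit "Mismatch in unique ID on table file 16".
  assert(commit(Seq("16.sst")) == Seq("16.sst"))
  println("deleted local file was re-uploaded after reload")
}
```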

Was this patch authored or co-authored using generative AI tooling?

No

Closes apache#45092 from sahnib/rocksdb-compaction-file-tracking-fix.

Authored-by: Bhuwan Sahni <bhuwan.sahni@databricks.com>
Signed-off-by: Jungtaek Lim <kabhwan.opensource@gmail.com>
@HeartSaVioR HeartSaVioR changed the title [Backport][Spark-3.5][SPARK-47036][SS] Cleanup RocksDB file tracking for previously uploaded files if files were deleted from local directory [SPARK-47036][SS][3.5] Cleanup RocksDB file tracking for previously uploaded files if files were deleted from local directory Feb 21, 2024
@HeartSaVioR (Contributor) left a comment

+1 pending CI.

@HeartSaVioR (Contributor)

Thanks! Merging to 3.5.

HeartSaVioR pushed a commit that referenced this pull request Feb 22, 2024

Closes #45206 from sahnib/spark-3.5-rocks-db-fix.

Authored-by: Bhuwan Sahni <bhuwan.sahni@databricks.com>
Signed-off-by: Jungtaek Lim <kabhwan.opensource@gmail.com>