
Fix recover ssd cache data by preserving file id map #10409

Closed
wants to merge 1 commit

Conversation

@xiaoxmeng (Contributor) commented Jul 6, 2024

In Meta's interactive Prestissimo workload, we found that few reads hit the SSD cache after a cluster upgrade or worker reboot when we rerun the same set of queries. Sometimes this is because of the non-deterministic execution order of multiple filter conditions in table scan: if the filter rate is high, the last few filter conditions might not be executed, and since that decision is local to a split, the filter condition execution order varies across splits. This can cause the rerun query to read from remote storage instead of the local SSD cache, but it only sends a small portion of reads to remote storage.

The main problem lies in recovering cache entries from the checkpoint. Even though we guarantee that the same source file name maps to the same file id, we can't guarantee that the newly assigned file id will be mapped to the same SSD file shard. This causes cache space pollution: even if we recover all the SSD cache entries from the checkpoint file, a cache lookup can't find them when the newly assigned file id maps to a different SSD file shard.
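To make the failure mode concrete, here is a minimal standalone sketch; the modulo scheme, shard count, and file ids are illustrative assumptions, not the actual Velox hashing. If the shard is derived from the file id, a different id assignment after restart routes lookups for the same file to a different shard than the one holding the recovered entries.

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical shard selection: derive the SSD file shard from the file id.
// Velox's real scheme may differ; this only illustrates the sensitivity.
uint32_t shardForFileId(uint64_t fileId, uint32_t numShards) {
  return static_cast<uint32_t>(fileId % numShards);
}

int main() {
  constexpr uint32_t kNumShards = 4;
  // Before restart the file was assigned id 10 and cached on shard 2; after
  // restart ids are handed out in a different order, the same file gets id 11,
  // and lookups go to shard 3, missing the recovered entries on shard 2.
  std::cout << shardForFileId(10, kNumShards) << "\n";  // prints 2
  std::cout << shardForFileId(11, kNumShards) << "\n";  // prints 3
  return 0;
}
```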

To fix this problem, we add a new constructor to StringIdMap that takes both the id and the string so it can recover the previous string id map. The caller guarantees that the passed id/string pairs do not conflict, and StringIdMap throws if that guarantee is violated. Correspondingly, when an SSD file recovers from its checkpoint file, it uses the persisted file id and file name to rebuild the string id map.
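As a rough illustration of the recovery contract (a self-contained sketch, not Velox's StringIdMap API; the class and method names are made up): recovered entries are registered under their persisted ids and conflicting pairs throw, while new strings continue to get fresh ids above the recovered range. The real StringIdMap additionally does reference counting and tracks pinned size, which this sketch omits.

```cpp
#include <algorithm>
#include <cstdint>
#include <stdexcept>
#include <string>
#include <unordered_map>

// Standalone sketch of the recovery idea (not the Velox StringIdMap API):
// entries read back from a checkpoint are re-registered under their original
// ids, and any conflicting pair is rejected so recovery fails loudly.
class RecoverableIdMap {
 public:
  // Recovery path: bind 'str' to the id persisted in the checkpoint.
  void addRecovered(uint64_t id, const std::string& str) {
    auto byString = stringToId_.find(str);
    auto byId = idToString_.find(id);
    if ((byString != stringToId_.end() && byString->second != id) ||
        (byId != idToString_.end() && byId->second != str)) {
      throw std::runtime_error("Conflicting id/string pair during recovery");
    }
    stringToId_[str] = id;
    idToString_[id] = str;
    nextId_ = std::max(nextId_, id + 1);
  }

  // Normal path: reuse an existing id if present, otherwise assign a new one.
  uint64_t idOf(const std::string& str) {
    auto it = stringToId_.find(str);
    if (it != stringToId_.end()) {
      return it->second;
    }
    const uint64_t id = nextId_++;
    stringToId_[str] = id;
    idToString_[id] = str;
    return id;
  }

 private:
  std::unordered_map<std::string, uint64_t> stringToId_;
  std::unordered_map<uint64_t, std::string> idToString_;
  uint64_t nextId_{0};
};
```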

Verified on a Meta interactive cluster with SSD: after a cluster reboot, a second run read all the data from the local SSD cache, and the end-to-end query latency was reduced by half compared with the first run.

@facebook-github-bot added the CLA Signed label Jul 6, 2024
netlify bot commented Jul 6, 2024

Deploy Preview for meta-velox canceled.

🔨 Latest commit: 6642fc0
🔍 Latest deploy log: https://app.netlify.com/sites/meta-velox/deploys/66890ea6b6ec1900084bf341

@facebook-github-bot (Contributor) commented
@xiaoxmeng has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@duanmeng duanmeng self-requested a review July 6, 2024 15:35
@facebook-github-bot (Contributor) commented
@xiaoxmeng merged this pull request in 770f2ad.


Conbench analyzed the 1 benchmark run on commit 770f2ad2.

There were no benchmark performance regressions. 🎉

The full Conbench report has more details.

@xiaoxmeng xiaoxmeng deleted the cp branch August 15, 2024 15:49