
Add a OnManualFlushScheduled callback in event listener #12631

Closed

Conversation

Contributor

@jowlyzhang commented May 9, 2024

As titled. Also added the newest user-defined timestamp into the MemTableInfo. This can be useful info in the callback.

Added some unit tests as examples for how users can use two separate approaches to allow manual flushes / manual compactions to go through when the user-defined timestamps in memtable only feature is enabled. One approach relies on selectively increasing the cutoff timestamp in the OnMemTableSealed callback when the seal is initiated by a manual flush. The other approach is to increase the cutoff timestamp in the OnManualFlushScheduled callback. The caveats of each approach are also documented in the unit tests.
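For illustration, here is a minimal sketch of the second approach. The callback name, its signature, and the ManualFlushInfo argument type are assumptions taken from this PR; EventListener, Options::listeners, and DB::IncreaseFullHistoryTsLow() are existing RocksDB APIs, and the timestamp encoding is left to the application.

```cpp
// Sketch only, not the PR's unit test: bump the cutoff timestamp whenever a
// manual flush is scheduled, so the flush is not blocked by the
// user-defined-timestamps-in-memtable-only feature.
#include <mutex>
#include <string>
#include <utility>
#include <vector>

#include "rocksdb/db.h"
#include "rocksdb/listener.h"

class ManualFlushUdtListener : public rocksdb::EventListener {
 public:
  // The column family handle is only known after DB::Open, so it is injected
  // via a setter rather than the constructor.
  void SetColumnFamily(rocksdb::ColumnFamilyHandle* cfh) { cfh_ = cfh; }

  // The application calls this after each write with the newest user-defined
  // timestamp it has written, already encoded for the column family's
  // timestamp-aware comparator.
  void SetNewestTimestamp(std::string encoded_ts) {
    std::lock_guard<std::mutex> lk(mu_);
    newest_ts_ = std::move(encoded_ts);
  }

  // Approach 2: when a manual flush is scheduled, raise full_history_ts_low
  // so the newest timestamp in the memtables no longer blocks the flush.
  // Signature assumed from this PR's diff.
  void OnManualFlushScheduled(
      rocksdb::DB* db,
      const std::vector<rocksdb::ManualFlushInfo>& /*info*/) override {
    std::string cutoff;
    {
      std::lock_guard<std::mutex> lk(mu_);
      cutoff = newest_ts_;
    }
    if (cfh_ != nullptr && !cutoff.empty()) {
      db->IncreaseFullHistoryTsLow(cfh_, cutoff).PermitUncheckedError();
    }
  }

 private:
  rocksdb::ColumnFamilyHandle* cfh_ = nullptr;
  std::mutex mu_;
  std::string newest_ts_;
};
```

The listener would be registered in Options::listeners before DB::Open and handed the column family handle afterwards; whether the cutoff needs to be the newest timestamp itself or one past it depends on whether the flush-eligibility check treats full_history_ts_low as inclusive or exclusive.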

@jowlyzhang marked this pull request as draft May 9, 2024 16:53
@jowlyzhang changed the title from "Add newest user-defined timestamp to MemTableInfo" to "Add a OnManualFlushScheduled callback in event listener" May 9, 2024
@jowlyzhang marked this pull request as ready for review May 9, 2024 20:23
@jowlyzhang requested a review from ajkr May 9, 2024 20:23
@@ -2426,6 +2443,8 @@ Status DBImpl::FlushMemTable(ColumnFamilyData* cfd,
}
}
}

NotifyOnManualFlushScheduled({cfd}, flush_reason);
Contributor

The FlushRequest generated by FlushMemTable() specifies std::numeric_limits<uint64_t>::max() as the max memtable ID to persist. Is it possible the background flush includes memtables newer than the one in memtable_ids_to_wait? That would mean increasing the cutoff TS here would not guarantee the flush can happen.

Contributor Author

That's a good question. Yes: if another memtable is sealed between the IncreaseFullHistoryTsLow call here and the time the background flush checks whether timestamps have expired, the cutoff TS here may not be sufficient to guarantee the flush can happen.

If that memtable seal is caused by another manual-flush-type event, this callback would also have been invoked to increase the cutoff TS to a higher point. If it is caused by regular writes filling up a memtable, this would be an issue, e.g. when the write rate is very high.

I think updating the memtable id in FlushRequest to be GetLatestMemTableID instead of std::numeric_limits<uint64_t>::max() can help make sure the flush can still proceed in this case. Do you have any concerns about making this change?

Contributor

@ajkr commented Jun 6, 2024

> I think updating the memtable id in FlushRequest to be GetLatestMemTableID instead of std::numeric_limits<uint64_t>::max() can help make sure the flush can still proceed in this case. Do you have any concerns about making this change?

Sorry for the delay. It's hard to say. I think that bounding the flushed memtable ID was introduced for atomic_flush. We didn't use it everywhere because it's more efficient (for write-amp, at least) to greedily pick as many memtables as possible at flush-time. That can make a difference when the flush queue is long, which is rare so it's a minor optimization. Also, one could argue that foreground flush latency is more important than write-amp in case of manual flush. So, introducing the limit is fine with me.

Still, would it be enough? There could be a case where manual flush does not generate any flush request because there is already one queued for automatic flush. If that one fails or is postponed, do we add a new flush request with unbounded memtable ID?

Contributor Author

Thanks for the detailed context on this optimization. At flush job creation time, it will check for the latest memtable id again and use that to pick memtables to flush:

uint64_t max_memtable_id =

So I think even if we update this manual FlushRequest to use the latest memtable id, we will still have the optimization you mentioned.

If this manual flush's enqueuing effort didn't succeed because another automatic flush request is already enqueued, note that those requests are generated with GenerateFlushRequest:

void DBImpl::GenerateFlushRequest(const autovector<ColumnFamilyData*>& cfds,

so the already enqueued request should have a memtable id that is equal to or smaller than the current latest memtable id. In theory, if the new cutoff timestamp is high enough to let the current latest memtable id proceed, that request can proceed too.

If the automatic flush fails, presumably that would trigger the error recovery flush, which goes through this manual flush path and enqueues another request. Actually, the RetryFlushesForErrorRecovery path does not go through this path; I think I should add invoking this callback in that path too.

Contributor

It might be hard to enforce that the memtable picking is non-greedy. Alternatively you could add a post-wait callback. Then the user can (see the sketch after this list):

  • In the post-schedule, pre-wait callback:
    • IncreaseFullHistoryTsLow()
    • Bump an "in manual flush" counter
  • In the post-wait callback:
    • Decrement the "in manual flush" counter
  • In the seal callback:
    • If the "in manual flush" counter is nonzero, call IncreaseFullHistoryTsLow()
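A rough sketch of this counter-based scheme follows. OnMemTableSealed is an existing EventListener callback and OnManualFlushScheduled is the one added by this PR (signature assumed from the diff); the post-wait hook, named OnManualFlushCompleted here, is purely hypothetical since it is only being proposed in this thread.

```cpp
// Sketch only: combines the new pre-wait callback, a hypothetical post-wait
// hook, and the existing seal callback so that a memtable sealed during a
// manual flush keeps the cutoff timestamp bumped.
#include <atomic>
#include <mutex>
#include <string>
#include <utility>
#include <vector>

#include "rocksdb/db.h"
#include "rocksdb/listener.h"

class ManualFlushCutoffListener : public rocksdb::EventListener {
 public:
  void SetDbAndColumnFamily(rocksdb::DB* db, rocksdb::ColumnFamilyHandle* cfh) {
    db_ = db;
    cfh_ = cfh;
  }

  // Kept up to date by the application with the newest encoded user-defined
  // timestamp it has written.
  void SetCutoffCandidate(std::string encoded_ts) {
    std::lock_guard<std::mutex> lk(mu_);
    cutoff_candidate_ = std::move(encoded_ts);
  }

  // Post-schedule, pre-wait: record a manual flush in flight, then bump the cutoff.
  void OnManualFlushScheduled(
      rocksdb::DB* /*db*/,
      const std::vector<rocksdb::ManualFlushInfo>& /*info*/) override {
    in_manual_flush_.fetch_add(1);
    BumpCutoff();
  }

  // Hypothetical post-wait hook (not an existing EventListener callback): the
  // manual flush has finished waiting, so stop force-bumping the cutoff.
  void OnManualFlushCompleted() { in_manual_flush_.fetch_sub(1); }

  // Seal callback: while a manual flush is in flight, keep the cutoff ahead of
  // any newly sealed memtable so the flush is not blocked.
  void OnMemTableSealed(const rocksdb::MemTableInfo& /*info*/) override {
    if (in_manual_flush_.load() > 0) {
      BumpCutoff();
    }
  }

 private:
  void BumpCutoff() {
    std::string cutoff;
    {
      std::lock_guard<std::mutex> lk(mu_);
      cutoff = cutoff_candidate_;
    }
    if (db_ != nullptr && cfh_ != nullptr && !cutoff.empty()) {
      db_->IncreaseFullHistoryTsLow(cfh_, cutoff).PermitUncheckedError();
    }
  }

  rocksdb::DB* db_ = nullptr;
  rocksdb::ColumnFamilyHandle* cfh_ = nullptr;
  std::atomic<int> in_manual_flush_{0};
  std::mutex mu_;
  std::string cutoff_candidate_;
};
```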

Contributor Author

Thanks for this idea of combining these callbacks. Let me implement such a flow in a follow-up as an example of how to handle this edge case.

Contributor

@ajkr left a comment

LGTM!

I realized the concern we're discussing is more about the callback usage than the callback itself (this PR). I think the callback itself looks good. If we can add stronger guarantees to make it more useful, that's even better.


@facebook-github-bot
Contributor

@jowlyzhang has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@jowlyzhang merged this pull request in 44aceb8.
