
[DNM/PoC]: use lsn leases to temporarily block gc #7996

Closed
wants to merge 14 commits

Conversation

@yliang412 (Contributor) commented Jun 7, 2024

Part of #7497

Problem

This pull request is a PoC opened for early review and comments. The feedback will be incorporated into later PRs.

With the LSN lease API introduced in #7808, we want to implement the real lease logic to prevent GC from proceeding past a leased LSN when garbage-collecting layers in a timeline.

Design doc at https://www.notion.so/neondatabase/LSN-Lease-Design-f8aa8333a9b7431d9905785ba7599745?pvs=4.

Summary of changes

  • Maintain an in-memory mapping from LSN to lease expiration time in GCInfo (see the sketch after this list).
  • Change the lease period to 10 min (based on discussion here, to make the lease heartbeat rare; still seeking feedback).
  • Remove expired leases from the map when refreshing GCInfo.
  • Block GC (similar to how we use retain_lsns).
  • A unit test that demonstrates the concept.
  • Delay GC by one lease period at startup.
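A minimal sketch of this lease bookkeeping, with assumed type shapes (the real Lsn, LsnLease, and GcInfo types live in the pageserver crate and differ in detail):

use std::collections::BTreeMap;
use std::time::SystemTime;

type Lsn = u64; // stand-in; the real Lsn is a newtype in the utils crate

struct LsnLease {
    valid_until: SystemTime,
}

struct GcInfo {
    // LSN -> lease; a BTreeMap keeps entries ordered, so the maximum
    // leased LSN is simply the last key.
    leases: BTreeMap<Lsn, LsnLease>,
}

impl GcInfo {
    // Drop expired leases; intended to run whenever GcInfo is refreshed.
    fn remove_expired_leases(&mut self, now: SystemTime) {
        self.leases.retain(|_lsn, lease| lease.valid_until > now);
    }

    // The largest LSN still under a valid lease; GC must not proceed past it.
    fn max_lsn_with_valid_lease(&self) -> Option<Lsn> {
        self.leases.last_key_value().map(|(lsn, _)| *lsn)
    }
}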

Checklist before requesting a review

  • I have performed a self-review of my code.
  • If it is a core feature, I have added thorough tests.
  • Do we need to implement analytics? If so, did you add the relevant metrics to the dashboard?
  • If this PR requires public announcement, mark it with /release-notes label and add several sentences in this section.

Checklist before merging

  • Do not forget to reformat the commit message so it does not include the above checklist.


github-actions bot commented Jun 7, 2024

No tests were run or test report is not available

Test coverage report is not available

The comment gets automatically updated with the latest test results
db378d1 at 2024-06-17T19:19:33.110Z :recycle:

@yliang412 self-assigned this Jun 10, 2024
// Gets the maximum LSN that holds a valid lease.
// Caveat: this value could be stale, since we rely on refresh_gc_info to invalidate leases,
// so some leases may have been invalidated between the refresh and here.
let max_lsn_with_valid_lease = gc_info.leases.last_key_value().map(|(lsn, _)| *lsn);
Member commented:
The current implementation would keep everything above the lease. Would it be better to handle this the same way as retain_lsn, so that we only keep the necessary images? I forget where our discussion ended up last time.

Contributor Author commented:
I'm actually trying to handle it the same way retain_lsns does right now. The way the current GC uses retain_lsns is that, for each layer, it loops through each retain_lsn in retain_lsns and keeps the layer if layer.get_lsn_range().start <= retain_lsn.

In my case, I'm doing layer.get_lsn_range().start <= max_lsn_with_valid_lease (any layer that must be kept for some leased LSN satisfies this check too, since every valid lease is <= the maximum).

// 3. Is it needed by a child branch?
// NOTE With that we would keep data that
// might be referenced by child branches forever.
// We can track this in child timeline GC and delete parent layers when
// they are no longer needed. This might be complicated with long inheritance chains.
//
// TODO Vec is not a great choice for `retain_lsns`
for retain_lsn in &retain_lsns {
// start_lsn is inclusive
if &l.get_lsn_range().start <= retain_lsn {
debug!(
"keeping {} because it's still might be referenced by child branch forked at {} is_dropped: xx is_incremental: {}",
l.layer_name(),
retain_lsn,
l.is_incremental(),
);
result.layers_needed_by_branches += 1;
continue 'outer;
}
}
// 3.5 Is there a valid lease that requires us to keep this layer?
if let Some(lsn) = &max_lsn_with_valid_lease {
if &l.get_lsn_range().start <= lsn {
debug!(
"keeping {} because there is a valid lease preventing GC at {}",
l.layer_name(),
lsn,
);
result.layers_needed_by_leases += 1;
continue 'outer;
}
}

Maybe I'm missing the part where retain_lsns only keeps the necessary images? One way we could do that would be to also check the end of the lsn_range.

Member commented:
Okay, it seems retain_lsn does not optimize for branches (it keeps everything above the oldest branch point). That would be a future optimization.

@yliang412 changed the title from "[WIP][DNM/Poc]: use lsn leases to temporarily block gc" to "[DNM/PoC]: use lsn leases to temporarily block gc" on Jun 13, 2024
Comment on lines 5096 to 5097
if let Some(lsn) = &min_lsn_with_valid_lease {
if &l.get_lsn_range().start <= lsn {
Contributor Author commented:
layer.get_lsn_range().start <= min_lsn_with_valid_lease <= all other LSNs with valid leases.

Contributor Author commented:
The comparison should actually be against max_lsn_with_valid_lease, since we keep a layer if its start is <= any of the leases (e.g. with leases at 0x10 and 0x30, a layer starting at 0x20 must still be kept for the 0x30 lease even though it is above the minimum).

@yliang412 marked this pull request as ready for review June 13, 2024 13:11
@yliang412 requested a review from a team as a code owner June 13, 2024 13:11
@yliang412 requested review from koivunej and removed the request for koivunej June 13, 2024 13:11
@koivunej (Member) left a comment:

Looking good, a few smaller comments.

pageserver/src/tenant.rs (review thread resolved, outdated)
pageserver/src/tenant/timeline.rs (review thread resolved)
@skyzh (Member) left a comment:

LGTM. We could have some e2e tests once the compute side of this pull request gets merged. Besides the "one compute should hold one lease" problem I mentioned in the review comments, I'd also prefer that at some point in the future gc_timeline take all leases into account instead of only the min lease. In theory, we only need to keep the image layers for each of the lease/branch LSNs, as a future optimization.

pageserver/src/tenant.rs (review thread resolved, outdated)

{
let mut gc_info = self.gc_info.write().unwrap();
gc_info.leases.insert(lsn, lease.clone());
Member commented:
I feel there is something suboptimal about the lease design. Imagine a compute node updates the lease every second and the default lease length is 300s: we would easily accumulate 300 leases in memory. I feel a better design would be to have the lease request come with <compute_unique_id, lsn>, so that one compute maintains exactly one lease in the system.
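A hedged sketch of that suggested alternative; compute_unique_id and the map shape are hypothetical, not what this PR implements:

use std::collections::HashMap;
use std::time::SystemTime;

type ComputeUniqueId = String; // hypothetical identifier for a compute node
type Lsn = u64;

struct LsnLease {
    valid_until: SystemTime,
}

// One lease per compute: a renewal replaces that compute's previous entry,
// so the map is bounded by the number of computes, not the renewal rate.
struct LeasesByCompute {
    leases: HashMap<ComputeUniqueId, (Lsn, LsnLease)>,
}

impl LeasesByCompute {
    fn renew(&mut self, compute: ComputeUniqueId, lsn: Lsn, valid_until: SystemTime) {
        self.leases.insert(compute, (lsn, LsnLease { valid_until }));
    }
}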

Member commented:
Something to be fixed in the future; we could create an issue.

Contributor Author commented:
For this, I was thinking that the compute would basically consume the valid_until expiration time and renew the lease a little before it expires?

Member commented:
Well, it depends on the cplane people, but I feel heartbeats are usually designed so that the interval is 1/10 of the expiration interval (with a 10 min lease, that would mean renewing roughly every minute).

Collaborator commented:
We keep the leases in a map of LSN->lease, right? A given compute will keep requesting the same LSN repeatedly, so the overall number of leases we track shouldn't increase.
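A minimal sketch of that renewal behavior, reusing the assumed GcInfo/LsnLease shapes from the earlier sketch (the real renewal logic in the pageserver differs in detail):

use std::time::{Duration, SystemTime};

impl GcInfo {
    // Renewing a lease for an LSN already in the map overwrites the old
    // entry in place, so repeated requests from one compute for the same
    // LSN keep the map at one entry per leased LSN.
    fn renew_lease(&mut self, lsn: Lsn, now: SystemTime, length: Duration) {
        self.leases.insert(lsn, LsnLease { valid_until: now + length });
    }
}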

@@ -5010,6 +5036,10 @@ impl Timeline {
// 1. it is older than cutoff LSN;
// 2. it is older than PITR interval;
// 3. it doesn't need to be retained for 'retain_lsns';

// TODO(yuchen): we could consider current retain_lsns as infinite leases.
Member commented:
Feel free to create a tracking issue if this is something you plan to work on in the short term.

yliang412 and others added 2 commits June 14, 2024 15:03
yliang412 added a commit that referenced this pull request Jun 18, 2024
Part of #7497, extracted from #7996, closes #8063.

## Problem

With the LSN lease API introduced in #7808, we want to implement the real lease logic so that GC will keep all the layers needed to reconstruct all pages at all the leased LSNs with valid leases at a given time.

Signed-off-by: Yuchen Liang <yuchen@neon.tech>
@yliang412 (Contributor Author) commented:

The code changes and comments in this PR have been addressed in #8084. Closing it for now.

@yliang412 closed this Jul 1, 2024