
re-assess severity of duplicate layers: nowadays it cannot happen & we should panic/abort() early if they do #7790

Open
problame opened this issue May 17, 2024 · 1 comment · May be fixed by #7799
Labels
c/storage/pageserver Component: storage: pageserver

Comments

@problame
Contributor

problame commented May 17, 2024

Background

From the time before always-authoritative index_part.json, we had to handle duplicate layers. See the RFC for an illustration of how duplicate layers could happen:

The implications of the above are primarily problematic for compaction.
Specifically, the part of it that compacts L0 layers into L1 layers.
Remember that compaction takes a set of L0 layers and reshuffles the delta records in them into L1 layer files.
Once the L1 layer files are written to disk, it atomically removes the L0 layers from the layer map and adds the L1 layers to the layer map.
It then deletes the L0 layers locally, and schedules an upload of the L1 layers and an updated index part.
If we crash before deleting L0s, but after writing out L1s, the next compaction after restart will re-digest the L0s and produce new L1s.
This means the compaction after restart will **overwrite** the previously written L1s.
Currently we also schedule an S3 upload of the overwritten L1.
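For illustration, here is a minimal, self-contained sketch of that ordering and of the crash window it leaves; every name below is an invented stand-in, not actual pageserver code:

```rust
use std::collections::BTreeSet;

// Sketch only: the layer map is modeled as a set of layer file names.
fn compact_level0(layer_map: &mut BTreeSet<String>, l0s: &[String]) {
    // 1. Reshuffle the delta records of the L0s into new L1 layer files on disk.
    let l1s = vec!["l1-layer-file".to_string()]; // stand-in for the real L1 writing

    // --- crash window: L1 files exist on disk, L0s are still in the layer map ---
    // A restart here makes the next compaction re-digest the same L0s and
    // overwrite the L1 files written before the crash.

    // 2. Atomically swap the layer map: remove the L0s, insert the L1s.
    for l0 in l0s {
        layer_map.remove(l0);
    }
    layer_map.extend(l1s);

    // 3. Delete the L0 files locally and schedule upload of the L1s plus an
    //    updated index_part.json (omitted in this sketch).
}
```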

As of #5198, we should not be exposed to that problem anymore.

Problem 1

But, we still have

  1. code in the Pageserver that handles duplicate layers
  2. tests in the test suite that demonstrate the problem using a failpoint

However, the test in the test suite doesn't use the failpoint to induce a crash that could legitimately happen in production.
What it does instead is return early with an Ok(), so that the code in the Pageserver that handles duplicate layers (item 1) actually gets exercised.

That "return early" would be a bug in the routine if it happened in production.
So, the tests in the test suite are tests for their own sake, but don't serve to actually regress-test any production behavior.
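For context, the pageserver's failpoints come from the fail crate; a hedged sketch of what such an early-return failpoint looks like (the failpoint name, function, and types here are invented for illustration, not the actual code):

```rust
use fail::fail_point;

// Hypothetical compaction entry point; returns the resulting layer file names.
fn compact_level0(l0s: Vec<String>) -> anyhow::Result<Vec<String>> {
    // When a test enables this (hypothetical) failpoint, the function returns
    // early with Ok(...) and the same L0s, i.e. the "return early" described
    // above. In production such an early return would be a bug.
    fail_point!("compact-level0-early-return", |_| Ok(l0s.clone()));

    // ... the real compaction work would happen here ...
    Ok(vec!["new-l1-layer".to_string()])
}
```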

Problem 2

Further, if production code did create a duplicate layer (it nowadays doesn't!), I think the code in the Pageserver that handles that condition (item 1 above) is too little, too late:

  • the code handles it by discarding the newer struct Layer
  • however, on disk, we have already overwritten the old with the new layer file
  • the fact that we do it atomically doesn't matter because ...
  • if the new layer file is not bit-identical, then we have a cache coherency problem
    • PS PageCache block cache: caches old bit pattern
    • blob_io offsets stored in variables, based on pre-overwrite bit pattern / offsets
      • => reading based on these offsets from the new file might yield different data than before (see the sketch below)
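A self-contained illustration of that stale-offset hazard, using plain std I/O rather than pageserver code: a reader that cached an offset into the old file contents reads different bytes once the file has been overwritten with a layout that is not bit-identical.

```rust
use std::fs;
use std::io::{Read, Seek, SeekFrom};

fn main() -> std::io::Result<()> {
    let path = "layer-file";

    // First write: a reader records that the blob "hello" starts at offset 7.
    fs::write(path, b"header hello")?;
    let cached_offset = 7u64;

    // The layer file is re-created with a different internal layout.
    fs::write(path, b"hdr hello padding")?;

    // Reading with the offset cached from the old layout now yields other bytes.
    let mut f = fs::File::open(path)?;
    f.seek(SeekFrom::Start(cached_offset))?;
    let mut buf = [0u8; 5];
    f.read_exact(&mut buf)?;
    assert_ne!(&buf, b"hello"); // stale offset => wrong data

    fs::remove_file(path)?;
    Ok(())
}
```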

Solution

  • Remove the test suite code
  • Remove the Pageserver code that handles duplicate layers too late
  • Add a panic/abort in the Pageserver code for when we'd overwrite a layer
    • Use RENAME_NOREPLACE to detect this correctly (a sketch follows below)
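A sketch of that check, assuming the nix crate on Linux (with a version whose renameat2 takes Option<RawFd> directory fds, e.g. around 0.27); the helper name and panic message are illustrative, not the actual pageserver code:

```rust
use std::path::Path;

use nix::errno::Errno;
use nix::fcntl::{renameat2, RenameFlags};

// Move a freshly written layer file into place, refusing to overwrite.
fn publish_layer_file(tmp_path: &Path, final_path: &Path) {
    // RENAME_NOREPLACE makes the kernel fail the rename with EEXIST instead of
    // silently replacing an existing layer file of the same name.
    match renameat2(None, tmp_path, None, final_path, RenameFlags::RENAME_NOREPLACE) {
        Ok(()) => {}
        Err(Errno::EEXIST) => {
            // A duplicate layer "cannot happen" nowadays, so fail loudly and early.
            panic!("attempted to overwrite existing layer file {final_path:?}");
        }
        Err(e) => panic!("failed to rename layer file into place: {e}"),
    }
}
```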

Concern originally raised in #7707 (comment)

@problame problame added the c/storage/pageserver Component: storage: pageserver label May 17, 2024
@problame problame self-assigned this May 17, 2024
@problame problame removed their assignment May 17, 2024
@problame problame changed the title re-assess severity of duplicate layers re-assess severity of duplicate layers: nowadays the cannot happen & we should panic/abort() early if they do May 17, 2024
@problame problame changed the title re-assess severity of duplicate layers: nowadays the cannot happen & we should panic/abort() early if they do re-assess severity of duplicate layers: nowadays it cannot happen & we should panic/abort() early if they do May 17, 2024
problame added a commit that referenced this issue May 17, 2024
@koivunej
Contributor

The analysis is based on the early exit, which tests that compaction doesn't go into a loop when L0 compaction returns the same L0s, in:

def test_duplicate_layers(neon_env_builder: NeonEnvBuilder, pg_bin: PgBin):

I've since added a test case that actually tests the known duplication situation experienced with test_pageserver_chaos:

def test_actually_duplicated_l1(neon_env_builder: NeonEnvBuilder, pg_bin: PgBin):

Your PR removes both test cases, but this issue only discusses the first one.

I do agree with problem 2 and the lateness. However, the known duplication situation with test_pageserver_chaos seemed to require a restart, and has otherwise not been run into (the runtime panic). Agreed, we could read bad data since the actual switch on disk would have happened, but at least we would not have uploaded it to S3. The current setup was left in place to create noise (a panic) in case we ever ran into this problem, and we hadn't, before the tiered compaction work.

RENAME_NOREPLACE

I was thinking of link + unlink earlier on an internal Slack thread, but yes, this would be simpler (I assume you did it via nix; link + unlink would have been via std, as in the sketch below). I am still, however, unconvinced we want to abort; I'd just add this hardening, keep the test demonstrating this behavior, and fix the bug in tiered compaction. I guess I need to do a competing PR.
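For comparison, a std-only sketch of the link + unlink variant (the helper name is illustrative): hard_link refuses to replace an existing destination, so it gives the same no-silent-overwrite guarantee, and the temporary name is unlinked afterwards.

```rust
use std::io;
use std::path::Path;

fn publish_layer_file_std(tmp_path: &Path, final_path: &Path) -> io::Result<()> {
    // hard_link fails with ErrorKind::AlreadyExists if final_path exists, so a
    // duplicate layer surfaces as an error here instead of a silent overwrite;
    // the caller can then decide whether to panic/abort or handle it.
    std::fs::hard_link(tmp_path, final_path)?;
    // Drop the temporary name now that the final name exists.
    std::fs::remove_file(tmp_path)
}
```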
