
md_ops: add caching to improve validation of MDs written by revoked devices #1893

Merged: strib merged 4 commits into master from strib/KBFS-3581-better-revoked-key-verification on Nov 12, 2018

Conversation

@strib (Contributor) commented on Nov 1, 2018:

In #1888, a user is doing way more work than necessary to validate a bunch of MDs written by a revoked device during garbage collection. We can improve that with a small amount of caching.

This PR:

  • Moves verification of KBFS Merkle nodes into the mdserver remote, so they can be cached more effectively.
  • Adds a limited-size LRU cache for the (validated) results of FindNextMD (sketched below).
  • Tracks the chain of MDs that have been validated up to a given revision (the one immediately following the revoke in the Merkle tree).

This should eliminate most of the redundant work that was happening when verifying lots of MDs written by the same revoked device, as in garbage collection.

Issue: KBFS-3581
Issue: #1888
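
To illustrate the second bullet, here is a minimal sketch of what such a bounded cache could look like, using the hashicorp/golang-lru package. The nextMDKey/nextMDResult/nextMDCache names and the stand-in key types are illustrative assumptions, not the PR's actual code:

package mdcache

import (
	lru "github.com/hashicorp/golang-lru"
)

// nextMDKey identifies one FindNextMD query: a TLF plus the revision
// we want the next provable revision after.
type nextMDKey struct {
	tlfID    string // stand-in for tlf.ID
	revision uint64 // stand-in for kbfsmd.Revision
}

// nextMDResult is a FindNextMD response whose Merkle proof has
// already been verified, so cache readers can trust it directly.
type nextMDResult struct {
	nextRevision uint64
}

type nextMDCache struct {
	cache *lru.Cache
}

func newNextMDCache(size int) (*nextMDCache, error) {
	c, err := lru.New(size) // bounded, so memory use stays limited
	if err != nil {
		return nil, err
	}
	return &nextMDCache{cache: c}, nil
}

// get returns a previously validated result, if cached.
func (n *nextMDCache) get(k nextMDKey) (nextMDResult, bool) {
	v, ok := n.cache.Get(k)
	if !ok {
		return nextMDResult{}, false
	}
	return v.(nextMDResult), true
}

// put stores a result; callers must validate it before insertion.
func (n *nextMDCache) put(k nextMDKey, r nextMDResult) {
	n.cache.Add(k, r)
}

Keeping validation at insertion time is what lets every reader skip re-verification on a hit.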

@strib strib requested a review from a team on November 1, 2018 20:14
@strib strib force-pushed the strib/KBFS-3581-better-revoked-key-verification branch from 34ff9bf to 867480e on November 1, 2018 20:16
@songgao (Contributor) left a comment:

LGTM; just a nit, an understanding question, and a small suggestion.

libkbfs/md_ops.go (outdated, resolved):
// for TLF 1, if we have a next revision of 1000, and we've
// validated that MDs 100-1000 form a valid chain, then the map
// would contain: {1: {1000: 100}}
leafChainsValidated map[tlf.ID]map[kbfsmd.Revision]kbfsmd.Revision
@songgao (Contributor) commented:

Is the idea that we are always checking on the revision right after a device revoke?

@strib (Contributor, Author) replied:

> Is the idea that we are always checking on the revision right after a device revoke?

Yep. When we see something signed by a revoked key, we ask the service what revision came right after the revoke, and always check that one.
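
In other words, a lookup against the map from the snippet above might look like this hypothetical helper (chainAlreadyValidated is illustrative, not a function in the PR; it assumes the tlf.ID and kbfsmd.Revision types from that snippet):

func chainAlreadyValidated(
	validated map[tlf.ID]map[kbfsmd.Revision]kbfsmd.Revision,
	id tlf.ID, nextRev, rev kbfsmd.Revision) bool {
	// nextRev is the revision immediately following the revoke (what
	// FindNextMD returns); rev is the MD we're trying to verify.
	earliest, ok := validated[id][nextRev]
	if !ok {
		return false
	}
	// Revisions earliest..nextRev form an already-validated chain,
	// so anything in that range needs no further chain walking.
	return rev >= earliest && rev <= nextRev
}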

libkbfs/md_ops.go (outdated, resolved)
This avoids hitting the server every single time we need to verify a
revoked key, for example when we have to verify lots of MDs written
by the same revoked key during garbage collection.

Issue: KBFS-3581
Issue: #1888
This lets the cache assume all nodes have been validated.

Issue: KBFS-3581
When verifying the MDs written by a revoked device, we have to
validate a long chain of MDs between the MD we're looking at and the
MD just following the revocation.  If we're doing something like
garbage collection and iterating through MDs, we may have to do this
repeatedly for the "end" MD in the chain.  If that chain exceeds the
size of the in-memory MD cache, we will be going to the mdserver a lot
to download the same MDs over and over again.

Instead, introduce a simple cache that maps the "end" MD to the
earliest revision in a validated chain.  That way we can avoid all the
extra downloads even when we miss the in-memory MD cache, and just
download the MDs we care about.

Issue: KBFS-3581
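
The recording side of that cache might look like this hypothetical sketch, again using the types from the earlier snippet (recordValidatedChain is illustrative, not the PR's code):

func recordValidatedChain(
	validated map[tlf.ID]map[kbfsmd.Revision]kbfsmd.Revision,
	id tlf.ID, nextRev, earliest kbfsmd.Revision) {
	m, ok := validated[id]
	if !ok {
		m = make(map[kbfsmd.Revision]kbfsmd.Revision)
		validated[id] = m
	}
	// Only extend a known chain backwards toward earlier revisions;
	// a later, shorter validation should never shrink what we have
	// already proven valid.
	if cur, ok := m[nextRev]; !ok || earliest < cur {
		m[nextRev] = earliest
	}
}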
@strib strib force-pushed the strib/KBFS-3581-better-revoked-key-verification branch from 867480e to daf47f8 on November 12, 2018 20:49
@songgao (Contributor) left a comment:

Thanks!

@strib strib merged commit d5581c5 into master on Nov 12, 2018
@strib strib deleted the strib/KBFS-3581-better-revoked-key-verification branch on November 12, 2018 21:53