
Make it easier to recombine keys #1311

Closed
OisinKyne opened this issue Oct 18, 2022 · 2 comments · Fixed by #1799
Assignees
Labels
enhancement New feature or request protocol Protocol Team tickets

Comments

@OisinKyne
Contributor

OisinKyne commented Oct 18, 2022

Problem to be solved

Potential customers have expressed concern that they can't bail out of a DV fast. This friction has been intentional on our part so far, but in practice it makes sense when customers say: "we need a way to safely unwind a DV and fall back to a normal validator if something goes wrong".

Proposed solution

We have a variety of options for implementing this, and I think they have different tradeoffs.

  • We could do something slick and automated that involves the network
  • We could do something simple, basic, and offline

I am a proponent of the latter, which is basically bringing the combine.go program into the main charon CLI. It puts the onus of assembling the private keys in one place on the operators, and leaves nothing to the imagination about this being a centralising option.

My suggestion is that we add a command (`charon combine`) with the short description: "Combines private key shares into a single private key for a DV."

And the long description: "Combines private key shares into a single private key for a DV. Warning: loading this private key into a validator running alongside the original distributed validator will result in slashing."

I think the default input dir could be `.`, which means the command would scan the current folder for `.charon` directories.

Also in scope of this ticket is documenting this process in the docs site with good tree outlines of the folder hierarchy.

We could also invest in good error handling to help someone massage their folder structure into something the command accepts. Unhelpful file-parsing errors here would be an unpleasant UX for a probably distressed user.

Example input

If we want to combine a 5-of-7, one operator must gather a threshold of `.charon` folders and put them alongside one another. The command will traverse this folder (going one level deep to find lock files and ENRs) and, if it can find everything it needs, will combine the private keys and output a folder of keystores and passwords at the end.

charon combine --data-dir=example_combine

Folder structure:

 tree -a example_combine
example_combine
├── .charon-cgvhb
│   ├── charon-enr-private-key
│   ├── cluster-lock.json
│   ├── deposit-data.json
│   └── validator_keys
│       ├── keystore-0.json
│       └── keystore-0.txt
├── .charon-dsf
│   ├── charon-enr-private-key
│   ├── cluster-definition.json
│   ├── cluster-lock.json
│   ├── deposit-data.json
│   └── validator_keys
│       ├── keystore-0.json
│       └── keystore-0.txt
├── .charon-egdfg
│   ├── charon-enr-private-key
│   ├── cluster-definition.json
│   ├── cluster-lock.json
│   ├── deposit-data.json
│   └── validator_keys
│       ├── keystore-0.json
│       └── keystore-0.txt
├── .charon-ethberlin
│   ├── charon-enr-private-key
│   ├── cluster-definition.json
│   ├── cluster-lock.json
│   ├── deposit-data.json
│   └── validator_keys
│       ├── keystore-0.json
│       └── keystore-0.txt
└── .charon-gsrsd
    ├── charon-enr-private-key
    ├── cluster-lock.json
    ├── deposit-data.json
    └── validator_keys
        ├── keystore-0.json
        └── keystore-0.txt

Example output

tree validator_keys 
validator_keys
├── keystore-0.json
└── keystore-0.txt

Out of Scope

Technically, some of the subfolders in the input need not be full `.charon` folders; they could be just a `validator_keys` folder and we would still be able to figure it out. Happy to put this out of scope for this ticket. In future, if people get stuck and need help, we could require only one of the subfolders to be a full `.charon` and let the rest contain only private keys, but that is too complex for now, so we'll leave it out until we really need it.

@OisinKyne OisinKyne added the enhancement New feature or request label Nov 30, 2022
@xenowits xenowits removed their assignment Jan 9, 2023
@thomasheremans thomasheremans added the protocol Protocol Team tickets label Jan 23, 2023
@gsora
Collaborator

gsora commented Feb 10, 2023

We should come up with a way to handle many validators at once.

We could have users set up their keystore files in directories named after each validator's public key; at that point `combine` could iterate over all the directories with appropriate names and go from there.

@gsora
Collaborator

gsora commented Feb 13, 2023

Thinking about this again, why should we allow combining multiple validators' keys at once?

I'd say if someone wants to recombine multiple validator keys, they can run the tool multiple times, targeting different folders containing different keys.

Keeping it simple has historically worked well.

@gsora gsora linked a pull request Feb 16, 2023 that will close this issue
obol-bulldozer bot pushed a commit that referenced this issue Feb 22, 2023
This commit adds the `combine` command.

It accepts an input directory path containing all the private key shares of a DV, an output directory path, and a lockfile path.

While the output directory can be nonexistent (it will be created automatically), the input and lockfile paths must exist.

The result of this process is a keystore file containing the combined private key of the distributed validator.


category: feature
ticket: #1311