SecretStore: PoA integration initial version #7101
Conversation
Thanks for such a detailed review, @tomusdrw !!! That's actually what I've been missing when working on SecretStore!
I see some responses came in while I was adding comments - taking time to fix these
This reverts commit 6efca88.
```rust
/// Get hash of the last block with at least n confirmations.
fn get_confirmed_block_hash(client: &Client, confirmations: u64) -> Option<H256> {
	client.block_number(BlockId::Latest)
		.and_then(|b| b.checked_sub(confirmations))
```
It's an edge case, but I would say that for `b < confirmations` we should just return 0.
Fixed. Just curious - is the only reason to process requests from block#0 asap (instead of waiting for 3 blocks)? Or is there some other problem with the previous pattern?
Yeah. I think that getting `None` from this method would indicate some erroneous state (non-existent historic block?) and it wouldn't be exactly the same semantics as block 0.
```rust
self.data.contract.update();
if !self.data.contract.is_actual() {
	let enacted_len = enacted.len();
	if enacted_len == 0 {
```
`enacted.is_empty()`?
EDIT: just noticed you are using `enacted_len` later on, so I guess it's fine.
Based on https://github.com/paritytech/contracts/pull/91
What is in this PR:
4.1) listens for the `ServerKeyRequested` event && checks if this request should be processed by this KeyServer. This is because of an SS specific: there must be a 'master node' in each session (including the generation session) => we can't just start the session on all nodes at once. Basically, all possible `server_key_id` values (up to `H256::max_value()`) are evenly distributed among all valid KeyServers. Note: it doesn't matter who the master node actually is - this is just to work around SS protocol caveats.
4.2) when an appropriate key is requested, the node starts key generation. If everything is OK, a key pair is generated && all nodes know the Public of this KeyPair. Every KeyServer signs this Public with its own key && sends a tx back to the contract. When enough (threshold + 1) confirmations are received, the `ServerKeyGenerated` event is fired by the contract.
4.3) if generation fails, there's not much we can do. Possible options I've considered:
4.3.1) admit that we can't generate the key && send the fee back to the requester;
4.3.2) admit that we can't generate the key, but do not return the fee (we have actually performed some actions);
4.3.3) retry until success (optimistically).
Of these options I've selected the last one, because no error can actually occur because of the requester - either there are some problems with SS configuration/nodes, or we're trying to start a session during a maintenance interval (like when a `ServersSetChange` session is active).
4.4) retry occurs every N (30) blocks && only M (1) 2nd-time failures are allowed during this retry. If M failures have occurred => we stop retrying until the next 30 blocks are mined.
What is not in this PR (but still about SS+Kovan integration):
- `ServersSetChange` session must be automatically started (next step)
- `service_validators_fixed.sol` contract. Have failed to configure PoA based on the validators set contract. This shouldn't affect the Rust code, though (as I see it now)