
Explore sync-headers-using-unsigned-transactions option in PoA bridge #26

Closed
svyatonik opened this issue Mar 5, 2020 · 3 comments · Fixed by #45
Assignees
Labels
A-feat New feature or request

Comments

@svyatonik
Contributor

What I learned from talks with Tomek: switching from signed to unsigned transactions would allow us to deduplicate on the txpool/network level (since all transactions for block B will have the same hash H). This, however, brings some problems:

  1. we can't reward submitters for providing correct blocks - this could be solved by switching to a reward-for-messages scheme, because to sync messages submitters would need to sync headers first;
  2. we can't submit several blocks in a single transaction - the hashes would be different.

I'd like to explore this option and see:

  1. how many sync-header transactions we could verify per second - unsigned transactions need to be verified before they are accepted to the transaction pool;
  2. whether using unsigned transactions would lead to slower sync (because we can't provide multiple blocks in a single transaction) or not.
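The deduplication argument boils down to "identical payload bytes => identical transaction hash". A minimal sketch of that idea (using std's `DefaultHasher` as a stand-in for the real SCALE-encoding + Blake2 hashing Substrate uses; the function and payload names are illustrative):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for hashing an unsigned `import_header` extrinsic whose
// payload is just the encoded header bytes.
fn extrinsic_hash(encoded_call: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    encoded_call.hash(&mut h);
    h.finish()
}

fn main() {
    let header_b: &[u8] = b"poa-header-#100";

    // Two relayers submit the same header as an unsigned transaction.
    // With signed transactions the signer/nonce would differ and so would
    // the hash; unsigned, the payloads are byte-identical.
    let from_relayer_1 = extrinsic_hash(header_b);
    let from_relayer_2 = extrinsic_hash(header_b);
    assert_eq!(from_relayer_1, from_relayer_2); // pool/network can dedup on hash
}
```

This also makes problem (2) above concrete: batching several headers into one transaction changes the payload bytes, so two relayers batching differently no longer produce the same hash.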
@svyatonik svyatonik added the A-feat New feature or request label Mar 5, 2020
@svyatonik svyatonik self-assigned this Mar 5, 2020
@tomusdrw tomusdrw added this to In Progress in Rialto Release Mar 13, 2020
@svyatonik
Contributor Author

svyatonik commented Mar 13, 2020

Upd: there are some problems with using unsigned transactions:

  1. we are validating transactions independently, and an unsigned transaction may carry only one block => we can't actually refer to the parent block and can't perform full validation => we may have invalid transactions in the pool;
  2. even if we have access to the parent block (i.e. a transaction with the parent block is in the pool), a malicious validator could generate numerous valid blocks at his slot (not necessarily relaying them to peers) and submit these unsigned transactions to the pool => spamming the pool, blocks and storage. The same problem exists with signed transactions, but there the validator at least pays for the transaction. We would still need to stop rewarding him, and probably accept at most one block with a given number from a single submitter;
  3. validation currently depends on the validator set from the parent block, and this validator set is computed using the finalization procedure. Finalization could require a large number of storage reads => we can't use it in validate_unsigned() because it is too heavy. So we probably need to limit the number of transactions in the pool somehow - i.e. accept the fact that the pool may contain invalid transactions, as long as there aren't many of them. Or something similar.

From Tomek I've learned today that the transaction pool can only contain one transaction providing a given tag (i.e. if there's a second transaction which provides [vec![0x42]], will it be considered invalid? a duplicate? - needs to be explored). Probably this could help us with the issues above - needs to be explored.
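The tag mechanism can be sketched with a toy pool: each transaction `provides` a list of tags, and the pool holds at most one transaction per tag. This is a simplified model of Substrate's `sp_runtime` transaction-validity tags, not the actual pool implementation; all names here are illustrative.

```rust
use std::collections::HashSet;

// A transaction, reduced to just the tags it provides.
struct Tx {
    provides: Vec<Vec<u8>>,
}

#[derive(Default)]
struct ToyPool {
    occupied: HashSet<Vec<u8>>, // tags already provided by in-pool transactions
}

impl ToyPool {
    /// Rejects the transaction if any of its tags is already provided
    /// by a transaction in the pool.
    fn submit(&mut self, tx: &Tx) -> bool {
        if tx.provides.iter().any(|t| self.occupied.contains(t)) {
            return false; // second tx providing the same tag is not accepted
        }
        for t in &tx.provides {
            self.occupied.insert(t.clone());
        }
        true
    }
}

fn main() {
    let mut pool = ToyPool::default();
    let a = Tx { provides: vec![vec![0x42]] };
    let b = Tx { provides: vec![vec![0x42]] };
    assert!(pool.submit(&a));
    assert!(!pool.submit(&b)); // same tag => slot already occupied
}
```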

@tomusdrw
Contributor

> then it will be considered invalid? duplicate? - need to explore

It's just considered to "occupy the same slot", so only one transaction can exist in the pool. The replacement policy is based on the priority field. I.e. we will only accept another transaction that occupies the same slot if it has higher priority than the previous one.
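The replacement policy described above can be sketched by extending the toy pool with a per-tag priority: a newcomer providing an occupied tag replaces the incumbent only if its priority is strictly higher. This mirrors the behaviour Tomek describes, but the types and names are illustrative, not the real Substrate pool internals.

```rust
use std::collections::HashMap;

#[derive(Default)]
struct ToyPool {
    // tag -> priority of the transaction currently occupying that slot
    by_tag: HashMap<Vec<u8>, u64>,
}

#[derive(Debug)]
enum Outcome {
    Accepted,               // slot was free
    Replaced,               // newcomer had strictly higher priority
    RejectedLowerPriority,  // incumbent keeps the slot
}

impl ToyPool {
    fn submit(&mut self, tag: Vec<u8>, priority: u64) -> Outcome {
        match self.by_tag.get(&tag) {
            None => {
                self.by_tag.insert(tag, priority);
                Outcome::Accepted
            }
            Some(&existing) if priority > existing => {
                self.by_tag.insert(tag, priority);
                Outcome::Replaced
            }
            Some(_) => Outcome::RejectedLowerPriority,
        }
    }
}

fn main() {
    let mut pool = ToyPool::default();
    assert!(matches!(pool.submit(vec![0x42], 5), Outcome::Accepted));
    // Equal priority is not enough to evict the incumbent.
    assert!(matches!(pool.submit(vec![0x42], 5), Outcome::RejectedLowerPriority));
    assert!(matches!(pool.submit(vec![0x42], 9), Outcome::Replaced));
}
```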

@svyatonik
Contributor Author

So we have agreed (I hope) upon the following algorithm for unsigned transactions:

  1. we won't be able to give a 100% guarantee that unsigned in-pool transactions are valid, but we can limit the maximal number of invalid transactions in the pool/block;
  2. to make these limits work, we still need a fallback mechanism - i.e. submit-using-signed-transactions;
  3. that said, here's a possible approach:
    3.1) we can limit the number of headers 'from the future' that we accept into the pool - i.e. if our best block is 100 and the limit is 10, then we won't accept an unsigned transaction with header#111 into the pool. We do not want to ban these transactions from the pool => we need to end validation with a TransactionValidity::Unknown() reason;
    3.2) we can accept at most 1 block with a given number from a given authority by providing the tag (block_number, authority), as Tomek has suggested;
    3.3) we can accept only headers signed either by the last finalized authority set, or by the next signalled one. This could fail if more than one authority set has been finalized within the limit from (3.1), but we may ignore this case and fall back to the signed version then;
    3.4) we can verify the chain of blocks using requires/provides tags by requiring (block_number, block_hash) when verifying blocks. If the parent header isn't available, then verification will miss some steps, but at least we will have contextless verification + parent link + authority signature verification (given 3.3);
  4. one other restriction we may introduce: we know the approximate block time of PoA (it is stepDuration...stepDuration * maximumEmptySteps) - what if we only accept unsigned transactions when we believe we're at the tip of the PoA chain (i.e. when timestampModule.timestamp - BestKnownPoAHeader.timestamp < N * approximateBlockTime)?

That way we accept at most BLOCKS_LIMIT * COUNT(UNION(LAST_FINALIZED_SET, NEXT_SIGNALLED_SET)) transactions into the pool. I.e. if the limit is 10 and the authority sets are [a, b, c, d] + [b, d, e, f, g], then we will accept at most 10*7=70 (potentially invalid) transactions.
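Checks (3.1)-(3.4) could be sketched as a single validation function. This is a simplified, self-contained model: the real implementation would live in validate_unsigned() with Substrate's TransactionValidity types, real hashes and authority IDs; all types, names and the requires-tag layout ((parent_number, parent_hash)) here are assumptions for illustration.

```rust
const BLOCKS_LIMIT: u64 = 10; // max headers "from the future" we tolerate (3.1)

struct Validity {
    requires: Vec<(u64, u64)>, // (parent_number, parent_hash) tag => parent link (3.4)
    provides: Vec<(u64, u8)>,  // (block_number, authority) tag => one slot each (3.2)
}

enum Validation {
    Valid(Validity),
    Unknown, // "don't ban, just don't accept yet" (3.1)
}

fn validate_header(
    best_number: u64,
    header_number: u64,
    parent_hash: u64,
    authority: u8,
    known_authorities: &[u8], // last finalized set UNION next signalled set (3.3)
) -> Validation {
    // (3.1) Too far in the future: end validation with an Unknown-style reason.
    if header_number > best_number + BLOCKS_LIMIT {
        return Validation::Unknown;
    }
    // (3.3) Only accept headers signed by a known authority.
    if !known_authorities.contains(&authority) {
        return Validation::Unknown;
    }
    // (3.2) + (3.4): at most one block per (number, authority); the pool chains
    // blocks because a child `requires` the tag its parent `provides`.
    Validation::Valid(Validity {
        requires: vec![(header_number.saturating_sub(1), parent_hash)],
        provides: vec![(header_number, authority)],
    })
}

fn main() {
    // Worst-case pool size from the comment above: limit 10, union of
    // [a, b, c, d] and [b, d, e, f, g] has 7 authorities => 70 slots.
    let union_size: u64 = 7;
    assert_eq!(BLOCKS_LIMIT * union_size, 70);

    assert!(matches!(validate_header(100, 111, 0, 1, &[1]), Validation::Unknown));
    assert!(matches!(validate_header(100, 105, 0, 1, &[1]), Validation::Valid(_)));
}
```

Note the bound follows directly from the tags: each of the BLOCKS_LIMIT block numbers times each authority in the union defines one (block_number, authority) slot, and the pool holds at most one transaction per slot.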

Rialto Release automation moved this from In Progress to Done Apr 7, 2020
@tomusdrw tomusdrw added this to the release critical milestone Apr 7, 2020
svyatonik pushed a commit that referenced this issue Jul 17, 2023
* Create binary for cumulus testing node

* Fix call to wrong method

* Fix iterator bug

* Add real networking to polkadot node

* Allow using of file chain specs

* Implement export of wasm blob and genesis state

* Add parachain id to test service export

* Remove debug logs

* Fix main.rs

* Do not use cli from polkadot-collator

* Add tracing dependency

* Improve cli structure

* Use struct instead of enum for export commands

* Add missing license header

* Fix benchmark

* add build-spec subcommand (#19)

* add build-spec subcommand

* reorder args

* Fix formatting

* Add zombienet test (#26)

* add migrated tests

* rename test files

* uncomment args and remove extra collator node

* Fix indentation (#27)

* add migrated tests

* rename test files

* uncomment args and remove extra collator node

* fix identation

* Review comments

* Remove unwanted version changes

Co-authored-by: Sebastian Kunert <skunert@Sebastians-MacBook-Pro.fritz.box>
Co-authored-by: Javier Viola <pepoviola@gmail.com>