
feat: ABCI 1.0 baseapp integration #13453

Merged
merged 261 commits into main from kocubinski/mempool-all on Nov 9, 2022

Conversation

@kocubinski (Member) commented Oct 5, 2022

Description

Ref: #12272

Summary of changes

  • mempool/tx: Remove the mempool.Tx interface and the notion of transaction size; byte-counting logic is pulled up to baseapp.
  • mempool: Update mempool.Select to use iterator semantics instead of enumerating the mempool all at once (see the sketch below).
  • mempool: Add tests with production transactions exercising multi-signer logic.
  • baseapp: Implement PrepareProposal and ProcessProposal (ABCI 1.0) with a default app-side mempool.
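
Below is a rough sketch of what iterator-style selection could look like (illustrative only; the exact interface and method names may differ from the PR, and the import path is an assumption):

```go
package mempool

import sdk "github.com/cosmos/cosmos-sdk/types"

// Iterator walks the mempool one tx at a time, so the caller (PrepareProposal)
// can stop whenever its own limits, such as the proposal's max bytes, are hit.
type Iterator interface {
	// Next advances the iterator and returns nil once the mempool is exhausted.
	Next() Iterator
	// Tx returns the transaction at the current position.
	Tx() sdk.Tx
}

// Mempool exposes Select as an iterator instead of returning every tx at once.
type Mempool interface {
	Insert(ctx sdk.Context, tx sdk.Tx) error
	Select(ctx sdk.Context, txs [][]byte) Iterator
	Remove(tx sdk.Tx) error
}
```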

Author Checklist

All items are required. Please add a note to the item if it is not applicable, and
please add links to any relevant follow-up issues.

I have...

  • included the correct type prefix in the PR title
  • added ! to the type prefix if API or client breaking change
  • targeted the correct branch (see PR Targeting)
  • provided a link to the relevant issue or specification
  • followed the guidelines for building modules
  • included the necessary unit and integration tests
  • added a changelog entry to CHANGELOG.md
  • included comments for documenting Go code
  • updated the relevant documentation or specification
  • reviewed "Files changed" and left comments if necessary
  • confirmed all CI checks have passed

Reviewers Checklist

All items are required. Please add a note if the item is not applicable and please add
your handle next to the items reviewed if you only reviewed selected items.

I have...

  • confirmed the correct type prefix in the PR title
  • confirmed ! in the type prefix if API or client breaking change
  • confirmed all author checklist items have been addressed
  • reviewed state machine logic
  • reviewed API design and naming
  • reviewed documentation is accurate
  • reviewed tests and test coverage
  • manually tested (if applicable)

kocubinski and others added 30 commits September 13, 2022 10:03
* feat(mempool): tests and benchmark for mempool

* added base for select benchmark

* added simple select mempool test

* added select mempool test

* insert 100 and 1000

* t

* t

* t

* move mempool to its own package and fix a small nil pointer error

* Minor bug fixes for the sender pool;

* minor fixes to consume

* test

* t

* t

* t

* added gen tx with order

* t
@alexanderbez (Contributor) left a comment:

Looks great! I have no major changes to recommend. There were a few minor nit comments.

The only outstanding item seems to be the semantics of DefaultProcessProposal and how to handle invalid txs.

baseapp/abci.go (outdated; resolved)
baseapp/baseapp.go (outdated; resolved)
x/distribution/types/codec.go (resolved)
baseapp/baseapp.go (resolved)
return err
_, _, _, _, err = app.runTx(runTxProcessProposal, txBytes)
if err != nil {
err = app.mempool.Remove(tx)
A contributor commented:

Ahh right, so when an honest process receives a proposal and executes it, any txs that are deemed invalid should still be included in the proposal. I think this is what @sergio-mena is alluding to. In other words, we should not reject or remove the tx from the mempool.

baseapp/abci.go (outdated; resolved)
kocubinski and others added 4 commits November 8, 2022 11:40
@alexanderbez (Contributor) left a comment:

Giving an ACK here. But we do need to undo the changes to x/distribution/types/codec.go

baseapp/abci.go (outdated; resolved)
baseapp/baseapp.go (outdated; resolved)
alexanderbez and others added 3 commits November 8, 2022 15:27
Co-authored-by: Aleksandr Bezobchuk <alexanderbez@users.noreply.github.com>
//
// If any transaction fails to pass either condition, the proposal is rejected.
func (app *BaseApp) DefaultProcessProposal() sdk.ProcessProposalHandler {
return func(ctx sdk.Context, req abci.RequestProcessProposal) abci.ResponseProcessProposal {
@prettymuchbryce (Contributor) commented on Nov 8, 2022:

Food for thought: As an initial implementation for ABCI 1.0 why not just always return abci.ResponseProcessProposal_ACCEPT?

Some benefits:

  • It gives applications the exact same behavior that exists in the SDK today, and therefore reduces the potential for liveness risks or bugs.
  • It's simpler and easier for you to test.
  • It's more performant.

My understanding is that the introduction of ProcessProposal was not intended specifically to solve some existing consensus issue on Cosmos SDK chains; rather, it was introduced to give application developers the flexibility (like us at dYdX) to reject proposals. So even if your default implementation always returns abci.ResponseProcessProposal_ACCEPT, you are still fulfilling the promise of ProcessProposal by allowing application developers to optionally implement their own logic via SetProcessProposal.

If in the future a strong case emerges for returning REJECT in DefaultProcessProposal (e.g. validating that the transactions in a block do not exceed MaxGas), such changes could easily be introduced iteratively in a future version of the SDK.

Running every transaction through CheckTx seems a bit extreme to me, and it's not clear that it solves any specific problem that exists in any chain today; however, I could be ignorant or wrong about this. It also actively makes upgrades riskier. For example, is it possible that some existing chain has added a custom AnteHandler which doesn't necessarily expect to be invoked during ProcessProposal? Is this something developers upgrading to v0.47 should be aware of and audit in their codebases?

In any case, dYdX will be using SetProcessProposal so the default behavior is of no concern to us. This is merely my own personal opinion on a more iterative upgrade path for the SDK that reduces some of the surface area while still retaining the most important feature (the ability for developers to customize ProcessProposal via SetProcessProposal).
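
For reference, a minimal sketch of the always-ACCEPT alternative described above, wired in through SetProcessProposal. The handler signature matches the one shown in this PR; the import paths assume the Tendermint ABCI types in use at the time and may differ in later SDK releases:

```go
package app

import (
	abci "github.com/tendermint/tendermint/abci/types"

	"github.com/cosmos/cosmos-sdk/baseapp"
	sdk "github.com/cosmos/cosmos-sdk/types"
)

// NoOpProcessProposal always accepts the proposal, leaving any stricter
// validation to applications that install their own handler.
func NoOpProcessProposal() sdk.ProcessProposalHandler {
	return func(ctx sdk.Context, req abci.RequestProcessProposal) abci.ResponseProcessProposal {
		return abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_ACCEPT}
	}
}

// wireProcessProposal shows how an application could opt into this behavior.
func wireProcessProposal(bApp *baseapp.BaseApp) {
	bApp.SetProcessProposal(NoOpProcessProposal())
}
```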

@kocubinski (Member, Author) commented on Nov 9, 2022:

Thanks @prettymuchbryce, you make some good points. I've captured them in #13810 for further review and implementation. If we decide it's something we want, @JeancarloBarrios or I can knock it out pretty quickly. At this point, changing direction would mean a thoughtful rewrite. Perhaps that's warranted, but this PR was opened Oct 5, so we feel this iteration is good enough for an MVP. Let's continue the discussion! 🙏

A contributor commented:

That makes sense. Thanks for humoring me! 😄


// NOTE: runTx was already run in CheckTx which calls mempool.Insert so ideally everything in the pool
// should be valid. But some mempool implementations may insert invalid txs, so we check again.
_, _, _, _, err = app.runTx(runTxPrepareProposal, bz)
A contributor commented:

I don't necessarily understand the point of forcing the runTx behavior on all app-side mempool implementations. If anything, this just makes the SetMempool API more restrictive. It also forces the performance burden on all custom app-side mempools of running every transaction through the AnteHandlers a third time (CheckTx, DeliverTx, and now PrepareProposal). As stated in the comment here, this shouldn't be necessary unless the application developer's custom app-side mempool implementation has a bug or is trying to do something unexpected. They can already break themselves in other ways; for example, a developer could add a panic to their Select implementation, which means no proposal would ever be sent, right?

Just a suggestion and not a dealbreaker, but I think I'd prefer to be able to opt out of this somehow.

A contributor commented:

  • AnteHandlers add no performance burden -- that's the entire point of them: lightweight checks prior to tx execution.
  • We add validity checks here as stated in the ADR. In addition, it's possible for an app to include/inject txs on Insert during CheckTx, so this ensures any injected txs are still valid.

A contributor commented:

I disagree that they add no performance burden. Some of the checks read from state which could potentially mean disk reads if the state is not in cache. There is also signature verification which is not free.

A couple of examples of reading from state during Antehandlers:

  • Reading the Account in ConsumeTxSizeGasDecorator here.
  • Calling SendCoinsFromAccountToModule in DeductFees which reads module accounts from state here.

There is also signature verification here, which looks to take ~0.15ms per op for secp256k1 if I am understanding the benchmarks correctly (Sig/secp256k1-8). This means that while ABCI is globally locked during PrepareProposal, you're doing this operation for each transaction serially. So if a block has 100 transactions on average, you're spending ~15ms per block blocking the entire application and verifying transaction signatures that have already been verified.

Let me know if I'm misunderstanding something!

A contributor commented:

I disagree that they add no performance burden. Some of the checks read from state which could potentially mean disk reads if the state is not in cache. There is also signature verification which is not free.

True, state is read from but not written to. In the grand scheme of things, the performance overhead is near negligible.

A contributor commented:

True, state is read from but not written to. In the grand scheme of things, the performance overhead is near negligible.

Just to be super clear, 15 milliseconds of additional global lock time per block (as in my signature example above) is definitely not negligible for us, especially given this is totally unnecessary computation in our case. It's very likely we're a unique case given our throughput aspirations, so your assessment is probably true for most other chains.

@tac0turtle (Member) commented on Nov 9, 2022:

Agree with @prettymuchbryce. On profiles I have run on live networks, ante handlers show up quite often and do add a non-negligible amount of overhead. For many chains this isn't a problem, but it's not something we should overlook as we are trying to make the SDK more performant.

A contributor commented:

Again, we validate the txs (i.e. AnteHandler) since applications can insert txs on Insert during CheckTx and we need to ensure these are still valid when selecting during PrepareProposal.

Also, recall:

  1. The application defines the AnteHandler chain
  2. The application, and thus its AnteHandler, is aware of the tx execution context (i.e. PrepareProposal)

So if this doesn't suit your needs, your AnteHandler chain can easily do something like "If PrepareProposal, skip XYZ".

A contributor commented:

The application defines the AnteHandler chain
So if this doesn't suite your needs, your AnteHandler chain can easily do something like "If PrepareProposal, skip XYZ".

This is a really good point that I had forgotten. I think this would be a reasonable solution.
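
For illustration, a minimal sketch of such an "if PrepareProposal, skip XYZ" decorator. How the execution context is detected is application-specific; the isPrepareProposal hook below is a hypothetical stand-in, not an SDK API:

```go
package ante

import sdk "github.com/cosmos/cosmos-sdk/types"

// SkipInPrepareProposalDecorator wraps an expensive decorator (e.g. signature
// verification) and bypasses it while a proposal is being prepared.
type SkipInPrepareProposalDecorator struct {
	inner             sdk.AnteDecorator
	isPrepareProposal func(ctx sdk.Context) bool // hypothetical detection hook
}

func (d SkipInPrepareProposalDecorator) AnteHandle(
	ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler,
) (sdk.Context, error) {
	if d.isPrepareProposal(ctx) {
		// Skip the expensive check while building the proposal.
		return next(ctx, tx, simulate)
	}
	return d.inner.AnteHandle(ctx, tx, simulate, next)
}
```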

@kocubinski merged commit 61effe8 into main on Nov 9, 2022
@kocubinski deleted the kocubinski/mempool-all branch on Nov 9, 2022 at 15:50
} else if byteCount += txSize; byteCount <= req.MaxTxBytes {
txsBytes = append(txsBytes, bz)
} else {
break
A contributor commented:

Food for thought:

This else block seems to suggest that the txs queued up in the iterator won't be part of the returned response once the MaxTxBytes has been reached. Not processing all the queued up txs is not desirable in certain app scenarios.

For example, let's say Select returns txs [A1, A2, B1, B2] but PrepareProposal only processes [A1, A2] and leaves out [B1, B2], because the MaxTxBytes has been reached. Select returned the txs in the above order, because the app has an invariant that txs of type A must appear before txs of type B, but from the app's perspective, txs of type B are much more important to include in the block than type A. So in this case, the invariant is respected, but the app doesn't get to include txs of type B and is not alerted about this behavior.

Thinking out loud: to prevent this issue, the app can validate that the txs returned from Select are under the MaxTxBytes cap (see the sketch below). However, the app doesn't get to learn whether all the txs returned from Select were processed or not (it speculates that the txs were processed by PrepareProposal, but there's no explicit signal).

Thinking out loud, some suggestions:

  • Hint MaxTxBytes as part of Select
  • Add a way for the app to learn that not all txs returned in Select were processed by PrepareProposal and retry Select / PrepareProposal
  • Add a mode where "process all txs returned by Select or fail"
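
A rough sketch of the validation idea mentioned above: an app-side check that everything Select returned fits under the cap. The function and parameter names are illustrative, not existing SDK APIs:

```go
package app

import (
	"fmt"

	sdk "github.com/cosmos/cosmos-sdk/types"
)

// validateSelectedTxs checks that the txs returned by Select stay under the
// MaxTxBytes cap, so none of them would be silently dropped by PrepareProposal.
func validateSelectedTxs(txs []sdk.Tx, enc sdk.TxEncoder, maxTxBytes int64) error {
	var total int64
	for _, tx := range txs {
		bz, err := enc(tx)
		if err != nil {
			return err
		}
		total += int64(len(bz))
	}
	if total > maxTxBytes {
		return fmt.Errorf("selected txs (%d bytes) exceed MaxTxBytes (%d)", total, maxTxBytes)
	}
	return nil
}
```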

A contributor commented:

The reason we don't pass MaxBytes to Select is that we need to ensure that selected txs are valid.

A contributor commented:

We discussed this further offline. #13831 fixes the concern.

@@ -507,16 +550,25 @@ func validateBasicTxMsgs(msgs []sdk.Msg) error {
// Returns the application's deliverState if app is in runTxModeDeliver,
A contributor commented:

nit: should this comment get updated?

return app.prepareProposalState
case runTxProcessProposal:
return app.processProposalState
default:
A contributor commented:

nit: should these cases explicitly switch on each of the mode values and have the default panic to catch any unhandled modes?
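
For illustration, a self-contained toy version of the exhaustive-switch pattern the nit suggests; the names stand in for baseapp's internals and are not the PR's exact code:

```go
package main

import "fmt"

type runTxMode int

const (
	runTxModeCheck runTxMode = iota
	runTxModeDeliver
	runTxPrepareProposal
	runTxProcessProposal
)

type state struct{ name string }

type app struct {
	checkState, deliverState, prepareProposalState, processProposalState *state
}

// getState handles every known mode explicitly; anything else panics, so a
// newly added mode cannot silently fall through to the wrong state.
func (a *app) getState(mode runTxMode) *state {
	switch mode {
	case runTxModeCheck:
		return a.checkState
	case runTxModeDeliver:
		return a.deliverState
	case runTxPrepareProposal:
		return a.prepareProposalState
	case runTxProcessProposal:
		return a.processProposalState
	default:
		panic(fmt.Sprintf("unhandled runTxMode: %d", mode))
	}
}

func main() {
	a := &app{checkState: &state{name: "check"}}
	fmt.Println(a.getState(runTxModeCheck).name)
}
```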

if err != nil {
return gInfo, nil, anteEvents, priority, err
}
} else if mode == runTxModeDeliver {
A contributor commented:

For my own understanding, why is it okay not to handle the other runTxMode values?

A contributor commented:

We remove the tx from the mempool during DeliverTx (as stated in the ADR).

// mempool, up to maxBytes or until the mempool is empty. The application can
// decide to return transactions from its own mempool, from the incoming
// txs, or some combination of both.
Select(txs [][]byte, maxBytes int64) ([]Tx, error)
A contributor commented:

Related to the above comment about MaxTxBytes: I see that the previous impl hints the max bytes. Why was this removed? Was it because, with iterators, PrepareProposal can handle the cutoff point?

@kocubinski (Member, Author) commented:

Exactly right.
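
A minimal sketch of that cutoff on the PrepareProposal side, assuming the iterator-style mempool interface; the import path and helper name are assumptions, not the PR's exact code:

```go
package app

import (
	sdk "github.com/cosmos/cosmos-sdk/types"
	"github.com/cosmos/cosmos-sdk/types/mempool"
)

// selectUpToMaxBytes walks the mempool iterator and collects encoded txs until
// maxTxBytes would be exceeded, mirroring the cutoff quoted earlier in this PR.
func selectUpToMaxBytes(it mempool.Iterator, enc sdk.TxEncoder, maxTxBytes int64) [][]byte {
	var (
		byteCount int64
		txsBytes  [][]byte
	)
	for ; it != nil; it = it.Next() {
		bz, err := enc(it.Tx())
		if err != nil {
			continue // illustrative handling: skip txs that fail to encode
		}
		if byteCount += int64(len(bz)); byteCount > maxTxBytes {
			break // the proposal is full
		}
		txsBytes = append(txsBytes, bz)
	}
	return txsBytes
}
```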

Labels: C:x/auth, C:x/distribution (distribution module related), C:x/genutil (genutil module issues)

10 participants