
feat: Separate Tokenomics from Core Protocol to a Controller Contract #13791

Closed
dantaik opened this issue May 22, 2023 · 8 comments

Comments

@dantaik
Contributor

dantaik commented May 22, 2023

Currently, tokenomics is tightly integrated with the core protocol. I suggest decoupling the tokenomics code into a distinct contract. This division would:

  1. Streamline the core protocol, improving testability.
  2. Promote codebase reusability, enabling others to employ it independently of our tokenomics.
  3. Simplify tokenomics testing by eliminating the need for proposing/proving actual blocks.
  4. Allow concurrent tokenomics deployment across different L3 testnets without altering the core protocol.
  5. Facilitate safer upgradability, leaving the core protocol undisturbed when tokenomics updates are necessary.

This decoupling, however, might inflate gas costs.

Certain implementation precautions are necessary:

  1. The tokenomics contract should not hold custody of user-deposited TaikoTokens; this role should remain with the core protocol to prevent token migration during a tokenomics switch. Alternatively, we could simply avoid user-deposited TaikoTokens altogether (see below).
  2. To minimize gas costs, a single call to the tokenomics contract should be made when multiple blocks are verified.
  3. If the tokenomics lookup resolves to '0x0', it signifies the absence of tokenomics (the core protocol shall still work as a permissionless rollup).

It may be beneficial to rename 'tokenomics' to a more representative term like controller. This contract could whitelist proposer/prover addresses without token or tokenomics involvement. For example, a company could create a simple controller contract for deploying a centralized, permissioned Taiko rollup without a token.

Considering the core protocol may not require a token, we might want to remove taikoTokenBalances and associated functions from the core protocol. The controller contract could then undertake token minting/burning operations if tokens are introduced.
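A minimal sketch of what such a controller hookup could look like (all names here, such as IController, afterBlockProposed, and afterBlocksVerified, are illustrative assumptions, not the actual Taiko interfaces):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical sketch only; the interface and function names are assumptions.
interface IController {
    // Called once per proposed block.
    function afterBlockProposed(address proposer, uint64 blockId) external;

    // Called once for a whole batch of verified blocks, so the core
    // protocol makes a single external call regardless of batch size.
    function afterBlocksVerified(uint64 firstId, uint64 lastId) external;
}

contract CoreProtocol {
    IController public controller;

    function _onBlocksVerified(uint64 firstId, uint64 lastId) internal {
        // A zero controller address means "no tokenomics": the rollup
        // keeps working as a permissionless rollup.
        if (address(controller) != address(0)) {
            controller.afterBlocksVerified(firstId, lastId);
        }
    }
}
```

A permissioned deployment could then implement IController with a simple whitelist of proposer/prover addresses and no token involvement at all.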

@dantaik dantaik changed the title feat: Decouple Tokenomics from core protocol for improved modularity, testability and upgradability feat: Separate Tokenomics from Core Protocol to a Controller Contract May 22, 2023
@dantaik dantaik assigned adaki2004 and Brechtpd and unassigned Brechtpd May 22, 2023
@Brechtpd
Contributor

By tokenomics you mean the proposer selection / prover selection / fee system, right? (I think that goes well beyond what people would normally understand by tokenomics, so just making sure.)

Yes, I think we should do this, and the same for the verifier contracts; this will otherwise get too messy if we want to make things more flexible/modular.

> This decoupling, however, might inflate gas costs.
> To minimize gas costs, a single call to the tokenomics contract should be implemented when multiple blocks are verified.

If it makes things significantly simpler, I think it should be fine to call for each block. Given how current gas costs are set up, warm storage reads/writes are very cheap compared to cold ones (this is even true for calls), so the gas savings probably won't be that significant, while the complexity of batching things could be high (and batching also costs gas).

> The tokenomics contract should not hold custody of user-deposited TaikoTokens; this role should remain with the core protocol to prevent token migration during a tokenomics switch or we should simply avoid user-deposited TaikoTokens (see below).
> Considering the core protocol may not require a token, we might want to remove taikoTokenBalances and associated functions from the core protocol. The controller contract could then undertake token minting/burning operations if tokens are introduced.

There could still be a shared, independent smart contract where funds are stored that could be used by multiple Taiko instances (sure, it costs some extra gas). It would be up to the specific instance to use it or not, but it could make sense depending on how the selection mechanism works.

@dantaik
Contributor Author

dantaik commented May 23, 2023

Brecht, I appreciate your insights. Contemplating this issue and Dani's PR, which isolates the verifier logic into a separate contract, I still want to advocate for cost optimization over composability. In the long run, any third party utilizing our code will inevitably modify it to minimize gas costs and eliminate inter-contract calls, nullifying the benefit of contract division.

I propose leveraging inheritance. The core protocol could simply define empty virtual functions, allowing derived contracts to override these with bespoke logic.

We should experiment to measure the difference in gas cost with these two options.
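The inheritance idea could be sketched roughly as follows (a minimal sketch; the hook names match the afterBlockProposed/afterBlocksVerified functions discussed in this thread, but the signatures and contract names here are assumptions):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical sketch: the core protocol defines empty virtual hooks...
abstract contract TaikoCore {
    function proposeBlock(uint64 blockId) external {
        // ... core proposing logic ...
        afterBlockProposed(blockId);
    }

    // Empty by default: the core protocol works without any tokenomics.
    function afterBlockProposed(uint64 blockId) internal virtual {}
    function afterBlocksVerified(uint64 count) internal virtual {}
}

// ...and a derived contract overrides them with bespoke logic.
contract TaikoWithTokenomics is TaikoCore {
    function afterBlockProposed(uint64 blockId) internal override {
        // e.g. charge the proposer a block fee here
    }

    function afterBlocksVerified(uint64 count) internal override {
        // e.g. pay prover rewards for the verified batch here
    }
}
```

Since the hooks are internal, the compiler resolves them at deployment time and no external calls are involved.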

@dantaik
Contributor Author

dantaik commented May 23, 2023

I've conducted some testing on (this PR) and would like to share some of my observations:

  1. Controller contract: I found defining a future-proof interface for the controller contract to be a significant challenge. If we aim to make it more future-proof, we may need to pass a large amount of data to the afterBlockProposed(...) function, which makes the transaction more costly. Conversely, if we don't include enough data in the function, it may necessitate alterations to the core protocol later on, especially to accommodate new tokenomics designs. If the controller interface isn't relatively stable, splitting the tokenomics logic into separate contracts appears less sensible.

  2. overridable hook/callback functions: I've attempted to define overridable virtual functions such as afterBlockProposed and afterBlocksVerified in the PR. However, I'm not entirely convinced by the changes introduced. I found that having tokenomics logic built directly into proposeBlock and verifyBlocks is not only simpler to implement, but also easier to understand.

Based on my testing, I am leaning towards maintaining the current design: keeping both the proof verification logic and tokenomics within the protocol, and concentrating on minimizing transaction costs rather than excessively optimizing for composability. I believe a well-planned storage data layout will allow users to upgrade and replace the implementation, if required. Splitting the logic into multiple contracts ends up exposing internal logic as external interfaces, which could potentially harm upgradability.

@Brechtpd
Contributor

I would actually argue for the opposite, too much focus on gas costs at this point. It's very easy to optimize things once we're happy with the functionality and design (in this case: just replace the external contract call by an internal library call and you're done). Optimizing each step towards that final design is just wasted effort I feel.

I agree that we don't want to be more flexible than needed and so don't want to add functions/APIs that we don't need at this point.
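The optimization step described above could look like this once the design settles (an illustrative sketch; LibVerifying and the function names are assumed, not the actual Taiko code):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Internal library functions are compiled into the calling contract and
// invoked with a JUMP, so there is no external CALL overhead.
library LibVerifying {
    function afterBlocksVerified(uint64 firstId, uint64 lastId) internal {
        // same logic as the external controller hook, just inlined
    }
}

contract CoreProtocol {
    function verifyBlocks(uint64 firstId, uint64 lastId) external {
        // ... verification logic ...
        LibVerifying.afterBlocksVerified(firstId, lastId); // JUMP, not CALL
    }
}
```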

@Brechtpd
Contributor

So a bit more on why I think optimizing at this point is early: things will still change, pretty significantly as far as I currently know, and who knows what else will come up in the next months. And even the things we know that still have to change aren't fully figured out yet.

  • One with a very big impact on gas costs is block aggregation: Block aggregation circuit zkevm-circuits#85. Depending on how this is implemented, this will very likely impact how blocks are proven and how the prover fee system works
  • Once that's figured out I think we can look at the prover system again: auction, the proving system that shall not be named, ... So potentially many changes to how blocks are proven and paid for.
  • SGX prover: feat(protocol): basic multi-prover with SGX #13693: Still to be fully figured out, but unlikely to work with a single ECDSA signature if we want to do it in a decentralized way, with multiple provers able to run their own SGX provers. (They'll have to prove that what they are running is genuine, and letting them generate their own proofs using their own keys will need some additional smart contract support, which will very likely make sense to do in its own contract.) Also multiple nodes in SGX, different TEE support, etc.
  • Unprovable blocks: feat(protocol): Unprovable blocks #13724. Likely only minor changes necessary to the smart contracts/circuits, most of the changes should be on the node level
  • Different testnet/L3 setups, which are just throw-away code and ideally should not touch the actual core contracts at all
  • More that I forgot or we simply don't know yet

Implementing these and making sure they work: not so easy.
Optimizing code that works and is tested: pretty easy I think.

@adaki2004
Contributor

adaki2004 commented May 24, 2023

I'm somewhat in the middle on this. I'd like to separate the logic from a maintenance/upgradeability/readability point of view, but what if we can find a good compromise?

As we tested with the verifier, separating this into two different deployable contracts would cost somewhere around 4-5k gas with the current design and on-chain data (which is kinda significant: 0.009 USD per block ATM).

> And even the things we know that still have to change aren't fully figured out yet.

This would also most probably imply that we need to upgrade the protocol anyway, so one advantage of separating the protocol from the proof verification into different deployed contracts is most probably gone, while the extra fee stays; so it's not a win-win.

TBH, I like the way we outsourced the proof verification to different libraries in this PR. It's maintainable, and maybe we could even do an aggregated internal function (as we already did some commits ago, just removed due to gas optimization), but all in all I'm somewhere in the middle.

@Brechtpd
Contributor

Honestly I'm probably also somewhere in the middle!

My main point is simply that for me it seems we should optimize for simplicity and ease of changing things. We're not planning on using the protocol on mainnet very soon so why worry so much about gas costs? Realistically we're gonna go through a couple of iterations before we get something we'll actually want to deploy on mainnet, so I don't see a reason we should try and optimize each iteration as much as possible and then throw it away anyway.

@adaki2004
Contributor

> Honestly I'm probably also somewhere in the middle!
>
> My main point is simply that for me it seems we should optimize for simplicity and ease of changing things. We're not planning on using the protocol on mainnet very soon so why worry so much about gas costs? Realistically we're gonna go through a couple of iterations before we get something we'll actually want to deploy on mainnet, so I don't see a reason we should try and optimize each iteration as much as possible and then throw it away anyway.

Yes, this is why I say I'm in the middle too: not too focused on gas (though we should be reasonable with it), but rather on having something that is not a nightmare to maintain during the upcoming iterations. :)

To me the main question is: which one is easier and more practical?
Outsourcing a modular but monolithic codebase to different deployable contracts later, or the other way round?
I think the former is easier, simply because you don't have to keep track of deployed contracts, addresses, proxies, and dependencies in general. This is the reason I'd separate them well, with clear interfaces, but 'in-house'.

@dantaik dantaik closed this as completed May 27, 2023