Conversation

@nicholaspai (Member)

No description provided.

@nicholaspai nicholaspai marked this pull request as ready for review February 3, 2022 18:17
@nicholaspai (Member Author)

@mrice32 @chrismaree Not sure why, but the yarn.lock wasn't regenerating after I added something to the package.json, so I removed and re-added it, which caused a big diff.

@mrice32 (Contributor) left a comment

Looks good! Only minor questions/comments.

* INTERNAL FUNCTIONS *
**************************************/

function _setCrossDomainAdmin(address newCrossDomainAdmin) internal {
Contributor:

Is there any way to call this function after construction? Is this something we'd like to be able to change? (Genuine question, not suggesting a change.)

Member:

There is a setCrossDomainAdmin that can call this after construction, enabling us to change it. I think we'd want to change it in the event something broke within the L1<->L2 logic. One problem I can see with this pattern, though, is that it assumes the cross-domain owner implements sufficient methods to call setCrossDomainAdmin over the bridge. This should be doable, as it's going to sit within the "Adapter" logic that I've created for the L1->L2 calls. We can create a base contract (CrossDomainAdminBase) or something equivalent that these adapters will inherit from. Can think more on this in the next few PRs.
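The admin-rotation flow discussed here can be sketched with a small model: a spoke pool that only accepts admin actions from messages whose L1 sender matches the current crossDomainAdmin. This is an illustrative TypeScript model with hypothetical names, not the actual Solidity contracts, which gate the call with onlyFromCrossDomainAccount(crossDomainAdmin).

```typescript
// Hypothetical model of a cross-domain message relayed over the canonical bridge.
type CrossDomainMessage = { l1Sender: string; call: (pool: SpokePoolModel) => void };

class SpokePoolModel {
  constructor(public crossDomainAdmin: string) {}

  // Mirrors the onlyFromCrossDomainAccount(crossDomainAdmin) check: reject any
  // message whose L1 sender is not the current admin.
  receiveMessage(msg: CrossDomainMessage): void {
    if (msg.l1Sender !== this.crossDomainAdmin) throw new Error("NotCrossDomainAdmin");
    msg.call(this);
  }

  // Rotating the admin succeeds only when invoked via receiveMessage above,
  // i.e. only the current admin can hand control to a new one.
  setCrossDomainAdmin(newAdmin: string): void {
    this.crossDomainAdmin = newAdmin;
  }
}
```

Note that once the admin is rotated, messages from the old admin are rejected, which is why the cross-domain owner must itself be able to send the setCrossDomainAdmin call over the bridge.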

Member Author:

Yeah, I added this function without telling you. We should allow this to be changed from L1.

Contributor:

Sorry, I'm a little confused. Are there two functions that allow you to setCrossDomainAdmin? If so, can we consolidate them into a single function?

Member Author:

No, there's only one, implemented in this contract. setCrossDomainAdmin is included in the SpokePoolInterface.

}

// TODO:
function setDepositQuoteTimeBuffer(uint64 buffer) public onlyFromCrossDomainAccount(crossDomainAdmin) {
Contributor:

Is there any interface or anything that ensures that these functions match across all SpokePool implementations?

Member Author:

Great point, I'll add an interface.

Member Author:

Do you think all L2 spoke pools will implement the notion of a crossDomainAdmin? @chrismaree
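The role a shared interface plays here can be sketched in TypeScript terms (hypothetical names; in the actual repo a Solidity SpokePoolInterface enforces this at compile time for each L2 implementation):

```typescript
// Hypothetical analogue of SpokePoolInterface: every spoke pool implementation
// must expose the same admin functions, or the compiler rejects it.
interface SpokePoolLike {
  setCrossDomainAdmin(newAdmin: string): void;
  setDepositQuoteTimeBuffer(buffer: number): void;
}

// An implementation that omitted or mis-typed either function would fail to
// satisfy `implements SpokePoolLike`.
class OptimismSpokePoolModel implements SpokePoolLike {
  crossDomainAdmin = "";
  depositQuoteTimeBuffer = 0;

  setCrossDomainAdmin(newAdmin: string): void {
    this.crossDomainAdmin = newAdmin;
  }

  setDepositQuoteTimeBuffer(buffer: number): void {
    this.depositQuoteTimeBuffer = buffer;
  }
}
```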


export const zeroRawValue = { rawValue: "0" };

export const spokePoolRelayerRefundRoot = createRandomBytes32();
Member:

I have a const for this in my other PR called mockDestinationDistributionRoot.

Member Author:

Will merge that in.


contract Optimism_SpokePool is CrossDomainEnabled, SpokePoolInterface, SpokePool {
// Address of the L1 contract that acts as the owner of this SpokePool.
address public override crossDomainAdmin;
Member Author:

@chrismaree @mrice32 Should crossDomainAdmin be in the SpokePool, or do you think this notion of a cross-domain admin will be different for each L2?

Contributor:

What is the cross-domain admin? The Optimism bridge address on L2? Or is it the BridgeAdmin address on L1?

Member Author:

It's the contract that calls relayMessage on the canonical bridge, so it's either the BridgeAdmin or the HubPool.

Member:

For now, the HubPool, methinks. We might want to change this later, but let's leave it there optimistically for the moment.

@mrice32 (Contributor) left a comment

LGTM!




struct RelayerRefund {
// Merkle root of relayer refunds.
bytes32 distributionRoot;
Contributor:

One interesting tradeoff about doing claims via merkle proofs one-by-one is that this will result in increased calldata costs on L2 compared to passing in all the leaves and storing (or maybe even executing) them in a single call. This is because you have to submit each leaf individually with another bytes32 element for each layer in the tree, so your overall calldata might be significantly larger than passing in the leaves alone. I'm not sure how significant this would end up being, but I wonder how hard it would be for us to take in all the leaves and generate the proofs internally on-chain to show that they roll up into the root.

Not suggesting a change, just wanted to mention this.

Member Author:

Hm, so you think this could work by the HubPool relaying all leaves to the SpokePool here and storing them on-chain, as opposed to just relaying the root? @chrismaree wdyt?

Member:

I don't think @mrice32 is proposing the HubPool pass the leaves; it can't do that in an efficient way. I think @mrice32 is proposing a technique where you can execute multiple leaves at once in a compressed structure and prove they are within the provided root. @mrice32, what you describe would fit into the execution stage of the process, right?

Member Author:

How do you imagine we improve on just making the contract multicall and assuming distributeRelayerRefund is called multiple times in one batched txn?

Contributor:

I want to be super clear that I don't think we should do this now; before we add this complexity, we should try to better understand the amount of calldata/gas savings we'd see from something like this with different tree sizes.

Agree @chrismaree. I'm not suggesting the HubPool do anything different, since any added data anywhere on mainnet is going to be expensive.

If you did multicall, you would effectively be passing:

[{distributionStruct1, proofArray1}, {distributionStruct2, proofArray2}, ...]

What I'm suggesting is an additional method that allows you to pass all leaves this way:

[{distributionStruct1}, {distributionStruct2}, ...]

If we get the logic right in the contract, the leaf data could be used to generate the proofs, so you wouldn't need to pass them in, thereby reducing calldata.
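The tradeoff above can be made concrete with a sketch: given all the leaves, a contract can recompute the root directly, so no per-leaf proofs need to be submitted. This TypeScript model uses SHA-256 as a stand-in for keccak256 and hypothetical helper names; it is an illustration of the calldata arithmetic, not the actual contract logic.

```typescript
import { createHash } from "crypto";

// Stand-in hash for keccak256 (illustration only).
const hash = (data: Buffer): Buffer => createHash("sha256").update(data).digest();

// Sorted-pair hashing, so sibling order need not be recorded.
const hashPair = (a: Buffer, b: Buffer): Buffer =>
  Buffer.compare(a, b) <= 0 ? hash(Buffer.concat([a, b])) : hash(Buffer.concat([b, a]));

// Recompute the merkle root from the full set of leaves, as a contract could
// do on-chain if every leaf were submitted in a single call.
function rootFromLeaves(leaves: Buffer[]): Buffer {
  let level = leaves.map(hash);
  while (level.length > 1) {
    const next: Buffer[] = [];
    for (let i = 0; i < level.length; i += 2) {
      // Odd node at the end is carried up unchanged.
      next.push(i + 1 < level.length ? hashPair(level[i], level[i + 1]) : level[i]);
    }
    level = next;
  }
  return level[0];
}

// Extra bytes32 calldata words needed when each leaf is claimed individually:
// roughly one sibling hash per tree layer, per leaf.
function proofWordsPerBatch(numLeaves: number): number {
  return numLeaves * Math.ceil(Math.log2(numLeaves));
}
```

For example, with 8 leaves, per-leaf claims carry roughly 8 × 3 = 24 extra bytes32 words of proof calldata on top of the leaf data itself, versus zero if the contract takes all leaves at once and verifies that they roll up into the stored root.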

Co-authored-by: Matt Rice <matthewcrice32@gmail.com>
@nicholaspai nicholaspai requested a review from mrice32 February 4, 2022 23:51
@chrismaree (Member) left a comment

This looks great to me! Keen to get the next bit of functionality in with the proofs.

@nicholaspai (Member Author)

Just giving @mrice32 a chance to comment here before merging.

@nicholaspai nicholaspai merged commit 9efe41b into master Feb 7, 2022
@nicholaspai nicholaspai deleted the npai/relay-refund-spoke-pool branch February 7, 2022 17:24
pxrl pushed a commit that referenced this pull request Feb 20, 2024