
fix: [M-01] zkSync limits #328

Merged

merged 2 commits into master from m_01 on Jul 28, 2023
Conversation

@mrice32 (Contributor) commented Jul 28, 2023

This PR adds a proxy contract that can be used to bypass zkSync's per-address deposit limits. To allow this proxy to be swapped out, the ETH-sending address was separated from the address used for contract calls, so one can be changed without affecting the other.
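The address separation described above can be sketched as follows. This is a minimal, hypothetical illustration: the contract and variable names (AdapterSketch, ethSendTarget, callTarget) are invented for clarity and do not match the PR's actual code. "Swapping out" the ETH path would mean deploying a new adapter with a different constructor argument (e.g. pointing at a freshly deployed LimitBypassProxy), leaving the contract-call path untouched.

```solidity
// SPDX-License-Identifier: BUSL-1.1
pragma solidity ^0.8.0;

// Hypothetical sketch only; names are illustrative, not from the PR.
contract AdapterSketch {
    // Destination for raw ETH sends. If this address gets rate-limited,
    // a new adapter is deployed with this pointed at a bypass proxy.
    address public immutable ethSendTarget;

    // Destination for contract calls (e.g. ERC20 bridge interactions),
    // unaffected by swapping the ETH-sending path.
    address public immutable callTarget;

    constructor(address _ethSendTarget, address _callTarget) {
        ethSendTarget = _ethSendTarget;
        callTarget = _callTarget;
    }
}
```

Because both targets are immutable, a swap is a redeploy rather than an in-place mutation, which matches the redeploy-based remedy discussed below in the thread.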

Signed-off-by: Matt Rice <matthewcrice32@gmail.com>
@nicholaspai (Member) left a comment:


This seems like the most straightforward implementation of this, and I like that the default case doesn't use the proxy. In the case where we do get limited, how would the remedy work?

Would we deploy the proxy and try again? And if we get limited again, then we redeploy yet again? I'm kind of skeptical that the HubPool would ever get limited and that we'd subsequently be able to bypass the limit with another address.

It seems like if zkSync truly wanted to censor us, they could keep updating their limits for our new proxy addresses, so the only way around it in that case would be to split the sends up across multiple callers... but if it ever comes to that, we probably have bigger problems.

@james-a-morris (Contributor) left a comment:


LGTM. There are multiple instances of SolHint issues, and I wonder whether we should modify the linter config or modify the code to conform to it.

// A modified ZKSync_Adapter should be deployed with this address swapped in for all zkSync addresses.
contract LimitBypassProxy is ZkSyncInterface, ZkBridgeLike {
    using SafeERC20 for IERC20;
    ZkSyncInterface public constant zkSync = ZkSyncInterface(0x32400084C286CF3E17e7B677ea9583e60a000324);
Contributor commented:

Solhint is indicating that this should be in uppercase snake case (ZKSYNC). Should we modify the solhint config?

@mrice32 (Contributor, Author) commented Jul 28, 2023:

@james-a-morris @pxrl Since we generally don't snake case contract interfaces, I removed the solhint warning for now to appease the linter and kept this consistent. If we decide to change in the future, we can revert this config change. But for the audit, I wanted to avoid making any changes outside of the scope.
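For reference, suppressing a naming-convention warning like this in a `.solhint.json` typically looks like the fragment below. This is an assumption-laden sketch: the rule name `const-name-snakecase` is the standard solhint rule for constant naming, but the PR does not show which rule or config file was actually changed.

```json
{
  "extends": "solhint:recommended",
  "rules": {
    "const-name-snakecase": "off"
  }
}
```

Reverting the config change later, as suggested, would just mean deleting the `"off"` entry so the rule falls back to the recommended default.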

Signed-off-by: Matt Rice <matthewcrice32@gmail.com>
@mrice32 (Contributor, Author) commented Jul 28, 2023:

> This seems like the most straightforward implementation of this and I like that the default case doesn't use the proxy. In the case where we do get limited, how would the remedy work.
>
> Would we deploy the proxy and try again? and if we get limited again, then we redeploy yet again? I'm kind of skeptical that the HubPool would ever get limited and we'd subsequently be able to bypass the limit with another address.
>
> It seems like if zksync truly wanted to censor us they could keep updating their limits for our new proxy addresses so the only way around it in that case would be to split the sends up by multiple callers... but if it ever comes to that we probably have bigger problems

Agreed, we would basically be deploying a new adapter with a new proxy address over and over. An auto-deploy would be a little better, but would also be more complex, so I wanted to avoid doing something overly complicated for a scenario that is unlikely to occur.

@mrice32 mrice32 merged commit 7886269 into master Jul 28, 2023
5 checks passed
@mrice32 mrice32 deleted the m_01 branch July 28, 2023 22:01