
How to resolve incorrect lock? #123

Closed
okwme opened this issue Jan 31, 2020 · 12 comments · Fixed by #182

Comments

@okwme
Contributor

okwme commented Jan 31, 2020

I locked an ERC-20 on Eth, which was minted on the SDK as a Coin. Then I locked that Coin on the SDK in order to return it to Eth. Of course, when doing this you should not lock the Coin but burn it; I did the incorrect lock instead of the burn. After adding error logging to the Eth side, it throws an error that says:

> yarn peggy:process 1
yarn run v1.16.0
$ yarn workspace testnet-contracts peggy:process 1
$ truffle exec scripts/sendProcessProphecy.js 1
You can improve web3's peformance when running Node.js versions older than 10.5.0 by installing the (deprecated) scrypt package in your project
You can improve web3's peformance when running Node.js versions older than 10.5.0 by installing the (deprecated) scrypt package in your project
Using network 'ganache'.

Fetching Oracle contract...
Attempting to send processBridgeProphecy() tx...
Returned error: VM Exception while processing transaction: revert Token must be a whitelisted bridge token -- Reason given: Token must be a whitelisted bridge token.
Done in 9.56s.

What are the methods available for undoing an incorrectly made relay?
In a similar vein: what are the methods for relaying a transaction that was originally missed?

@musnit

musnit commented Feb 26, 2020

This initial version of peggy is designed with the assumption that a transaction, if valid, will always succeed. This assumption should hold if a high enough threshold of validators/relayers are all operating correctly at all times (ie, a lock will always result in a minted coin on the opposite chain, and a burn will always result in an unlock on the opposite chain).

The whitelist was added as a temporary fix, until more information/features could be gathered around having Cosmos's Coin and Bank systems support multiple tokens with the same denomination. (#69)

So, for example: in your scenario, it seems like COIN was whitelisted, then COIN was locked on Eth to create CosmosCOIN. You're trying to lock CosmosCOIN, which is not whitelisted, so it fails, preventing you from mistakenly succeeding at the incorrect lock. The way to resolve the above would be to get CosmosCOIN whitelisted so that the transfer succeeds onto the Ethereum side and creates ETHCosmosCOIN, then to burn the ETHCosmosCOIN and get your CosmosCOIN back on the Cosmos side.

Another reason for the whitelisting feature was to avoid scenarios like yours above, ie, to prevent a user from mistakenly locking a token when they ought to burn it - though this is still not fully prevented, as the Cosmos transaction still goes through. It may be good to implement the same whitelist on the Cosmos side so that the incorrect transfer couldn't happen in the first place, though this would bring the challenge of having to synchronise the whitelist on both ETH/Cosmos. Another option would be to have the whitelist checked at the UX layer, so the frontend prevents the incorrect transfer as the user is making it (ie, the CLI checks the Ethereum whitelist, or if it were a webapp, the webapp queries Ethereum for the whitelist).

Again, ideally default support for all tokens, including tokens with the same denomination, seems most natural, so aiming to get that feature supported on Cosmos may be the cleanest approach, in which case no whitelist should be needed. We considered implementing our own Coin/Bank that supported this, but it seemed like overkill, and like a feature that should perhaps be supported by the core SDK.

@okwme
Contributor Author

okwme commented Feb 27, 2020

I like the prefix solution better as it can prevent a mishap at the level of the state machine. Plus, there might be other situations where it's good to be able to tell what category of asset a denom belongs to within the SDK; being able to check whether an asset is wrapped or not could be useful. There's also been some talk about using something like a metadata module to store information about a coin, like vesting.
cosmos/cosmos-sdk#1980
cosmos/cosmos-sdk#4911

@musnit

musnit commented Mar 20, 2020

I'm not sure if the prefix method is perfect.

With the prefix method, I suppose we would use something like a naming convention of {chainid}{bridgecontractaddress}{tokencontractaddress}_{tokensymbol} that would apply to tokens coming from Ethereum. Tokens coming from Cosmos would not be restricted to any naming convention.
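
For illustration, here's a minimal Go sketch of what constructing such a denom could look like - the separator, casing, and field order are just assumptions, not a settled format:

```go
package main

import (
	"fmt"
	"strings"
)

// buildBridgeDenom sketches the naming convention mentioned above:
// {chainid}{bridgecontractaddress}{tokencontractaddress}_{tokensymbol}.
// Everything about the format here is an assumption for illustration.
func buildBridgeDenom(chainID, bridgeAddr, tokenAddr, symbol string) string {
	return fmt.Sprintf("%s%s%s_%s",
		chainID,
		strings.ToLower(bridgeAddr),
		strings.ToLower(tokenAddr),
		strings.ToLower(symbol))
}

func main() {
	// Illustrative, shortened addresses only.
	fmt.Println(buildBridgeDenom("1", "0xabc123", "0xdef456", "dai"))
	// Output: 10xabc1230xdef456_dai
}
```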

This could still result in a weird conflict: say, for example, someone named a coin originating on Cosmos single_collateral_stable_coin.

When they tried to relay this, they would hit weird, unexpected issues, as we wouldn't be able to tell whether the coin originated on Cosmos or on Ethereum. We could implement logic that checks the first three prefix segments and assumes the coin is Ethereum-sourced if they match the above types and Cosmos-sourced if not, but again that seems like a hacky fix based on assumptions rather than being foolproof with real metadata.

Ideally there would be a real token metadata system, included either as part of our bridge modules or built into Cosmos itself - that would then allow us to unambiguously check where a token is from, whether it's wrapped or not, and more, as described in those linked issues.

Some of the above thinking, including waiting on a stable/de facto metadata system to emerge from the Cosmos SDK team, is why we opted for a whitelist-based solution.

The current system does still have a process for resolving an incorrect lock:
1 - you make the incorrect lock, and it succeeds through the state machine
2 - the relayer detects the event from the cosmos state machine but fails to transfer it, as the token is not whitelisted
3 - you get your incorrectly-locked token whitelisted through an ethereum transaction
4 - the relayer tries again and/or you manually relay the lock again; it now succeeds through the ethereum state machine
5 - your incorrectly-locked token gets minted to you on ethereum
6 - you burn your incorrectly-locked token on ethereum
7 - the relayer relays the burn back to cosmos
8 - your incorrectly-locked token gets unlocked and you have it back

The above process is cumbersome, but should work. Another addition that may be helpful: before step 1, in ebcli, we could query ethereum (through Infura or our own eth node) to check the whitelist and block your transaction at the CLI level, before it even gets to the state machine. The above 8 steps would then still be possible for someone not using the CLI, or using a modified CLI, but not for someone using the normal tooling. The only concern here is that we'd be adding an external Infura/Eth node dependency to the CLI for the lock command, though that may be fine.

Any thoughts?
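
For what it's worth, here's a rough Go sketch of that CLI-level pre-check - the WhitelistQuerier interface and its stub are hypothetical placeholders for however the CLI would actually reach Ethereum, not an existing API:

```go
package main

import (
	"errors"
	"fmt"
)

// WhitelistQuerier is a hypothetical abstraction over however the CLI would
// reach Ethereum (Infura or a local node) to read the bridge whitelist.
type WhitelistQuerier interface {
	IsWhitelisted(tokenSymbol string) (bool, error)
}

// precheckLock blocks a lock at the CLI level, before anything reaches the
// Cosmos state machine, if the token is not on the Ethereum-side whitelist.
func precheckLock(q WhitelistQuerier, tokenSymbol string) error {
	ok, err := q.IsWhitelisted(tokenSymbol)
	if err != nil {
		return fmt.Errorf("querying whitelist: %w", err)
	}
	if !ok {
		return errors.New("token is not whitelisted on the Ethereum bridge; refusing to lock")
	}
	return nil
}

// stubQuerier stands in for the real Ethereum query in this example.
type stubQuerier struct{ whitelisted map[string]bool }

func (s stubQuerier) IsWhitelisted(sym string) (bool, error) { return s.whitelisted[sym], nil }

func main() {
	q := stubQuerier{whitelisted: map[string]bool{"coin": true}}
	if err := precheckLock(q, "cosmoscoin"); err != nil {
		fmt.Println("lock rejected:", err) // blocked before the state machine ever sees it
	}
}
```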

@okwme
Contributor Author

okwme commented Mar 20, 2020

It's true that having fully featured metadata would be a more complete solution. I don't think there's that much wrong with just settling on a naming convention and enforcing it at the bridge level, though. For example, if underscore were the chosen demarcation, the bridge could restrict usage to denoms that do not contain any underscores. It would probably be good to use something a little less common, maybe ::, but the idea stays the same.

Checking the whitelist from the CLI is another option, but I agree it would be good not to bog down the CLI with that kind of action. Maybe exposing an endpoint on the relayer that the CLI could query for that information is an option.

Your outlined solution makes sense, apart from the fact that the relayer isn't currently capable of retrying a failed transaction. There's also the general problem that whitelisting a new token can only be done by the operator. It seems like maybe any token coming from an SDK chain should be automatically whitelisted upon generation?

For the sake of concrete actions on the current codebase that would prevent an invalid lock, I'd suggest using a single namespace to denote whether a token came from on chain or off chain. The complexity of tracking chain id and bridge address is there to allow multiple chains, and multiple instances of a bridge on the same chain, to interact with a single SDK chain. I think that is out of scope for this version of peggy; realistically we're talking about only one SDK chain talking to one Eth chain via one bridge contract. Adding a prefix like peg:: to denoms on the SDK side or on the Eth side would be enough to distinguish the origin of the asset and automate the decision to burn or lock, as in the sketch below.
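
A tiny Go sketch of that decision, assuming a peg:: prefix (the prefix value and function names are illustrative, not from the codebase):

```go
package main

import (
	"fmt"
	"strings"
)

// pegPrefix marks denoms that were minted by the bridge, i.e. assets that
// originated on the other chain. The exact prefix is an assumption here.
const pegPrefix = "peg::"

// directionFor decides whether sending an asset back across the bridge
// should burn or lock it, based purely on the denom prefix.
func directionFor(denom string) string {
	if strings.HasPrefix(denom, pegPrefix) {
		return "burn" // bridge-minted representation: burn it to release the original
	}
	return "lock" // native asset: lock it so a representation is minted on the other side
}

func main() {
	fmt.Println(directionFor("peg::dai")) // burn
	fmt.Println(directionFor("atom"))     // lock
}
```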

Alternatively, improving the relayer UX so that it can check the status of relayed events, detect when one failed, and try again would be another way of supporting the user flow you outlined.

@denalimarsh

I'm in favor of removing the whitelist and updating the namespacing. In my opinion the value of peg::dai is directly tied to the associated contract address, which should therefore be included in the namespace solution - a user could deploy a new ERC20 contract with denom dai, lock the worthless assets, and still receive peg::dai on the other side. I noticed that in the IBC relayer demo, chain ibc0's n0token has denom transfer/ibczeroxfer/n0token after it's been transferred to chain ibc1. The relayer codebase has comments indicating it's a placeholder solution, and I like the :: convention, but should we be adhering to a similar namespacing strategy until a metadata solution is ready?

Also, we'll still need some sort of listed assets mapping on the contracts as we'd want to deploy a new token contract for the first lock of a cosmos-based asset but utilize the same contract for subsequent locks of the same asset.

@musnit

musnit commented Mar 21, 2020

@okwme Cool, single namespace with a more uncommon prefix sounds fine.

Agree that tracking chain id and bridge address is not that important - in fact, chain id and bridge address need to be implicitly tracked by the relayers, and all relayers need to follow them in the same way, otherwise we'd get clashing oracle claims; so we effectively have implicit tracking of chain id and bridge address anyway.

@denalimarsh Agree, we do need to track contract addresses for tokens on both sides of the bridge though - are there any major implications for doing this on the Solidity side after removing whitelisting?

I think the suggested namespacing conventions and idea to check/sync with IBC are nice, but shouldn't block us from moving forward and are easy to change as long as we aren't running with real users/$$ and don't need to maintain backwards compatibility with changes. I'm sure we'll want to implement full IBC integration in a future phase of the peggy project, still before we have real users/$$, so conventions/changes can also be made then.

@denalimarsh

If we remove the whitelist, we'll need to add a map of (cosmos-based asset denom => token contract address). Function completeProphecyClaim() will need to check this new map and conditionally deploy a new bridge token contract or use an existing one. Currently newProphecyClaim() assumes a contract is already available (eth-based assets have an existing contract; cosmos-based assets have one created when they're added to the whitelist) and therefore requires a contract address.

Some thoughts on removing param _tokenAddress from newProphecyClaim():

  1. Could add separate entry points for burn/locks where the first burn of a cosmos-based asset deploys a new contract and passes the address along to newProphecyClaim().
  2. Drop the _tokenAddress for eth-based assets as well. Could add another map of (eth-based asset denom => token contract address) and then check it against the lockedFunds mapping that currently holds (eth-based asset token contract address => amount of locked tokens) to validate that funds are available to be unlocked before adding the new claim.

I agree about namespacing conventions. For now, let's just use the proposed peg::atom for cosmos-based assets on Ethereum and peg::dai for eth-based assets on Cosmos.

@musnit

musnit commented Mar 23, 2020

@denalimarsh
1 - This would work, but if we have separate entry points/separate Solidity functions for first vs later burns/locks, are you suggesting that the relayer would then need to distinguish between the two and pick one, or that the relayer would always hit the same entry point and the distinction would happen in Solidity? The latter would be easier for the relayer but would mean it pays extra gas for that first deployment. Also, in terms of gas limits, would the latter have any risks/issues, or should it be fine?
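
To make the first option concrete, here's a rough Go sketch of a relayer that distinguishes the two cases itself - the BridgeBank interface and its method names are hypothetical stand-ins, not the actual contract API:

```go
package main

import "fmt"

// BridgeBank is a hypothetical view of the contract after whitelist removal:
// GetBridgeToken returns the deployed token contract for a cosmos-based
// denom, or an empty string if none exists yet. None of these method names
// come from the actual Solidity code.
type BridgeBank interface {
	GetBridgeToken(denom string) (string, error)
	DeployBridgeToken(denom string) (string, error)
	NewProphecyClaim(denom, tokenAddr string, amount uint64) error
}

// relayBurn sketches option 1: the relayer, not the contract, distinguishes
// the first burn of a cosmos-based asset (which needs a token deployment)
// from later ones.
func relayBurn(bank BridgeBank, denom string, amount uint64) error {
	addr, err := bank.GetBridgeToken(denom)
	if err != nil {
		return err
	}
	if addr == "" {
		// First time this denom crosses the bridge: deploy its token contract.
		if addr, err = bank.DeployBridgeToken(denom); err != nil {
			return err
		}
	}
	return bank.NewProphecyClaim(denom, addr, amount)
}

// memBank is an in-memory stand-in so the sketch runs on its own.
type memBank struct{ tokens map[string]string }

func (m *memBank) GetBridgeToken(d string) (string, error) { return m.tokens[d], nil }
func (m *memBank) DeployBridgeToken(d string) (string, error) {
	m.tokens[d] = "0xnew" // placeholder address
	return m.tokens[d], nil
}
func (m *memBank) NewProphecyClaim(d, a string, amt uint64) error {
	fmt.Printf("claim: denom=%s token=%s amount=%d\n", d, a, amt)
	return nil
}

func main() {
	bank := &memBank{tokens: map[string]string{}}
	_ = relayBurn(bank, "peggyatom", 100) // first burn: deploys, then claims
	_ = relayBurn(bank, "peggyatom", 50)  // later burn: reuses the contract
}
```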

@musnit

musnit commented Apr 13, 2020

Update: the Cosmos SDK only supports denoms with [a-z][0-9] characters and a max length of 16. I'm using "peggy" instead of "peg::" as the prefix.

This also means there is no easy way to distinguish between two different Ethereum tokens with the same symbol/name. Ideally I would have included the Ethereum token contract address in the denom (eg "peg::0x1203403450345::dai") or in denom metadata, but the Cosmos SDK supports neither.

The bridge will be vulnerable to an attacker who duplicates an ERC20 with the same name, mints duplicate coins for themselves, then transfers them across the bridge to the Cosmos side. This vulnerability will need to be fixed in a future version. I'll add this to the README to make it clear.
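
As a rough illustration, a Go sketch of building a denom under those restrictions - the sanitisation rules here are an assumption, not the actual implementation:

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// peggyDenom builds a denom for an Ethereum token symbol under the
// restrictions mentioned above (lowercase letters and digits only, at most
// 16 characters, "peggy" prefix). How the symbol is sanitised and truncated
// is an assumption made for this example.
func peggyDenom(symbol string) string {
	var b strings.Builder
	b.WriteString("peggy")
	for _, r := range strings.ToLower(symbol) {
		if unicode.IsLower(r) || unicode.IsDigit(r) {
			b.WriteRune(r) // drop anything outside [a-z0-9]
		}
	}
	denom := b.String()
	if len(denom) > 16 {
		denom = denom[:16] // enforce the 16-character cap
	}
	return denom
}

func main() {
	fmt.Println(peggyDenom("DAI"))  // peggydai
	fmt.Println(peggyDenom("USDC")) // peggyusdc
}
```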

@okwme
Contributor Author

okwme commented Apr 16, 2020

Another solution would be to keep a registry of Eth-based token denoms stored with their contract address and to create Cosmos-based assets with an incrementing name on collisions. So, for instance, the first time a contract with "Dai" as its denom is moved over, the contract address is stored; the second time a claim is relayed as "Dai" but with a different address, the SDK can distinguish it and mint it as "Dai-2" or something?
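
Something like this rough Go sketch - the registry shape and the suffixed names are illustrative only, and a real version would have to respect the SDK denom rules:

```go
package main

import "fmt"

// denomRegistry sketches the idea above: remember which Ethereum contract
// address first claimed a symbol, and give later contracts with the same
// symbol an incremented name.
type denomRegistry struct {
	byAddress map[string]string // contract address -> assigned denom
	counts    map[string]int    // base symbol -> how many contracts claimed it
}

func newDenomRegistry() *denomRegistry {
	return &denomRegistry{byAddress: map[string]string{}, counts: map[string]int{}}
}

// denomFor returns the stored denom for a contract, assigning a new one
// (with a numeric suffix on collision) the first time an address is seen.
func (r *denomRegistry) denomFor(contractAddr, symbol string) string {
	if d, ok := r.byAddress[contractAddr]; ok {
		return d
	}
	r.counts[symbol]++
	d := symbol
	if n := r.counts[symbol]; n > 1 {
		d = fmt.Sprintf("%s%d", symbol, n)
	}
	r.byAddress[contractAddr] = d
	return d
}

func main() {
	reg := newDenomRegistry()
	fmt.Println(reg.denomFor("0xaaa", "dai")) // dai
	fmt.Println(reg.denomFor("0xbbb", "dai")) // dai2 (different contract, same symbol)
	fmt.Println(reg.denomFor("0xaaa", "dai")) // dai (already registered)
}
```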

@musnit

musnit commented Apr 16, 2020

@okwme Yep, this would effectively mean implementing a custom, bridge-specific token metadata module. Ideally the Cosmos SDK would provide that, and I think there is already talk of it?

Anyway, we could either build a custom metadata module into the bridge or into Cosmos itself as part of the next version, depending on feedback from the SDK team.

@okwme
Contributor Author

okwme commented Apr 16, 2020

Right, right, I remember that discussion now. Sounds good, I think this solution works until then.
