
Ivan's review on ONEWallet contract #3

Status: Closed (1 commit)

Conversation

polymorpher (Owner)

No description provided.


// IH: you can drop the nonce stuff if you implement commits as mapping (hash => Commit). After reveal you might delete the Commit entry; the timestamp's freshness will ensure protection against replay in the future.
Owner Author

Relying on deletion (while dropping the nonce) would seem insufficient. Consider a scenario in which a commit is revealed during the same 30-second interval, but the interval spans multiple blocks (recall Harmony generates a block every 2 seconds). If a commit is deleted after the corresponding reveal completes and there is no nonce tracking, an attacker may immediately recommit another transaction using a correct proof but with modified parameters (e.g. a different transfer destination and amount), then immediately reveal it. Since the commit is already deleted and no nonce is used, the smart contract would have no way to identify that this new transaction should be rejected.

Collaborator

Yes, you are right. I had somehow already assumed HOTPs, where this cannot happen by default, since the smart contract remembers the IDs of used OTPs and accepts only unused ones. But going back to TOTPs: I'd still suggest dropping the nonce logic (as too verbose) and keeping things simpler. So, to resolve the issue of deleting from the map of commits, I'd use a dedicated external cleanup method (called once in a while), together with another EnumerableMap (EOTPs => timestamps) that tracks the EOTPs to delete and their expiration timestamps. In this method, the contract would iterate over the EOTPs in the map, check each EOTP's time validity against block.timestamp, and delete expired entries from both maps to save storage space. Each revealX() would additionally need to insert data into the second map.
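A minimal JavaScript model of the proposed cleanup may make this concrete. All names here are illustrative (the real method would be Solidity iterating an EnumerableMap), and for simplicity both maps are keyed by the same hash:

```javascript
// Model of the proposed sweep: a map of hash => Commit plus a second map
// of hash => expiry timestamp, cleaned by a periodic cleanup call.
// Illustrative names only, not the actual contract's.
function cleanup(commits, eotpExpiry, now) {
  for (const [key, expiresAt] of [...eotpExpiry]) {
    if (expiresAt <= now) {
      eotpExpiry.delete(key); // free the tracking entry
      commits.delete(key);    // free the commit storage
    }
  }
}

const commits = new Map([["e1", { ts: 100 }], ["e2", { ts: 170 }]]);
const eotpExpiry = new Map([["e1", 130], ["e2", 200]]);
cleanup(commits, eotpExpiry, 150);
// "e1" expired at t=130 and is removed; "e2" survives
```

In Solidity the iteration would go over the EnumerableMap's keys by index, and revealX() would insert into the second map at reveal time.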

Owner Author

This is feasible. One issue is that the caller of the external method would have to pay the gas for cleanup; the attacker should pay for that, if an attack exists. But this can be mitigated by making the method internal and calling it sporadically as part of the commit process.

But the primary purpose of the nonce is to enable multiple operations per interval, not to track or manage commits. Each nonce value corresponds to an EOTP inside an interval. Each interval has one (T)OTP, but maxOperationsPerInterval EOTPs (one per nonce). The current method does not track used EOTPs; it simply requires a transaction to use a higher nonce than before (which must therefore correspond to a different EOTP).

The mechanism for managing and tracking commits is independent of the nonce.
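The nonce rule described above can be sketched in JavaScript (hypothetical names; the contract's actual bookkeeping differs in detail):

```javascript
// Sketch of the per-interval nonce rule: each 30s interval has one OTP
// but maxOperationsPerInterval EOTPs; a reveal is accepted only if its
// nonce is strictly higher than the last nonce used in that interval.
// Names are illustrative, not the actual contract's.
const nonces = new Map(); // interval index => highest nonce used so far

function acceptReveal(index, nonce) {
  const last = nonces.get(index);
  if (last !== undefined && nonce <= last) return false; // EOTP reuse rejected
  nonces.set(index, nonce);
  return true;
}

acceptReveal(42, 0); // true: first operation in interval 42
acceptReveal(42, 1); // true: different EOTP, higher nonce
acceptReveal(42, 1); // false: replayed EOTP is rejected
```

This is why used EOTPs need not be tracked individually: monotonicity of the nonce within an interval already excludes reuse.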

@@ -38,16 +39,16 @@ contract ONEWallet {
height = height_;
interval = interval_;
t0 = t0_;
lifespan_ = lifespan_;
lifespan_ = lifespan_; // IH: what are the expected values?
Owner Author

It would be (validity time in seconds / interval) where interval should be 30 for standard uses. I intend to use a default value of 1 year for validity time in the client.
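As a worked example of that formula, using the values stated above:

```javascript
// lifespan = validity time in seconds / interval,
// with interval = 30s and a one-year default validity.
const interval = 30;
const validitySeconds = 365 * 24 * 3600; // 31,536,000
const lifespan = validitySeconds / interval;
console.log(lifespan); // 1051200 intervals (i.e. Merkle tree leaves, before any nonce slicing)
```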

Collaborator

Isn't that too short? As a user, I would not be happy to have to update the wallet every year. I'd prefer at least five years; optimally, ten.

Owner Author

One year is more than enough for the basic wallet. The feedback I got is to make it even shorter, since we are going to upgrade the wallet pretty soon anyway. Later we can provide a way to refresh wallet lifespan after the user activates another authentication factor.

@@ -64,30 +65,33 @@ contract ONEWallet {
(uint32 ct, bool completed) = _findCommit(hash);
require(ct == 0 && !completed, "Commit already exists");
Commit memory nc = Commit(hash, uint32(block.timestamp), false);
commits.push(nc);
commits.push(nc); // IH: would not it be better to have mapping of commits?
Owner Author

We discussed this on Telegram. To include the discussion:

I had an early version of commit with a map but quickly scrapped it. The issue is that an attacker can commit a lot of garbage (at the expense of their own gas) and make the contract storage unnecessarily large, and unbounded. Thus, there needs to be some mechanism for removing the map keys. Since we can't iterate a map in Solidity, we have to use an array to track it. Therefore, we have Commit[]. Nice catch with uint8 - forgot to fix that.

You replied

For these cases, EnumerableMap from the OpenZeppelin library is useful. Maybe you can check it.

I checked EnumerableMap. It seems they rely on using an array to track the keys all the same. The contracts are quite heavy: https://github.com/OpenZeppelin/openzeppelin-contracts/blob/master/contracts/utils/structs/EnumerableSet.sol and https://github.com/OpenZeppelin/openzeppelin-contracts/blob/master/contracts/utils/structs/EnumerableMap.sol

So I think using a simple array to track and clean the commits is more gas-efficient than using EnumerableMap. Besides, in C++ and Java, looking up an element with a linear search over a small array is actually much faster than using a map structure. I don't know whether the same applies to Solidity and the EVM, but I suspect it behaves similarly.
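The array-based tracking discussed above can be sketched as follows (a JavaScript model with illustrative names; the contract's actual Commit struct and cleanup differ):

```javascript
// Sketch of array-based commit tracking: linear lookup by hash, plus a
// cleanup pass that drops entries older than a cutoff, keeping storage bounded.
const commits = []; // entries: { hash, timestamp, completed }

function findCommit(hash) {
  for (const c of commits) if (c.hash === hash) return c; // linear search
  return null;
}

function cleanupCommits(now, maxDelay) {
  // iterate backwards so splice() does not skip elements
  for (let i = commits.length - 1; i >= 0; i--) {
    if (now - commits[i].timestamp > maxDelay) commits.splice(i, 1);
  }
}

commits.push({ hash: "h1", timestamp: 100, completed: false });
commits.push({ hash: "h2", timestamp: 160, completed: false });
cleanupCommits(200, 60); // "h1" is 100s old (> 60) and removed; "h2" remains
```

The tradeoff raised in the next comment is that linear search cost grows with the number of uncleaned commits, which an attacker can inflate.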

Collaborator

Linear search might be a problem: the user might run out of gas thanks to an attacker sabotaging this functionality, since the attacker can commit enough entries to guarantee out-of-gas for the user. That is why I'd prefer enumerable maps, which do not suffer from this problem, and which moreover are verified by the community and OpenZeppelin - their biggest advantage. Regarding gas under normal circumstances, I'd say it will be comparable to the current approach, but we would have to try it to get some clue.

Owner Author

Okay, I will add a lookup map. I want to minimize the use of external dependencies at the core level, unless we have complex logic that really requires these libraries. They make debugging and analysis harder, and use more gas.

}

// TODO (@polymorpher): during reveal, reject if index corresponds to a timestamp that is same as current block.timestamp, so we don't let the client accidentally leak EOTP that is still valid at the current time, which an attacker could use to commit and reveal a new transaction. This introduces a limitation that there would be a OTP_INTERVAL (=30 seconds by default) delay between commit and reveal, which translates to an unpleasant user experience. To fix this, we may slice OTP_INTERVAL further and use multiple EOTP per OTP_INTERVAL, and index the operations within an interval by an operation id. For a slice of size 16, we will need to generate a Merkle Tree 16 times as large. In this case, we would reject transactions only if both (1) the index corresponds to a timestamp that is same as current block.timestamp, and, (2) the operation id is not less than the id corresponding to the current timestamp's slice
// IH: what you state above just supports HOTPs instead of TOTPs, besides other advantages.
Owner Author

We also discussed this in the chat. As I mentioned

By the way this TODO was already done. Forgot to remove the comment.

Nonetheless, I agree switching to HOTP would eliminate this problem, since we would allow each OTP (and associated proof) to be used exactly once. But the nonce mechanism implemented here also achieves the same result. My main concerns about using a HOTP-based mechanism in the single-factor setting are the following:

(1) Since we have to allow synchronization of counters between the smart contract, the authenticator, and the client, we would also have to always accept HOTPs (and their proofs) at later indices. This means permission to operate the smart contract wallet is not evenly divided into time slices, as it is in the current TOTP-based mechanism. Instead, each index would essentially grant permission to operate the wallet once, and once a later index is used, it would invalidate every earlier index, rendering them useless.

(2) The client needs to load all hashed, salted HOTPs in proximity to the expected counter to effectively synchronize the counter with the authenticator. In the TOTP-based mechanism, the client only needs to load one or two by pinpointing the current time. The difference is not significant at present, but if in the future we store the hashed, salted OTPs in more secure storage, the difference would mean the HOTP approach is more vulnerable at the client.

(3) HOTP is much less commonly used. People are much less accustomed to operating with HOTPs, and the chance of human error is not low. On Google Authenticator, once the current HOTP is displayed, it stays displayed for a long time unless the user switches away from the app or manually clicks the refresh button. This would be problematic if the user is using the wallet on a computer and the authenticator on a phone. The user likely cannot remember whether the currently displayed code has already been used (keep in mind the user may have many other OTPs displayed in the same UI), would type in a code that is already used, find it rejected, then hit the refresh button and try again. This introduces non-trivial friction and annoyance which should be avoided.

Collaborator

Regarding (1), I am not sure you are correct. It would be correct if you were using just one hash chain, which I explained in SmartOTPs. However, if you have a tree, it allows an independent evaluation of each OTP (and its operation) regardless of the others - it just requires two stages and saving the already-used IDs. The problem here might be the clutter of storing the IDs. Anyway, we might clean this up in windows of e.g. 100 and disallow operation IDs/EOTPs with a lower index than the beginning of the current window.

Collaborator

(2) seems incorrect to me. The client needs to store (almost) the same hashes of EOTPs and Merkle tree nodes in both the HOTP and TOTP cases for the purpose of constructing the proofs; so it makes no difference (or very little, since TOTPs expire, which on the other hand can be compensated by a much smaller number of HOTPs).

Collaborator

On (3), you are right. Users are definitely much more familiar with TOTPs. However, in the case of HOTPs, the client can inform the user that an OTP was already used and advance to the next one (x times) if this happens. On the other hand, this is a very rare situation; I'd expect that under normal circumstances the automatic switching of GA will work just as well. It would be good to verify this in a user study, perhaps cross-comparing both solutions on two evaluation groups (one gets the HOTP wallet first and then the TOTP wallet, while the other group does it vice versa).

Owner Author

On (1), keep in mind we are still limited by the (Google) authenticator, which does not take variable input when generating a code. The user may randomly refresh (hence increment counters) on the authenticator, introducing an out-of-sync problem and forcing us to allow skipping to a later EOTP (thus an attacker only needs to crack a later one, rather than being limited to a time window).

On (2), you would be correct if we consider the client as a whole - i.e. either all information at the client is exposed, or nothing is. I think there should exist mechanisms by which we can store the majority of the information (e.g. leaves not relevant to the current time) in secure storage (e.g. a vault or enclave), and load only the minimum needed as the user executes an operation. The mechanism is still theoretical in the browser, but it is quite practical if we venture into the realm of native applications (on either desktop or mobile).

}

// TODO (@polymorpher): during reveal, reject if index corresponds to a timestamp that is same as current block.timestamp, so we don't let the client accidentally leak EOTP that is still valid at the current time, which an attacker could use to commit and reveal a new transaction. This introduces a limitation that there would be a OTP_INTERVAL (=30 seconds by default) delay between commit and reveal, which translates to an unpleasant user experience. To fix this, we may slice OTP_INTERVAL further and use multiple EOTP per OTP_INTERVAL, and index the operations within an interval by an operation id. For a slice of size 16, we will need to generate a Merkle Tree 16 times as large. In this case, we would reject transactions only if both (1) the index corresponds to a timestamp that is same as current block.timestamp, and, (2) the operation id is not less than the id corresponding to the current timestamp's slice
// IH: what you state above just supports HOTPs instead of TOTPs, besides other advantages.
// IH: combining index with nonce is unnecessarily complicated to me. I would split them.
Owner Author

Together, they represent one position (index) in the leaves. It also saves some bytes, since they are implicitly packed together in a uint32.
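A hypothetical sketch of this packing, in JavaScript. The actual contract's layout may differ; this just shows why indexWithNonce can be one number: with maxOperationsPerInterval EOTP leaves per interval, the leaf position is index * perInterval + nonce.

```javascript
// Illustrative packing of (interval index, nonce) into one uint32-sized
// leaf position, and the inverse. perInterval is assumed to be 16 here.
const perInterval = 16;

const pack = (index, nonce) => index * perInterval + nonce;
const unpack = (iwn) => ({
  index: Math.floor(iwn / perInterval),
  nonce: iwn % perInterval,
});

const iwn = pack(42, 5);
console.log(iwn);         // 677
console.log(unpack(iwn)); // { index: 42, nonce: 5 }
```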

Collaborator

AFAIK, the nonce does not belong to the index of the tree. The variable is even called indexWithNonce, which indicates that these are two different things. So I do not see much value in merging them; for me, it is definitely harder to read and review.

Owner Author

Okay, I will name the variables better. The nonce does belong to the index in the tree. I should have updated the spec first to reduce confusion, but was running short of time to meet the development timeline. I will do it over the weekend.

bytes memory packed = bytes.concat(packedNeighbors,
bytes32(bytes4(indexWithNonce)), eotp, bytes32(bytes20(address(dest))), bytes32(amount));
bytes32 commitHash = keccak256(bytes.concat(packed));
bytes32 commitHash = keccak256(bytes.concat(packed)); // IH: why do you need bytes.concat again?
Owner Author

bytes.concat concatenates the bytes of each parameter together, without any padding, so we can check that the parameters in the reveal call are the same as what was committed before. bytes.concat is chosen over other functions because it is simple and perhaps the easiest to replicate in JavaScript, whereas alternatives such as abi.encodePacked introduce padding and many other complicated behaviors which are unnecessary for our use case.
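A sketch of replicating this packing on the client side in plain JavaScript (no web3 dependency). Each parameter is widened to a 32-byte word, mirroring the bytes32(...) casts in the Solidity above: bytesN casts (bytes4, bytes20) right-pad with zeros, while uint values are big-endian and left-padded. The keccak256 step is omitted, since it needs an external library; the values used are illustrative.

```javascript
function padRight(bytes) { // bytes32(bytes4(...)) style: zeros appended
  const out = new Uint8Array(32);
  out.set(bytes, 0);
  return out;
}

function padLeft(bytes) { // bytes32(uint256) style: big-endian, zeros prepended
  const out = new Uint8Array(32);
  out.set(bytes, 32 - bytes.length);
  return out;
}

function concatWords(...words) { // bytes.concat equivalent: no padding in between
  const out = new Uint8Array(words.length * 32);
  words.forEach((w, i) => out.set(w, i * 32));
  return out;
}

const indexWithNonce = padRight(Uint8Array.from([0, 0, 2, 165])); // bytes4 of 677
const amount = padLeft(Uint8Array.from([13]));                    // uint256 13
const packed = concatWords(indexWithNonce, amount);
console.log(packed.length); // 64
```

The resulting byte string would then be hashed (keccak256) to obtain the commit hash.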

Collaborator

Hmm, OK, but it feels a bit strange at first glance. BTW, I have been using abi.encodePacked and never had any problems with it. Also, note that you are working in memory, so potential gas savings from avoiding padding in EVM memory are negligible, since memory is very cheap. You could cross-compare with abi.encodePacked in terms of gas.

Owner Author

Just did a test: concat costs slightly less gas (though the difference is really small). I am in favor of concat because it is more predictable and much less complex. On the JavaScript side, the byte manipulation is also much simpler and cleaner than using external dependencies (the web3 ABI library).

require(!completed, "Commit is already completed");
// this should not happen (since old commit should be cleaned up already)
require(_isRevealTimely(ct), "Reveal is too late. Please re-commit");
require(_isRevealTimely(ct), "Reveal is too late. Please re-commit"); // IH: What might happen if we increased REVEAL_MAX_DELAY 10x or 100x? I do not see any advantage of keeping it short, since a MITM attacker gets only the extended 32B OTP, while an attacker that tampers with the client can do anything regardless of this length. This brings me again to using HOTPs.
Owner Author

That would make the storage for commits very large when an attacker commits a lot of garbage. It is not much of a problem if the storage is temporary, since the attacker paid for it, but it must not become permanent (or stay too long, as with a 10x or 100x REVEAL_MAX_DELAY), otherwise the size of the contract grows unbounded without producing additional value. In practice, the reveal should happen immediately after a commit; it is not reasonable for a client to wait long after committing an operation, aside from bad network conditions.

Collaborator

It's at the attacker's expense anyway, so the attacker is not incentivized to do it, since he pays for it. And the Harmony blockchain wants users who pay for transactions. Hence, to me, a growing blockchain is an unrelated issue - such an attacker will surely not occur (unless something is improperly designed, which we can influence).

Owner Author

On second thought, perhaps we should allow an "untimely" reveal. There are legitimate use cases for this in dApps: sign a transaction first, execute it later. The smart contract wallet equivalent would be to commit (with a signature) first and reveal later.

@@ -263,7 +271,7 @@ contract ONEWallet {
nonces[index] = v + 1;
}

// same output as abi.encodePacked(x), but the behavior of abi.encodePacked is sort of uncertain
// same output as abi.encodePacked(x), but the behavior of abi.encodePacked is sort of uncertain - IH: this is surprising; in which sense?
Owner Author

Based on the spec at https://docs.soliditylang.org/en/latest/abi-spec.html#non-standard-packed-mode , arrays may get padded with abi.encodePacked. That is not the case here, but I do not want to rely on this function, since its behavior may change in the future. Our use case is pretty simple: given a list of parameters, convert them into 32-byte elements and concatenate them. The whole ABI encoding spec seems overkill for this use case.
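The padding rule cited from the spec can be illustrated with plain byte counts (modeled here without a real ABI encoder): in abi.encodePacked, value-type arguments are tightly packed, but the elements of an array argument are each padded to 32 bytes.

```javascript
// Size of three uint8 values under abi.encodePacked, depending on whether
// they are passed as three uint8 arguments or as one uint8[] argument.
const values = [1, 2, 3];

const packedAsValues = values.length * 1;  // uint8 args: 1 byte each
const packedAsArray  = values.length * 32; // uint8[] arg: each element padded to 32 bytes

console.log(packedAsValues); // 3
console.log(packedAsArray);  // 96
```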

Collaborator

I am sure its behavior will not change; if anything like that were about to happen, it would not be within the same version of Solidity, so it would not impact us. This function provides unambiguous concatenation, and I've seen it in many other contracts.

Owner Author

Let's stick to the simpler expression on both the contract and JavaScript sides for now. The JavaScript code for abi.encodePacked looks quite messy.

function revealTransfer(bytes32[] calldata neighbors, uint32 indexWithNonce, bytes32 eotp, address payable dest, uint256 amount) external
isCorrectProof(neighbors, indexWithNonce, eotp)
returns (bool)
{
bytes memory packedNeighbors = _pack(neighbors);
bytes memory packedNeighbors = _pack(neighbors); // IH: no need to put the Merkle proof into the hash (it is either correct and matches the OTP or not, hence integrity is implicit)
Owner Author

That's true. However, if we remove the neighbors from the packed bytes computation, the only issue seems to be that an attacker could commit the exact same transaction ahead of the user, resulting in the user receiving an annoying "Commit already exists" rejection when the user truly commits it. This may result in a bad user experience. Since only neighbors[0] is secret, we could keep that inside the packed bytes instead of the whole neighbors array. This way the attacker would have no way of committing the same transaction ahead of the user, since they would not know the value of neighbors[0].

Collaborator

This attack is unrealistic. Whoever commits hash(something, eotp) is just committing the right stuff - it can even be the attacker (who gets no value from it). Then in reveal(), the neighbors (representing the Merkle proof) related to the eotp are passed and checked. Also, neighbors[0] is not secret, since it is the hash of the sibling OTP. So, please check this.

Owner Author

You are right. I somehow thought the EOTP would be the same for two different nonces; it is not, so the EOTP alone is sufficient. But based on our discussion (outside this PR), we are going to rewrite the commit-reveal logic anyway to cure one of the potential attacks.

@hashmesan left a comment:

I see the introduction of indexWithNonce. Some comment and justification for it would be helpful; I don't understand it yet. Also, transfer will need to support other ERC-20 tokens in the future as well.

@polymorpher (Owner Author)

Okay. I will update the specs soon to properly document new concepts. Let's wrap up this review and move on to newer versions, since many things will change.

@polymorpher (Owner Author)

Let's reopen this when Ivan is back from holiday

@polymorpher polymorpher closed this Jul 6, 2021
HYDEAI pushed a commit to HYDEAI/one-wallet that referenced this pull request Aug 29, 2021
Started Apple Watch Otp Process.
polymorpher added a commit that referenced this pull request Dec 24, 2021

3 participants