Add mandatory deposit index ordering #594

Merged (3 commits), Feb 9, 2019
Changes from 2 commits
5 changes: 5 additions & 0 deletions specs/core/0_beacon-chain.md
@@ -526,6 +526,7 @@ The following data structures are defined as [SimpleSerialize (SSZ)](https://git
# Ethereum 1.0 chain data
'latest_eth1_data': Eth1Data,
'eth1_data_votes': [Eth1DataVote],
    'deposit_index': 'uint64',
}
```
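
For orientation, here is a minimal Python sketch of what the new field tracks; the class name and plain Python types are illustrative stand-ins only, not spec code (the real `BeaconState` is an SSZ container as shown above).

```python
# Illustrative sketch only; the real BeaconState is an SSZ container in the spec.
from dataclasses import dataclass, field
from typing import Any, List


@dataclass
class Eth1ChainDataSketch:
    latest_eth1_data: Any = None                               # most recent Eth1Data (stand-in type)
    eth1_data_votes: List[Any] = field(default_factory=list)   # pending Eth1DataVote objects (stand-in type)
    # uint64 counter: index of the next deposit-contract deposit to be processed.
    # Block processing must consume deposits in exactly this order (see below).
    deposit_index: int = 0
```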

@@ -1478,6 +1479,7 @@ def get_initial_beacon_state(initial_validator_deposits: List[Deposit],
# Ethereum 1.0 chain data
latest_eth1_data=latest_eth1_data,
eth1_data_votes=[],
deposit_index=0
Contributor:

This should default to `len(initial_validator_deposits)`.

Contributor Author:

Done. Though note that this introduces one issue: if there are bad deposits before launch, there will be a bunch of deposits that have to be reprocessed before any new deposits can be processed. I guess it's fine because they'll all fail due to the pubkey uniqueness check?

Contributor:

I don't think so. This will ensure that we don't have to attempt to process any failed deposits -- thus starting from the next deposit after the initial ones.

I'm assuming that `initial_validator_deposits` includes invalid ones that will just get skipped over in `process_deposit` below.

Contributor:

Or am I missing something?

Contributor Author:

Ah, I see: if `initial_validator_deposits` includes invalid ones, then there's no issue.

)

# Process initial deposits
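
As a rough illustration of the initialization question in the thread above, here is a hedged sketch; the helper name is hypothetical, and only `initial_validator_deposits` comes from the diff.

```python
# Sketch only: the genesis value for state.deposit_index discussed above.
def initial_deposit_index(initial_validator_deposits) -> int:
    # Counting the genesis deposits as already consumed means the first in-block
    # deposit must carry index len(initial_validator_deposits). Starting from 0
    # would instead force every pre-launch deposit (including invalid ones) to be
    # replayed before any new deposit could be processed.
    return len(initial_validator_deposits)
```
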
@@ -1717,6 +1719,7 @@ Verify that `len(block.body.deposits) <= MAX_DEPOSITS`.
For each `deposit` in `block.body.deposits`:

* Let `serialized_deposit_data` be the serialized form of `deposit.deposit_data`: 8 bytes for `deposit_data.amount`, followed by 8 bytes for `deposit_data.timestamp`, and then the `DepositInput` bytes. That is, it should match the `deposit_data` in the [Ethereum 1.0 deposit contract](#ethereum-10-deposit-contract) whose hash was placed into the Merkle tree.
* Verify that `deposit.index == state.deposit_index`.
* Verify that `verify_merkle_branch(hash(serialized_deposit_data), deposit.branch, DEPOSIT_CONTRACT_TREE_DEPTH, deposit.index, state.latest_eth1_data.deposit_root)` is `True`.

```python
@@ -1745,6 +1748,8 @@ process_deposit(
)
```

* Set `state.deposit_index += 1`.
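
Putting the per-deposit steps above together, here is a rough sketch in the spec's pseudocode style. `serialize_deposit_data` is a stand-in name for "the serialized form of `deposit.deposit_data`", the `process_deposit` call is abbreviated, and the other helpers and constants are the ones referenced in the spec text above.

```python
# Sketch only: the per-block deposit loop described above, in the spec's style.
def process_deposits(state: BeaconState, block: BeaconBlock) -> None:
    assert len(block.body.deposits) <= MAX_DEPOSITS
    for deposit in block.body.deposits:
        # Stand-in for "the serialized form of deposit.deposit_data" described above.
        serialized_deposit_data = serialize_deposit_data(deposit.deposit_data)

        # New in this PR: deposits must be consumed strictly in contract order.
        assert deposit.index == state.deposit_index

        assert verify_merkle_branch(
            hash(serialized_deposit_data),
            deposit.branch,
            DEPOSIT_CONTRACT_TREE_DEPTH,
            deposit.index,
            state.latest_eth1_data.deposit_root,
        )

        # Call process_deposit with the arguments shown in the spec fragment above.
        process_deposit(state, deposit)

        # New in this PR: advance the pointer after each processed deposit.
        state.deposit_index += 1
```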

##### Exits

Verify that `len(block.body.exits) <= MAX_EXITS`.