Add Bitlist and Bitvector #1224

Merged
31 commits, merged Jun 28, 2019

Commits (changes shown from all commits)
02f6ba3
Add Bitvector and Bitlist
dankrad Jun 27, 2019
23c7435
Add some tests and fix pack
dankrad Jun 27, 2019
494984f
Fix linting errors
dankrad Jun 27, 2019
d641e94
Cleanups
JustinDrake Jun 27, 2019
67c50cb
Changed attestation and custody bitfields
dankrad Jun 27, 2019
becb7a0
justification_bitfield -> Bitvector[4]
dankrad Jun 27, 2019
80c680e
Phase 1 to Bitvector/Bitlist
dankrad Jun 27, 2019
f57387c
Justification bitvector length to constant
dankrad Jun 27, 2019
a5154da
suggestion to implement bitfield like
protolambda Jun 27, 2019
b574a58
Remove not working py-ssz decoder tests
dankrad Jun 27, 2019
8ed638b
Linter fixes
dankrad Jun 27, 2019
2cb23d3
Merge remote-tracking branch 'origin/bitfield-suggestion' into dankra…
dankrad Jun 27, 2019
afd86f7
Fixes in ssz impl
dankrad Jun 27, 2019
93ce168
More linting fixes
dankrad Jun 27, 2019
7adf07e
A few more tests for Bitvector/Bitlist
dankrad Jun 27, 2019
237b41d
Slice notation for justification_bitfield shift
dankrad Jun 27, 2019
2677d23
Some more (shorter) Bitvector and Bitlist tests
dankrad Jun 27, 2019
2622548
Merge remote-tracking branch 'origin/dev' into dankrad-patch-8
dankrad Jun 28, 2019
196ac42
Cleanup naming
JustinDrake Jun 28, 2019
6f9d374
Cleanups
JustinDrake Jun 28, 2019
e36593b
Add comment
JustinDrake Jun 28, 2019
05842f8
Update 0_beacon-chain.md
JustinDrake Jun 28, 2019
5ff13dd
be explicit about limiting for HTR and chunk padding
protolambda Jun 28, 2019
128bbbc
fix slicing, and support partial slice bounds
protolambda Jun 28, 2019
25db397
fix line length lint problem in checkpoint
protolambda Jun 28, 2019
5f0e583
resolved merge conflicts, take attesters seq->set change from dev, ta…
protolambda Jun 28, 2019
fa84c49
Update specs/core/0_beacon-chain.md
dankrad Jun 28, 2019
6a2d2c8
Bitlist for attestation doc
dankrad Jun 28, 2019
4dcb47e
Update test_libs/pyspec/eth2spec/test/phase_0/block_processing/test_p…
dankrad Jun 28, 2019
be04eb2
Change copy style, and remove deepcopy import
dankrad Jun 28, 2019
4f31207
reword merkleize with limit / length
protolambda Jun 28, 2019
12 changes: 6 additions & 6 deletions scripts/build_spec.py
@@ -25,8 +25,8 @@
signing_root,
)
from eth2spec.utils.ssz.ssz_typing import (
Bit, Bool, Container, List, Vector, Bytes, uint64,
Bytes4, Bytes32, Bytes48, Bytes96,
bit, boolean, Container, List, Vector, uint64,
Bytes4, Bytes32, Bytes48, Bytes96, Bitlist, Bitvector,
)
from eth2spec.utils.bls import (
bls_aggregate_pubkeys,
@@ -52,8 +52,8 @@
is_empty,
)
from eth2spec.utils.ssz.ssz_typing import (
Bit, Bool, Container, List, Vector, Bytes, uint64,
Bytes4, Bytes32, Bytes48, Bytes96,
bit, boolean, Container, List, Vector, Bytes, uint64,
Bytes4, Bytes32, Bytes48, Bytes96, Bitlist, Bitvector,
)
from eth2spec.utils.bls import (
bls_aggregate_pubkeys,
@@ -174,8 +174,8 @@ def combine_constants(old_constants: Dict[str, str], new_constants: Dict[str, st


ignored_dependencies = [
'Bit', 'Bool', 'Vector', 'List', 'Container', 'Hash', 'BLSPubkey', 'BLSSignature', 'Bytes', 'BytesN'
'Bytes4', 'Bytes32', 'Bytes48', 'Bytes96',
'bit', 'boolean', 'Vector', 'List', 'Container', 'Hash', 'BLSPubkey', 'BLSSignature', 'Bytes', 'BytesN'
'Bytes4', 'Bytes32', 'Bytes48', 'Bytes96', 'Bitlist', 'Bitvector',
'uint8', 'uint16', 'uint32', 'uint64', 'uint128', 'uint256',
'bytes' # to be removed after updating spec doc
]
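The renamed SSZ primitives (`Bit` -> `bit`, `Bool` -> `boolean`) and the new `Bitlist`/`Bitvector` imports above are what the rest of this diff builds on: a `Bitvector[N]` is a fixed-length sequence of exactly N bits, a `Bitlist[N]` a variable-length sequence of at most N bits. A minimal stand-in sketch of those semantics in plain Python; the real classes live in `eth2spec.utils.ssz.ssz_typing` and their constructor API may differ:

```python
# Illustrative stand-ins only; not the actual ssz_typing implementation.
from typing import List, Sequence


class Bitvector:
    """Fixed-length sequence of bits: always exactly `length` bits long."""
    def __init__(self, length: int, bits: Sequence[int] = ()):
        assert len(bits) in (0, length)
        self.bits: List[bool] = [bool(b) for b in bits] or [False] * length

    def __getitem__(self, i: int) -> bool:
        return self.bits[i]

    def __setitem__(self, i: int, value: int) -> None:
        self.bits[i] = bool(value)

    def __len__(self) -> int:
        return len(self.bits)


class Bitlist:
    """Variable-length sequence of bits, bounded above by `limit` bits."""
    def __init__(self, limit: int, bits: Sequence[int]):
        assert len(bits) <= limit
        self.bits: List[bool] = [bool(b) for b in bits]

    def __getitem__(self, i: int) -> bool:
        return self.bits[i]

    def __len__(self) -> int:
        return len(self.bits)


justification = Bitvector(4)  # cf. justification_bits: Bitvector[JUSTIFICATION_BITS_LENGTH]
justification[0] = 1
aggregation = Bitlist(2048, [1, 0, 1])  # cf. aggregation_bits: Bitlist[MAX_INDICES_PER_ATTESTATION]; limit here is arbitrary
assert len(aggregation) == 3 and aggregation[2] and not aggregation[1]
```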
94 changes: 32 additions & 62 deletions specs/core/0_beacon-chain.md
@@ -81,8 +81,6 @@
- [`bytes_to_int`](#bytes_to_int)
- [`get_total_balance`](#get_total_balance)
- [`get_domain`](#get_domain)
- [`get_bitfield_bit`](#get_bitfield_bit)
- [`verify_bitfield`](#verify_bitfield)
- [`convert_to_indexed`](#convert_to_indexed)
- [`validate_indexed_attestation`](#validate_indexed_attestation)
- [`is_slashable_attestation_data`](#is_slashable_attestation_data)
@@ -192,6 +190,7 @@ The following values are (non-configurable) constants used throughout the specif
| `MIN_PER_EPOCH_CHURN_LIMIT` | `2**2` (= 4) |
| `CHURN_LIMIT_QUOTIENT` | `2**16` (= 65,536) |
| `SHUFFLE_ROUND_COUNT` | `90` |
| `JUSTIFICATION_BITS_LENGTH` | `4` |

* For the safety of crosslinks, `TARGET_COMMITTEE_SIZE` exceeds [the recommended minimum committee size of 111](https://vitalik.ca/files/Ithaca201807_Sharding.pdf); with sufficient active validators (at least `SLOTS_PER_EPOCH * TARGET_COMMITTEE_SIZE`), the shuffling algorithm ensures committee sizes of at least `TARGET_COMMITTEE_SIZE`. (Unbiasable randomness with a Verifiable Delay Function (VDF) will improve committee robustness and lower the safe minimum committee size.)

@@ -306,7 +305,7 @@ class Validator(Container):
pubkey: BLSPubkey
withdrawal_credentials: Hash # Commitment to pubkey for withdrawals and transfers
effective_balance: Gwei # Balance at stake
slashed: Bool
slashed: boolean
# Status epochs
activation_eligibility_epoch: Epoch # When criteria for activation were met
activation_epoch: Epoch
@@ -344,7 +343,7 @@ class AttestationData(Container):
```python
class AttestationDataAndCustodyBit(Container):
data: AttestationData
custody_bit: Bit # Challengeable bit (SSZ-bool, 1 byte) for the custody of crosslink data
custody_bit: bit # Challengeable bit (SSZ-bool, 1 byte) for the custody of crosslink data
```

#### `IndexedAttestation`
@@ -361,7 +360,7 @@ class IndexedAttestation(Container):

```python
class PendingAttestation(Container):
aggregation_bitfield: Bytes[MAX_INDICES_PER_ATTESTATION // 8]
aggregation_bits: Bitlist[MAX_INDICES_PER_ATTESTATION]
data: AttestationData
inclusion_delay: Slot
proposer_index: ValidatorIndex
@@ -428,9 +427,9 @@ class AttesterSlashing(Container):

```python
class Attestation(Container):
aggregation_bitfield: Bytes[MAX_INDICES_PER_ATTESTATION // 8]
aggregation_bits: Bitlist[MAX_INDICES_PER_ATTESTATION]
data: AttestationData
custody_bitfield: Bytes[MAX_INDICES_PER_ATTESTATION // 8]
custody_bits: Bitlist[MAX_INDICES_PER_ATTESTATION]
signature: BLSSignature
```

@@ -528,7 +527,7 @@ class BeaconState(Container):
previous_crosslinks: Vector[Crosslink, SHARD_COUNT] # Previous epoch snapshot
current_crosslinks: Vector[Crosslink, SHARD_COUNT]
# Finality
justification_bitfield: uint64 # Bit set for every recent justified epoch
justification_bits: Bitvector[JUSTIFICATION_BITS_LENGTH] # Bit set for every recent justified epoch
previous_justified_checkpoint: Checkpoint # Previous epoch snapshot
current_justified_checkpoint: Checkpoint
finalized_checkpoint: Checkpoint
@@ -866,13 +865,14 @@ def get_crosslink_committee(state: BeaconState, epoch: Epoch, shard: Shard) -> S
### `get_attesting_indices`

```python
def get_attesting_indices(state: BeaconState, data: AttestationData, bitfield: bytes) -> Set[ValidatorIndex]:
def get_attesting_indices(state: BeaconState,
data: AttestationData,
bits: Bitlist[MAX_INDICES_PER_ATTESTATION]) -> Set[ValidatorIndex]:
"""
Return the set of attesting indices corresponding to ``data`` and ``bitfield``.
"""
committee = get_crosslink_committee(state, data.target.epoch, data.crosslink.shard)
assert verify_bitfield(bitfield, len(committee))
return set(index for i, index in enumerate(committee) if get_bitfield_bit(bitfield, i) == 0b1)
return set(index for i, index in enumerate(committee) if bits[i])
```

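Not part of the diff, but a quick illustration of why the helper calls could be dropped here: indexing a `Bitlist` directly (`bits[i]`) reads the same bit that `get_bitfield_bit` used to extract from the packed byte encoding, and the length and zero-padding checks previously done by `verify_bitfield` are enforced by the type itself. A self-contained sketch of that correspondence, assuming the little-endian-within-byte packing of the old helpers:

```python
from typing import List

def get_bitfield_bit(bitfield: bytes, i: int) -> int:
    # The removed helper: bit i lives in byte i // 8, at position i % 8.
    return (bitfield[i // 8] >> (i % 8)) % 2

def unpack_bits(bitfield: bytes, bit_count: int) -> List[bool]:
    # Roughly the view a Bitlist exposes: one boolean per committee member.
    return [bool(get_bitfield_bit(bitfield, i)) for i in range(bit_count)]

packed = bytes([0b00000101])          # members 0 and 2 attested
bits = unpack_bits(packed, 4)
assert bits == [True, False, True, False]
assert all(bits[i] == bool(get_bitfield_bit(packed, i)) for i in range(4))
```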
### `int_to_bytes`
@@ -913,43 +913,15 @@ def get_domain(state: BeaconState,
return bls_domain(domain_type, fork_version)
```

### `get_bitfield_bit`

```python
def get_bitfield_bit(bitfield: bytes, i: int) -> int:
"""
Extract the bit in ``bitfield`` at position ``i``.
"""
return (bitfield[i // 8] >> (i % 8)) % 2
```

### `verify_bitfield`

```python
def verify_bitfield(bitfield: bytes, committee_size: int) -> bool:
"""
Verify ``bitfield`` against the ``committee_size``.
"""
if len(bitfield) != (committee_size + 7) // 8:
return False

# Check `bitfield` is padded with zero bits only
for i in range(committee_size, len(bitfield) * 8):
if get_bitfield_bit(bitfield, i) == 0b1:
return False

return True
```

### `convert_to_indexed`

```python
def convert_to_indexed(state: BeaconState, attestation: Attestation) -> IndexedAttestation:
"""
Convert ``attestation`` to (almost) indexed-verifiable form.
"""
attesting_indices = get_attesting_indices(state, attestation.data, attestation.aggregation_bitfield)
custody_bit_1_indices = get_attesting_indices(state, attestation.data, attestation.custody_bitfield)
attesting_indices = get_attesting_indices(state, attestation.data, attestation.aggregation_bits)
custody_bit_1_indices = get_attesting_indices(state, attestation.data, attestation.custody_bits)
assert custody_bit_1_indices.issubset(attesting_indices)
custody_bit_0_indices = attesting_indices.difference(custody_bit_1_indices)

@@ -1283,7 +1255,7 @@ def get_unslashed_attesting_indices(state: BeaconState,
attestations: Sequence[PendingAttestation]) -> Set[ValidatorIndex]:
output = set() # type: Set[ValidatorIndex]
for a in attestations:
output = output.union(get_attesting_indices(state, a.data, a.aggregation_bitfield))
output = output.union(get_attesting_indices(state, a.data, a.aggregation_bits))
return set(filter(lambda index: not state.validators[index].slashed, list(output)))
```

@@ -1323,34 +1295,32 @@ def process_justification_and_finalization(state: BeaconState) -> None:

# Process justifications
state.previous_justified_checkpoint = state.current_justified_checkpoint
state.justification_bitfield = (state.justification_bitfield << 1) % 2**64
previous_epoch_matching_target_balance = get_attesting_balance(
state, get_matching_target_attestations(state, previous_epoch)
)
if previous_epoch_matching_target_balance * 3 >= get_total_active_balance(state) * 2:
state.justification_bits[1:] = state.justification_bits[:-1]
state.justification_bits[0] = 0b0
matching_target_attestations = get_matching_target_attestations(state, previous_epoch) # Previous epoch
if get_attesting_balance(state, matching_target_attestations) * 3 >= get_total_active_balance(state) * 2:
state.current_justified_checkpoint = Checkpoint(epoch=previous_epoch,
root=get_block_root(state, previous_epoch))
state.justification_bitfield |= (1 << 1)
current_epoch_matching_target_balance = get_attesting_balance(
state, get_matching_target_attestations(state, current_epoch)
)
if current_epoch_matching_target_balance * 3 >= get_total_active_balance(state) * 2:
state.current_justified_checkpoint = Checkpoint(epoch=current_epoch, root=get_block_root(state, current_epoch))
state.justification_bitfield |= (1 << 0)
state.justification_bits[1] = 0b1
matching_target_attestations = get_matching_target_attestations(state, current_epoch) # Current epoch
if get_attesting_balance(state, matching_target_attestations) * 3 >= get_total_active_balance(state) * 2:
state.current_justified_checkpoint = Checkpoint(epoch=current_epoch,
root=get_block_root(state, current_epoch))
state.justification_bits[0] = 0b1

# Process finalizations
bitfield = state.justification_bitfield
bits = state.justification_bits
# The 2nd/3rd/4th most recent epochs are justified, the 2nd using the 4th as source
if (bitfield >> 1) % 8 == 0b111 and old_previous_justified_checkpoint.epoch + 3 == current_epoch:
if all(bits[1:4]) and old_previous_justified_checkpoint.epoch + 3 == current_epoch:
state.finalized_checkpoint = old_previous_justified_checkpoint
# The 2nd/3rd most recent epochs are justified, the 2nd using the 3rd as source
if (bitfield >> 1) % 4 == 0b11 and old_previous_justified_checkpoint.epoch + 2 == current_epoch:
if all(bits[1:3]) and old_previous_justified_checkpoint.epoch + 2 == current_epoch:
state.finalized_checkpoint = old_previous_justified_checkpoint
# The 1st/2nd/3rd most recent epochs are justified, the 1st using the 3rd as source
if (bitfield >> 0) % 8 == 0b111 and old_current_justified_checkpoint.epoch + 2 == current_epoch:
if all(bits[0:3]) and old_current_justified_checkpoint.epoch + 2 == current_epoch:
state.finalized_checkpoint = old_current_justified_checkpoint
# The 1st/2nd most recent epochs are justified, the 1st using the 2nd as source
if (bitfield >> 0) % 4 == 0b11 and old_current_justified_checkpoint.epoch + 1 == current_epoch:
if all(bits[0:2]) and old_current_justified_checkpoint.epoch + 1 == current_epoch:
state.finalized_checkpoint = old_current_justified_checkpoint
```

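Not part of the diff: a small sanity check that the new `Bitvector[4]` manipulations of `justification_bits` reproduce the old `uint64` bitfield arithmetic for the four tracked epochs (bit 0 is the current epoch, bit 1 the previous one, and so on), under the assumption that only the low 4 bits ever mattered:

```python
def shift_old(bitfield: int) -> int:
    # Old style: state.justification_bitfield = (state.justification_bitfield << 1) % 2**64
    return (bitfield << 1) % 2**64

def shift_new(bits):
    # New style: bits[1:] = bits[:-1]; bits[0] = 0b0
    return [False] + list(bits[:-1])

old = 0b1110
new = [bool(old >> i & 1) for i in range(4)]
assert [bool(shift_old(old) >> i & 1) for i in range(4)] == shift_new(new)

# The finality tests translate the same way, e.g. "2nd/3rd/4th most recent epochs justified":
assert ((old >> 1) % 8 == 0b111) == all(new[1:4])
```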
@@ -1406,7 +1376,7 @@ def get_attestation_deltas(state: BeaconState) -> Tuple[Sequence[Gwei], Sequence
index = ValidatorIndex(index)
attestation = min([
a for a in matching_source_attestations
if index in get_attesting_indices(state, a.data, a.aggregation_bitfield)
if index in get_attesting_indices(state, a.data, a.aggregation_bits)
], key=lambda a: a.inclusion_delay)
proposer_reward = Gwei(get_base_reward(state, index) // PROPOSER_REWARD_QUOTIENT)
rewards[attestation.proposer_index] += proposer_reward
@@ -1677,7 +1647,7 @@ def process_attestation(state: BeaconState, attestation: Attestation) -> None:

pending_attestation = PendingAttestation(
data=data,
aggregation_bitfield=attestation.aggregation_bitfield,
aggregation_bits=attestation.aggregation_bits,
inclusion_delay=state.slot - attestation_slot,
proposer_index=get_beacon_proposer_index(state),
)
@@ -1696,7 +1666,7 @@ def process_attestation(state: BeaconState, attestation: Attestation) -> None:
assert data.crosslink.start_epoch == parent_crosslink.end_epoch
assert data.crosslink.end_epoch == min(data.target.epoch, parent_crosslink.end_epoch + MAX_EPOCHS_PER_CROSSLINK)
assert data.crosslink.data_root == ZERO_HASH # [to be removed in phase 1]

# Check signature
validate_indexed_attestation(state, convert_to_indexed(state, attestation))
```
32 changes: 21 additions & 11 deletions specs/core/1_custody-game.md
@@ -272,22 +272,32 @@ def get_custody_chunk_count(crosslink: Crosslink) -> int:
return crosslink_length * chunks_per_epoch
```

### `get_bit`

```python
def get_bit(serialization: bytes, i: int) -> int:
"""
Extract the bit in ``serialization`` at position ``i``.
"""
return (serialization[i // 8] >> (i % 8)) % 2
```

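Not part of the diff: a worked example of the bit ordering `get_bit` defines (the least significant bit of each byte comes first):

```python
def get_bit(serialization: bytes, i: int) -> int:
    return (serialization[i // 8] >> (i % 8)) % 2

assert get_bit(bytes([0b00000101]), 0) == 1   # lowest bit of the first byte
assert get_bit(bytes([0b00000101]), 1) == 0
assert get_bit(bytes([0b00000101]), 2) == 1
```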
### `get_custody_chunk_bit`

```python
def get_custody_chunk_bit(key: BLSSignature, chunk: bytes) -> bool:
# TODO: Replace with something MPC-friendly, e.g. the Legendre symbol
return bool(get_bitfield_bit(hash(key + chunk), 0))
return bool(get_bit(hash(key + chunk), 0))
```

### `get_chunk_bits_root`

```python
def get_chunk_bits_root(chunk_bitfield: bytes) -> Bytes32:
def get_chunk_bits_root(chunk_bits: bytes) -> Bytes32:
aggregated_bits = bytearray([0] * 32)
for i in range(0, len(chunk_bitfield), 32):
for i in range(0, len(chunk_bits), 32):
for j in range(32):
aggregated_bits[j] ^= chunk_bitfield[i + j]
aggregated_bits[j] ^= chunk_bits[i + j]
return hash(aggregated_bits)
```

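Not part of the diff: `get_chunk_bits_root` XOR-folds `chunk_bits` in 32-byte columns before hashing, so flipping any single bit changes the root. A runnable sketch, assuming the spec's `hash()` is SHA-256 here:

```python
from hashlib import sha256

def get_chunk_bits_root(chunk_bits: bytes) -> bytes:
    aggregated_bits = bytearray([0] * 32)
    for i in range(0, len(chunk_bits), 32):
        for j in range(32):
            aggregated_bits[j] ^= chunk_bits[i + j]
    return sha256(aggregated_bits).digest()

a = bytes(64)
b = bytes([1]) + bytes(63)
assert get_chunk_bits_root(a) != get_chunk_bits_root(b)
```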
@@ -469,7 +479,7 @@ def process_chunk_challenge(state: BeaconState, challenge: CustodyChunkChallenge
responder = state.validators[challenge.responder_index]
assert responder.exit_epoch >= get_current_epoch(state) - MAX_CHUNK_CHALLENGE_DELAY
# Verify the responder participated in the attestation
attesters = get_attesting_indices(state, challenge.attestation.data, challenge.attestation.aggregation_bitfield)
attesters = get_attesting_indices(state, challenge.attestation.data, challenge.attestation.aggregation_bits)
assert challenge.responder_index in attesters
# Verify the challenge is not a duplicate
for record in state.custody_chunk_challenge_records:
@@ -520,8 +530,9 @@ def process_bit_challenge(state: BeaconState, challenge: CustodyBitChallenge) ->
# Verify attestation is eligible for challenging
responder = state.validators[challenge.responder_index]
assert epoch + responder.max_reveal_lateness <= get_reveal_period(state, challenge.responder_index)
# Verify responder participated in the attestation
attesters = get_attesting_indices(state, attestation.data, attestation.aggregation_bitfield)

# Verify the responder participated in the attestation
attesters = get_attesting_indices(state, attestation.data, attestation.aggregation_bits)
assert challenge.responder_index in attesters
# Verify the challenger is not already challenging
for record in state.custody_bit_challenge_records:
@@ -535,11 +546,10 @@ def process_bit_challenge(state: BeaconState, challenge: CustodyBitChallenge) ->
assert bls_verify(responder.pubkey, hash_tree_root(epoch_to_sign), challenge.responder_key, domain)
# Verify the chunk count
chunk_count = get_custody_chunk_count(attestation.data.crosslink)
assert verify_bitfield(challenge.chunk_bits, chunk_count)
# Verify the first bit of the hash of the chunk bits does not equal the custody bit
committee = get_crosslink_committee(state, epoch, shard)
custody_bit = get_bitfield_bit(attestation.custody_bitfield, committee.index(challenge.responder_index))
assert custody_bit != get_bitfield_bit(get_chunk_bits_root(challenge.chunk_bits), 0)
custody_bit = attestation.custody_bits[committee.index(challenge.responder_index)]
assert custody_bit != get_bit(get_chunk_bits_root(challenge.chunk_bits), 0)
# Add new bit challenge record
new_record = CustodyBitChallengeRecord(
challenge_index=state.custody_challenge_index,
@@ -631,7 +641,7 @@ def process_bit_challenge_response(state: BeaconState,
)
# Verify the chunk bit does not match the challenge chunk bit
assert (get_custody_chunk_bit(challenge.responder_key, response.chunk)
!= get_bitfield_bit(challenge.chunk_bits_leaf, response.chunk_index % 256))
!= get_bit(challenge.chunk_bits_leaf, response.chunk_index % 256))
# Clear the challenge
records = state.custody_bit_challenge_records
records[records.index(challenge)] = CustodyBitChallengeRecord()
5 changes: 2 additions & 3 deletions specs/core/1_shard-data-chains.md
@@ -92,7 +92,7 @@ class ShardAttestation(Container):
slot: Slot
shard: Shard
shard_block_root: Bytes32
aggregation_bitfield: Bytes[PLACEHOLDER]
aggregation_bits: Bitlist[PLACEHOLDER]
aggregate_signature: BLSSignature
```

@@ -230,10 +230,9 @@ def verify_shard_attestation_signature(state: BeaconState,
attestation: ShardAttestation) -> None:
data = attestation.data
persistent_committee = get_persistent_committee(state, data.shard, data.slot)
assert verify_bitfield(attestation.aggregation_bitfield, len(persistent_committee))
pubkeys = []
for i, index in enumerate(persistent_committee):
if get_bitfield_bit(attestation.aggregation_bitfield, i) == 0b1:
if attestation.aggregation_bits[i]:
validator = state.validators[index]
assert is_active_validator(validator, get_current_epoch(state))
pubkeys.append(validator.pubkey)
8 changes: 4 additions & 4 deletions specs/light_client/sync_protocol.md
@@ -168,7 +168,7 @@ If a client wants to update its `finalized_header` it asks the network for a `Bl
{
'header': BeaconBlockHeader,
'shard_aggregate_signature': BLSSignature,
'shard_bitfield': 'bytes',
'shard_bits': Bitlist[PLACEHOLDER],
'shard_parent_block': ShardBlock,
}
```
@@ -180,13 +180,13 @@ def verify_block_validity_proof(proof: BlockValidityProof, validator_memory: Val
assert proof.shard_parent_block.beacon_chain_root == hash_tree_root(proof.header)
committee = compute_committee(proof.header, validator_memory)
# Verify that we have >=50% support
support_balance = sum([v.effective_balance for i, v in enumerate(committee) if get_bitfield_bit(proof.shard_bitfield, i) is True])
support_balance = sum([v.effective_balance for i, v in enumerate(committee) if proof.shard_bits[i]])
total_balance = sum([v.effective_balance for i, v in enumerate(committee)])
assert support_balance * 2 > total_balance
# Verify shard attestations
group_public_key = bls_aggregate_pubkeys([
v.pubkey for v, index in enumerate(committee)
if get_bitfield_bit(proof.shard_bitfield, index) is True
if proof.shard_bits[index]
])
assert bls_verify(
pubkey=group_public_key,
@@ -196,4 +196,4 @@ def verify_block_validity_proof(proof: BlockValidityProof, validator_memory: Val
)
```

The size of this proof is only 200 (header) + 96 (signature) + 16 (bitfield) + 352 (shard block) = 664 bytes. It can be reduced further by replacing `ShardBlock` with `MerklePartial(lambda x: x.beacon_chain_root, ShardBlock)`, which would cut off ~220 bytes.
The size of this proof is only 200 (header) + 96 (signature) + 16 (bits) + 352 (shard block) = 664 bytes. It can be reduced further by replacing `ShardBlock` with `MerklePartial(lambda x: x.beacon_chain_root, ShardBlock)`, which would cut off ~220 bytes.