
Conversation

@paulhauner
Member

@paulhauner paulhauner commented Oct 21, 2020

Issue Addressed

  • Related to #1691

Proposed Changes

Adds the following API endpoints:

  • `GET lighthouse/eth1/syncing`: status about how synced we are with Eth1.
  • `GET lighthouse/eth1/block_cache`: all locally cached eth1 blocks.
  • `GET lighthouse/eth1/deposit_cache`: all locally cached eth1 deposits.
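
These are plain HTTP GET endpoints. As a minimal sketch of querying the first one (assuming the default HTTP API address `http://localhost:5052` and the `reqwest` crate with its `blocking` feature; illustrative only, not part of this diff):

```rust
// Fetch the eth1 sync status from a locally running beacon node and print
// the raw JSON response. Assumes the HTTP API is enabled on port 5052.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let url = "http://localhost:5052/lighthouse/eth1/syncing";
    let body = reqwest::blocking::get(url)?.text()?;
    println!("{}", body);
    Ok(())
}
```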

Additionally:

  • Moves some types from the `beacon_node/eth1` crate to the `common/eth2` crate, so they can be used in the API without duplication.
  • Allows `update_deposit_cache` and `update_block_cache` to take an optional head block number to avoid duplicate requests.
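
A rough sketch of that last point (hypothetical names, not the exact signatures in this diff): callers that have already fetched the remote head can hand it to the update functions instead of each function asking the eth1 node again.

```rust
// Illustrative only: a service method that reuses a known remote head block
// number when the caller supplies one, and looks it up itself otherwise.
struct Eth1Service;

impl Eth1Service {
    /// Stand-in for an `eth_blockNumber` request to the eth1 node.
    async fn remote_head_block_number(&self) -> Result<u64, String> {
        Ok(11_000_000)
    }

    async fn update_deposit_cache(&self, known_head: Option<u64>) -> Result<(), String> {
        let head = match known_head {
            // The caller already knows the remote head: no extra request.
            Some(n) => n,
            // Otherwise make the request ourselves.
            None => self.remote_head_block_number().await?,
        };
        // ... fetch and store deposit logs up to `head` ...
        let _ = head;
        Ok(())
    }
}
```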

Additional Info

TBC

@paulhauner paulhauner added the work-in-progress (PR is a work-in-progress) label Oct 21, 2020
@paulhauner paulhauner mentioned this pull request Oct 21, 2020
@paulhauner paulhauner added ready-for-review The code is ready for review and removed work-in-progress PR is a work-in-progress labels Oct 26, 2020
@paulhauner
Member Author

Hey @pawanjay176 would you be interested in reviewing this? :)

@paulhauner paulhauner marked this pull request as ready for review October 26, 2020 04:13
Member

@pawanjay176 pawanjay176 left a comment


Looks good. Just a few minor suggestions and questions :)

Apart from this, I've had one minor peeve with `update_deposit_cache` for a long time. Currently, the `max_log_requests_per_update` config param in the eth1 service is always set to `u64::MAX`. Because of this, when we start building the eth1 cache from scratch, the call to `update_deposit_cache` only returns after getting all logs up to the current head. If even a single query to the eth1 node fails, we have to start over building the deposit cache.

I think it'd be good to set `max_log_requests_per_update` to some smaller value so that we don't try to get all the logs in a single `update_deposit_cache` call.
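
For illustration, a bounded version of that update could look roughly like this (illustrative constants and helper names, not the actual `eth1` service code): each call makes at most a fixed number of log queries, and a failure only costs the current chunk because earlier chunks are already stored.

```rust
use std::ops::RangeInclusive;

const BLOCKS_PER_LOG_QUERY: u64 = 1_000;
// A bounded number of queries per update, instead of effectively unlimited.
const MAX_LOG_REQUESTS_PER_UPDATE: usize = 100;

fn update_deposit_cache(last_processed: u64, remote_head: u64) -> Result<(), String> {
    let mut start = last_processed + 1;
    for _ in 0..MAX_LOG_REQUESTS_PER_UPDATE {
        if start > remote_head {
            break;
        }
        let end = (start + BLOCKS_PER_LOG_QUERY - 1).min(remote_head);
        // If this chunk fails we return early, but everything before `start`
        // is already in the cache, so the next update resumes from here.
        fetch_and_store_deposit_logs(start..=end)?;
        start = end + 1;
    }
    Ok(())
}

// Stand-in for the real eth1 log query and deposit cache insertion.
fn fetch_and_store_deposit_logs(_range: RangeInclusive<u64>) -> Result<(), String> {
    Ok(())
}
```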

type Eth1DataVoteCount = HashMap<(Eth1Data, BlockNumber), u64>;

/// We will declare ourself synced with the Eth1 chain, even if we are this many blocks behind.
const ETH1_SYNC_TOLERANCE: u64 = 8;
Member

Any specific reason for 8? Just curious

Member Author

Nope, no reason. I added a note to that effect :)

Member Author

Well, 8 just felt like a decent number to me. I can't back it up, though 😅
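
For context, a tolerance like this would typically be applied along these lines (a sketch of the idea, not the exact code in the diff):

```rust
/// We will declare ourselves synced with the eth1 chain, even if we are this
/// many blocks behind (mirrors the constant under review).
const ETH1_SYNC_TOLERANCE: u64 = 8;

/// Consider the node eth1-synced if the latest cached block is within
/// `ETH1_SYNC_TOLERANCE` blocks of the remote head.
fn is_eth1_synced(latest_cached_block: u64, remote_head_block: u64) -> bool {
    latest_cached_block + ETH1_SYNC_TOLERANCE >= remote_head_block
}
```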

beacon node are ready for block production.
- This value might be set to
`false` whilst `eth1_node_sync_status_percentage == 100.0` if the beacon
node is still build its internal cache.
Member

Suggested change
node is still build its internal cache.
node is still building its internal cache.

}
```

### `/lighthouse/eth1/block_cache`
Member

@pawanjay176 pawanjay176 Oct 27, 2020

`/lighthouse/eth1/block_cache` is already present above. This is a repetition.

};

self.eth1_service = None;
self.eth1_service = Some(backend.core.clone());
Member

Any idea why we set this to `None` earlier? Curious again :)

Member Author

Hmm.. no I can't 😅

&self,
) -> Result<(DepositCacheUpdateOutcome, BlockCacheUpdateOutcome), String> {
let endpoint = &self.config().endpoint.clone();
let remote_head_block_number =
Member

We could save a call to the eth1 node by using `eth_getBlockByNumber` with `"latest"` in the params to get the block corresponding to the remote head.

Member Author

Nice one, I'll do this!

Member Author

@paulhauner paulhauner Oct 29, 2020

Done here: b0d0373 :)
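
For reference, the single-request idea boils down to a JSON-RPC call shaped roughly like this (a sketch using the `serde_json` crate; not the actual change in b0d0373):

```rust
use serde_json::json;

fn main() {
    // Asking for the "latest" block returns the remote head in one request,
    // including its number and hash, so a separate `eth_blockNumber` call
    // to find the head first is unnecessary.
    let request = json!({
        "jsonrpc": "2.0",
        "method": "eth_getBlockByNumber",
        "params": ["latest", false],
        "id": 1
    });
    println!("{}", request);
}
```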

@paulhauner
Member Author

All comments addressed!

@paulhauner
Member Author

bors r+

@paulhauner paulhauner removed the ready-for-review (The code is ready for review) label Nov 2, 2020
@paulhauner paulhauner added the ready-for-merge (This PR is ready to merge.) label Nov 2, 2020
bors bot pushed a commit that referenced this pull request Nov 2, 2020
@bors

bors bot commented Nov 2, 2020

@bors bors bot changed the title from "Return eth1-related data via the API" to "[Merged by Bors] - Return eth1-related data via the API" Nov 2, 2020
@bors bors bot closed this Nov 2, 2020
@paulhauner paulhauner deleted the eth1-sync-status branch January 20, 2021 00:20