Polygon zkEVM Bridge indexer and API v2 extension #9098
Conversation
Force-pushed from 0694f1c to 7a48513
  end
end

defp reorg_queue_get(table_name) do
Looks like code duplication with Indexer.Fetcher.PolygonEdge. Does it make sense to introduce a library for this?
Polygon Edge is no longer supported by the Polygon team, and Blockscout doesn't have such instances. We are going to remove the Polygon Edge modules from Blockscout soon, so I'd leave it as it is for now.
Is my initial comment still applicable now that we have Shibarium-related code?
Moreover, similar code exists in the Optimism branch. Does it make sense to make a reorg-handling framework reusable across all L2 chains?
Yes, I will move these functions to a separate module
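For illustration, such a shared module could look roughly like this. This is a hypothetical sketch, not the code merged in this PR; the module name and ETS-based storage are assumptions based on the `reorg_queue_get(table_name)` excerpt above:

```elixir
defmodule Indexer.Fetcher.RollupReorgQueue do
  @moduledoc """
  Hypothetical sketch of a reorg queue shared by L2 fetchers.
  Each fetcher keeps its own named ETS table of reorged block numbers.
  """

  # Creates the fetcher's queue table if it doesn't exist yet.
  def init(table_name) do
    if :ets.whereis(table_name) == :undefined do
      :ets.new(table_name, [:ordered_set, :named_table, :public])
    end

    :ok
  end

  # Pushes a reorged block number onto the queue.
  def push(table_name, block_number) do
    :ets.insert(table_name, {block_number})
    :ok
  end

  # Pops the smallest queued block number, or nil when the queue is empty.
  def pop(table_name) do
    case :ets.first(table_name) do
      :"$end_of_table" ->
        nil

      block_number ->
        :ets.delete(table_name, block_number)
        block_number
    end
  end
end
```

Each chain-specific fetcher would then call the shared module with its own table name instead of carrying a private copy of the queue logic.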
Force-pushed from 1877b04 to 6c5e1d8
Force-pushed from bd2ca0b to d61c9d3
Force-pushed from 667ddb1 to b8ebd9e
@vbaranov Rebased
Please provide @doc comments for every public function introduced. If a changed public function does not have @doc comments, it would be great if you added them there as well.
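For reference, an @doc comment (together with a @spec) on a public function looks like this; the module and function here are illustrative, not taken from the PR:

```elixir
defmodule Example.BridgeCounter do
  @doc """
  Returns the number of bridge operation entries in the given list of logs.
  """
  @spec count_operations(list()) :: non_neg_integer()
  def count_operations(logs) when is_list(logs), do: length(logs)
end
```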
@@ -157,6 +158,11 @@ defmodule Indexer.Block.Fetcher do
      do: ShibariumBridge.parse(blocks, transactions_with_receipts, logs),
      else: []
    ),
  zkevm_bridge_operations =
    if(callback_module == Indexer.Block.Realtime.Fetcher,
      do: ZkevmBridge.parse(blocks, logs),
Since this call could make lots of RPC calls, it could delay block importing and add latency before new blocks appear on the UI side. Consider moving the logic for getting block timestamps and token data (both from DB and RPC) to an async worker (see async_import_remaining_block_data in the realtime fetcher).
Yes, done in affe983
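The general pattern suggested above can be sketched as follows: defer the expensive RPC lookups out of the block import path with a supervised task. This is a minimal illustration, assuming a task supervisor is already running under the application tree; all names here are hypothetical:

```elixir
defmodule Example.AsyncBridgeImport do
  @moduledoc """
  Hypothetical sketch: block import stores the raw bridge operations
  quickly, while timestamps and token data are fetched asynchronously.
  """

  # Kicks off a fire-and-forget task so block import is not blocked
  # by RPC round-trips for timestamps and token metadata.
  def async_import_bridge_data(operations) do
    Task.Supervisor.start_child(Example.TaskSupervisor, fn ->
      fetch_and_update(operations)
    end)
  end

  # Placeholder for the deferred work: fetch block timestamps and
  # token data (from DB and RPC), then update the stored operations.
  defp fetch_and_update(_operations), do: :ok
end
```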
query = from(t in BridgeL1Token, select: {t.address, t.id}, where: t.address in ^addresses)

query
|> Repo.all(timeout: :infinity)
Does it make sense to move this to the Reader module?
Yes, moved in f4f68ea
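Moving the query into a Reader module could look roughly like this. This is a hedged sketch based on the excerpt above; the module path, function name, and `Map.new` post-processing are assumptions, not the exact code from f4f68ea:

```elixir
defmodule Explorer.Chain.PolygonZkevm.Reader do
  import Ecto.Query

  alias Explorer.Chain.PolygonZkevm.BridgeL1Token
  alias Explorer.Repo

  @doc """
  Returns a map of L1 bridge token addresses to their ids
  for the given list of addresses.
  """
  @spec token_addresses_to_ids(list()) :: map()
  def token_addresses_to_ids(addresses) do
    query =
      from(t in BridgeL1Token,
        select: {t.address, t.id},
        where: t.address in ^addresses
      )

    query
    |> Repo.all(timeout: :infinity)
    |> Map.new()
  end
end
```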
query =
  from(
    t in BridgeL1Token,
    where: t.address in ^token_addresses,
    select: {t.address, t.decimals, t.symbol}
  )

token_data =
  query
  |> Repo.all()
Does it make sense to move this to the Reader module?
Yes, moved in f4f68ea
token_addresses
|> Enum.map(fn address ->
  # we will call symbol() and decimals() public getters
  Enum.map(["95d89b41", "313ce567"], fn method_id ->
Does it make sense to define the method selectors as module attributes?
Yes, done in f4f68ea
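Module attributes make the selectors self-documenting. A minimal sketch (the selector values are the symbol()/decimals() method ids from the excerpt above; the module and function names are illustrative):

```elixir
defmodule Example.TokenDataReader do
  # First four bytes of keccak256("symbol()") and keccak256("decimals()"),
  # as used in the excerpt above.
  @symbol_method_id "95d89b41"
  @decimals_method_id "313ce567"

  # Returns the list of method ids to call for each token address.
  def method_ids, do: [@symbol_method_id, @decimals_method_id]
end
```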
if status == :ok do
  response = parse_response(response)

  address = String.downcase(request.contract_address)
There is a function address_hash_to_string in Indexer.Helper.
Refactored in f4f68ea
query
|> Repo.all()
|> Enum.reduce(%{}, fn {address, decimals, symbol}, acc ->
  token_address = String.downcase(Hash.to_string(address))
There is a function address_hash_to_string in Indexer.Helper.
Refactored in f4f68ea
tokens_inserted_outside = token_addresses_to_ids_from_db(tokens_not_inserted)

tokens_inserted
|> Enum.reduce(%{}, fn t, acc -> Map.put(acc, Hash.to_string(t.address), t.id) end)
Is Hash.to_string(<address>) equal to Helper.address_hash_to_string(<address>, false)?
Yes, refactored in f4f68ea
tokens_not_inserted =
  tokens_to_insert
  |> Enum.reject(fn token ->
    Enum.any?(tokens_inserted, fn inserted -> token.address == Hash.to_string(inserted.address) end)
Is Hash.to_string(<address>) equal to Helper.address_hash_to_string(<address>, false)?
Yes, refactored in f4f68ea
Force-pushed from 26ff09b to affe983
@varasev, a general question about the approach to execute L1 RPC requests during the block fetcher job asynchronously. My understanding is that initiating bridge L1 messages happens first, and the corresponding L2 messages appear after some 'finalization' time, correct? So, if we have an independent fetcher (
Done
@akolotov Yes, if you mean the Deposit operation. For Withdrawals it's vice versa: the L2 message first, then the L1 one.
For the realtime case on L1:
For the realtime case on L2:
For the realtime we have
These token data RPC requests are not sent by
Performs up to a specified number of retries if the first attempt returns an error.
"""
@spec get_blocks_by_events(list(), list(), non_neg_integer()) :: list()
def get_blocks_by_events(events, json_rpc_named_arguments, retries) do
Is my understanding correct that if the events list contains 200 records, in the worst case 200 eth_getBlockByNumber requests will be sent in one batch? Does it make sense to divide it into chunks?
Yes, added chunks in 95e522b
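The chunking can be sketched as follows; the chunk size and names here are illustrative assumptions, not the values chosen in 95e522b:

```elixir
defmodule Example.ChunkedBlockRequests do
  # Upper bound on how many eth_getBlockByNumber requests go into
  # a single JSON-RPC batch (illustrative value).
  @chunk_size 50

  # Splits the block numbers derived from the events into chunks so
  # a single batch request does not grow unbounded with the event count.
  def chunk_block_numbers(block_numbers) do
    Enum.chunk_every(block_numbers, @chunk_size)
  end
end
```

Each chunk would then be sent as its own batch request, keeping every individual JSON-RPC payload bounded regardless of how many events were found.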
|> Helper.get_blocks_by_events(json_rpc_named_arguments, 100_000_000)
|> Enum.reduce(%{}, fn block, acc ->
  block_number = quantity_to_integer(Map.get(block, "number"))
  {:ok, timestamp} = DateTime.from_unix(quantity_to_integer(Map.get(block, "timestamp")))
Consider using EthereumJSONRPC.timestamp_to_datetime here.
Done in c4ad14b
Force-pushed from 95e522b to bddcb3d
Now, since we have the INDEXER_POLYGON_EDGE_ and INDEXER_POLYGON_ZKEVM_ env variable families, I would suggest changing the folder structure. It would be natural if, everywhere, instead of the polygon_edge and zkevm folders we'd have:

polygon
  edge
  zkevm

And I'd rename the corresponding modules to Polygon.Edge, Polygon.Zkevm.
…dule of the same rollup
Force-pushed from c415462 to 357a7d1
I renamed Zkevm to PolygonZkevm: d8fd9b2. As PolygonEdge will be removed soon, I left it as it is.
Closes #8268.
Motivation
This PR adds an indexer for Polygon zkEVM Bridge operations (Deposits and Withdrawals) and extends API v2 for the corresponding views on UI.
Please note that this PR renames the following env variables:
INDEXER_ZKEVM_BATCHES_ENABLED ➡️ INDEXER_POLYGON_ZKEVM_BATCHES_ENABLED
INDEXER_ZKEVM_BATCHES_CHUNK_SIZE ➡️ INDEXER_POLYGON_ZKEVM_BATCHES_CHUNK_SIZE
INDEXER_ZKEVM_BATCHES_RECHECK_INTERVAL ➡️ INDEXER_POLYGON_ZKEVM_BATCHES_RECHECK_INTERVAL
Other env variables are added in blockscout/docs#217.
Checklist for your Pull Request (PR)
- I added an entry to CHANGELOG.md with this PR
- New env variables are documented with master in the Version column. Changes will be reflected in this table: https://docs.blockscout.com/for-developers/information-and-settings/env-variables