
Missing ranges collector #6583

Merged: 5 commits merged into master from missing-ranges-collector on Dec 26, 2022

Conversation

@Qwerty5Uiop (Collaborator) commented Dec 13, 2022

Resolves #6566

Motivation

Currently, the catchup indexer tries to fetch all missing block ranges at start-up, which can cause performance issues when the total number of blocks is large.

Changelog

Moved missing-ranges tracking to a separate GenServer and split the missing-ranges query into batches (a rough sketch of the idea follows below).
Docs update: blockscout/docs#100
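
A minimal sketch of the batching idea, assuming hypothetical module and function names (Indexer.MissingRangesCollector, find_missing_ranges/1, save_missing_ranges/1); it illustrates the approach, not the actual Blockscout implementation:

defmodule Indexer.MissingRangesCollector do
  use GenServer

  # Only this many block numbers are inspected per iteration, so the gap
  # query never has to scan the whole chain at once.
  @batch_size 100_000
  @interval :timer.seconds(10)

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(opts) do
    state = %{
      first_block: Keyword.get(opts, :first_block, 0),
      last_block: Keyword.fetch!(opts, :last_block)
    }

    {:ok, state, {:continue, :collect}}
  end

  @impl true
  def handle_continue(:collect, state), do: {:noreply, collect_batch(state)}

  @impl true
  def handle_info(:collect, state), do: {:noreply, collect_batch(state)}

  # Look for gaps in the next batch, persist them, then schedule the next run.
  defp collect_batch(%{first_block: first, last_block: last} = state) when first <= last do
    batch_end = min(first + @batch_size - 1, last)

    first..batch_end
    |> find_missing_ranges()
    |> save_missing_ranges()

    Process.send_after(self(), :collect, @interval)
    %{state | first_block: batch_end + 1}
  end

  # Nothing left to scan in the configured window.
  defp collect_batch(state), do: state

  # Stubs standing in for the database helpers that compare a batch against the
  # blocks table and write the resulting gaps to missing_block_ranges.
  defp find_missing_ranges(_range), do: []
  defp save_missing_ranges(ranges), do: ranges
end

The catchup fetcher can then consume small ranges from missing_block_ranges as they are produced, instead of waiting for a single query over all blocks.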

@vbaranov (Member) left a comment


@Qwerty5Uiop if FIRST_BLOCK and LAST_BLOCK are set in order to index a specific block range or a single block, they are ignored and indexing starts for the whole range of blocks.
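
For context, a hypothetical sketch (illustrative names only, not Blockscout's actual code) of the behaviour this comment expects, where the collector clamps the scanned window to FIRST_BLOCK..LAST_BLOCK when those variables are set:

defmodule Indexer.MissingRangesCollector.Bounds do
  # Returns the block range the collector should scan for gaps. When
  # FIRST_BLOCK and/or LAST_BLOCK are set, only that window is considered;
  # otherwise the whole chain up to its current head is used.
  def bounded_range(chain_max_block) do
    first = env_integer("FIRST_BLOCK", 0)
    last = env_integer("LAST_BLOCK", chain_max_block)
    first..min(last, chain_max_block)
  end

  defp env_integer(name, default) do
    case System.get_env(name) do
      nil -> default
      value -> String.to_integer(value)
    end
  end
end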

@vbaranov merged commit 91c8326 into master on Dec 26, 2022
@vbaranov deleted the missing-ranges-collector branch on December 26, 2022 13:29
@ericlee42 (Contributor)

Hi, I'm using this patch and the block_catchup fetcher doesn't work any more; it can't index any new blocks:

2022-12-30T13:00:58.951 application=indexer fetcher=block_catchup missing_block_count=0 shrunk=false [info] Index already caught up.

@vbaranov (Member)

@ericlee42 could you elaborate a bit more? Do you see new blocks appear in the DB? This fetcher should restart from the head of the chain once all current ranges have been processed. @Qwerty5Uiop please correct me if I am wrong. @ericlee42 do you see the same behaviour with the latest master branch?

@ericlee42 (Contributor)

No, it doesn't work.

 SELECT COUNT(1) FROM blocks;

The result has remained unchanged.

@vbaranov (Member)

@ericlee42 can you reproduce this bug with the latest master branch?

@ericlee42 (Contributor)

Yes, I'm using the latest master commit.

@vbaranov (Member)

Are you using a production chain to reproduce the issue, or some testing environment? It would help us a lot if you could provide details for reproduction.

@ericlee42 (Contributor)

It's running in a testing environment.

I have tested with the last commit from this patch and it works.

So I guess there are some bugs here.

@ericlee42 (Contributor)

FYI, I'm using this config:

DATABASE_URL=xxxx
ETHEREUM_JSONRPC_HTTP_URL=http://xxxx:8545
ETHEREUM_JSONRPC_WS_URL=ws://xxx:8546

POOL_SIZE=200
ETHEREUM_JSONRPC_DEBUG_TRACE_TRANSACTION_TIMEOUT=20m
INDEXER_MEMORY_LIMIT=4
INDEXER_INTERNAL_TRANSACTIONS_BATCH_SIZE=1
INDEXER_INTERNAL_TRANSACTIONS_CONCURRENCY=20

ETHEREUM_JSONRPC_VARIANT=geth
DISABLE_WEBAPP=true
INDEXER_DISABLE_PENDING_TRANSACTIONS_FETCHER=true
INDEXER_DISABLE_BLOCK_REWARD_FETCHER=true
INDEXER_DISABLE_EMPTY_BLOCK_SANITIZER=true
ECTO_USE_SSL=false

ETHEREUM_JSONRPC_HTTP_INSECURE=true

@vbaranov (Member)

So, you're using the latest master branch at commit 41474d5, the chain is producing new blocks, and new blocks do not appear in the Blockscout DB, right? Can you provide logs from the Blockscout instance? Which type of node are you using?

@ericlee42 (Contributor)

The chain is working, and I'm using geth.

I don't think the log is helpful:

2022-12-30T13:40:09.140 application=indexer fetcher=block_catchup missing_block_count=0 shrunk=false [info] Index already caught up.
2022-12-30T13:40:09.140 application=indexer fetcher=block_catchup [info] Checking if index needs to catch up in 25000ms.
2022-12-30T13:40:10.457 application=indexer fetcher=internal_transaction count=1 error_count=1 [error] failed to fetch internal transactions for blocks: :closed
2022-12-30T13:40:19.795 application=plug request_id=FzWV3pGK7SneUkwAFYEE [info] GET /api/v1/health
2022-12-30T13:40:19.796 application=plug request_id=FzWV3pGK7SneUkwAFYEE [info] Sent 500 in 1ms
2022-12-30T13:40:19.803 application=plug request_id=FzWV3pILtpyNGG8AFYEk [info] GET /api/v1/health
2022-12-30T13:40:19.804 application=plug request_id=FzWV3pILtpyNGG8AFYEk [info] Sent 500 in 1ms
2022-12-30T13:40:19.912 application=plug request_id=FzWV3piNawv-xjkAFnlj [info] GET /api/v1/health
2022-12-30T13:40:19.914 application=plug request_id=FzWV3piNawv-xjkAFnlj [info] Sent 500 in 1ms
2022-12-30T13:40:34.142 application=indexer fetcher=block_catchup missing_block_count=0 shrunk=false [info] Index already caught up.
2022-12-30T13:40:34.142 application=indexer fetcher=block_catchup [info] Checking if index needs to catch up in 25000ms.
2022-12-30T13:40:34.810 application=plug request_id=FzWV4hCFMooIayoAFpyD [info] GET /api/v1/health
2022-12-30T13:40:34.811 application=plug request_id=FzWV4hCFMooIayoAFpyD [info] Sent 500 in 1ms
2022-12-30T13:40:34.817 application=plug request_id=FzWV4hDwQpuiWTEAFpyj [info] GET /api/v1/health
2022-12-30T13:40:34.818 application=plug request_id=FzWV4hDwQpuiWTEAFpyj [info] Sent 500 in 1ms
2022-12-30T13:40:34.924 application=plug request_id=FzWV4hdUs5gK3d8AF3vh [info] GET /api/v1/health
2022-12-30T13:40:34.925 application=plug request_id=FzWV4hdUs5gK3d8AF3vh [info] Sent 500 in 1ms
2022-12-30T13:40:49.823 application=plug request_id=FzWV5Y9cvjkZRFkAF5eh [info] GET /api/v1/health
2022-12-30T13:40:49.824 application=plug request_id=FzWV5Y9cvjkZRFkAF5eh [info] Sent 500 in 1ms
2022-12-30T13:40:49.832 application=plug request_id=FzWV5Y_uBN0v3EUAFlxC [info] GET /api/v1/health
2022-12-30T13:40:49.834 application=plug request_id=FzWV5Y_uBN0v3EUAFlxC [info] Sent 500 in 1ms
2022-12-30T13:40:49.940 application=plug request_id=FzWV5ZZWPryTvpcAFl2i [info] GET /api/v1/health
2022-12-30T13:40:49.941 application=plug request_id=FzWV5ZZWPryTvpcAFl2i [info] Sent 500 in 1ms
2022-12-30T13:40:59.145 application=indexer fetcher=block_catchup missing_block_count=0 shrunk=false [info] Index already caught up.
2022-12-30T13:40:59.145 application=indexer fetcher=block_catchup [info] Checking if index needs to catch up in 25000ms.

@vbaranov (Member)

Could you please post the response from the following query?

 select * from missing_block_ranges;

@ericlee42 (Contributor)

 select * from missing_block_ranges;
 id | from_number | to_number
----+-------------+-----------
(0 rows)

@vbaranov (Member)

It looks like the ranges collector didn't start at all. If you unset DISABLE_WEBAPP, does the behaviour change?

@ericlee42 (Contributor)

Nope, still not working.

@Qwerty5Uiop (Collaborator, Author)

Hi @ericlee42! Yes, I found a bug in this functionality on the first launch with an empty database. I will fix it ASAP, but for now, can you please run the app one more time and check whether the problem is gone?

@ericlee42 (Contributor)

No, I have restarted the explorer many times.

@ericlee42 (Contributor)

Hello? Do you have a plan to fix it? @Qwerty5Uiop

@Qwerty5Uiop (Collaborator, Author)

Hello @ericlee42! Sorry for the delay. Could you try #6687 and check if it works now?

@ericlee42 (Contributor)

Okay, let me try it out.

Development

Successfully merging this pull request may close these issues.

Rewrite block fetcher missing blocks query to work in chunks