
Can't sync fresh node from snapshot #2231

Closed
hdiass opened this issue Apr 12, 2024 · 63 comments

@hdiass

hdiass commented Apr 12, 2024

Describe the bug
Can't sync a fresh node from the snapshot. v2.3.3-6a1c1a7
Yesterday I booted a node from scratch using the snapshot and it can't sync.

INFO [04-12|11:53:43.146] latest assertion not yet in our node     staker=0x0000000000000000000000000000000000000000 assertion=13127 state="{BlockHash:0x917091e908d7fb9e281c53a1acec0cee529ec13dbebb0542375dfb95b0832333 SendRoot:0x984a1d7bf690b0b94ed7a951acdf3c7a5057b0b6b21f7103d3fb9d61f79e3027 Batch:586484 PosInBatch:89}"
INFO [04-12|11:53:44.100] catching up to chain batches             localBatches=582,290 target=586,485
WARN [04-12|11:54:04.284] error reading inbox                      err="failed to get blobs: error fetching blobs in 19501975 l1 block: expected at least 6 blobs for slot 8702476 but only got 0"

Eth client
geth v1.13.14
prysm v5.0.3
using checkpoint sync and
--enable-experimental-backfill
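
For reference, a Prysm invocation along these lines enables checkpoint sync plus experimental backfill (a minimal sketch with placeholder endpoints and paths, not the exact command used here):

beacon-chain \
  --mainnet \
  --execution-endpoint=http://localhost:8551 \
  --jwt-secret=/path/to/jwt.hex \
  --checkpoint-sync-url=https://beacon-checkpoint.example.com \
  --enable-experimental-backfill

Note that even with backfill, blobs older than the protocol's retention window (4096 epochs, roughly 18 days) are generally not served by ordinary beacon nodes, which turns out to matter below.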

nitro args used

    - --persistent.chain=/database/
    - --parent-chain.blob-client.beacon-url=https://eth-mainnet-beacon
    - --http.port=8545
    - --http.api=net,web3,eth
    - --http.corsdomain=*
    - --http.addr=0.0.0.0
    - --http.vhosts=*
    - --ws.port=8546
    - --ws.addr=0.0.0.0
    - --ws.origins=*
    - --execution.rpc.gas-cap=0
    - --execution.rpc.tx-fee-cap=0
    - --metrics
    - --metrics-server.addr=0.0.0.0
    - --metrics-server.port=6060
    - --parent-chain.connection.url=https://eth-mainnet
    - --chain.id=42161
    - --init.url=https://snapshot.arbitrum.foundation/arb1/nitro-pruned.tar
    - --init.download-path=/database/snapshot.tar
    - --rpc.max-batch-response-size=200000000

To Reproduce
Steps to reproduce the behavior:

  1. Boot a node from scratch using the snapshot

Expected behavior
The node initializes from the snapshot and catches up to the chain head.

@NicolasWent

Hello,

I have the exact same issue, did you find a solution?

Using reth and lighthouse as node clients

@miki-bgd-011

miki-bgd-011 commented Apr 13, 2024

I too have the same issue!

Prysm 5.0.3 + geth version 1.13.14-stable-2bd6bd01

@NicolasWent

NicolasWent commented Apr 13, 2024

Are you guys using offchainlabs/nitro-node:v2.3.3-6a1c1a7?

Because I was using offchainlabs/nitro-node:v2.3.2-064fa11, but when I switched to the latest one, offchainlabs/nitro-node:v2.3.3-6a1c1a7, I no longer saw the error.

I am not sure that my node is syncing correctly, though.

EDIT: Actually the error is still there; it appeared after 1h of running the node.

@miki-bgd-011

I am getting the same error with v2.3.3-6a1c1a7

@nisdas
Contributor

nisdas commented Apr 13, 2024

Hey guys, the reason it's unable to sync is that the snapshot is old and the nitro node is requesting blobs that have already expired. To unblock your node, you can try using an archival beacon RPC provider; they will be able to provide the blobs in the meantime:
https://docs.arbitrum.io/node-running/reference/ethereum-beacon-rpc-providers#list-of-ethereum-beacon-chain-rpc-providers
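
To check whether a given beacon endpoint can still serve the missing blobs, you can query the standard blob-sidecars endpoint for the failing slot (a diagnostic sketch, assuming Prysm's REST gateway on its default port 3500; the slot number is the one from the logs above):

curl -s http://localhost:3500/eth/v1/beacon/blob_sidecars/8702476 | jq '.data | length'

An endpoint that still has the blobs should report 6 for this slot; one that has pruned them returns 0 or an error, matching the "expected at least 6 blobs ... but only got 0" warning.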

@miki-bgd-011

Hey guys, the reason it's unable to sync is that the snapshot is old and the nitro node is requesting blobs that have already expired. To unblock your node, you can try using an archival beacon RPC provider; they will be able to provide the blobs in the meantime: https://docs.arbitrum.io/node-running/reference/ethereum-beacon-rpc-providers#list-of-ethereum-beacon-chain-rpc-providers

This did not work for me.

@ZYS980327

@hdiass Hi, do you have a better solution?

@ZYS980327

@hdiass I'm running the same client and beacon chain as you.

@ZYS980327

@hdiass @nisdas There seems to be a problem with slot 8702476.

@ZYS980327

INFO [04-15|01:57:24.481] Loaded most recent local block number=193,592,599 hash=c758a4..c8df38 td=171,384,783 age=3w21h46m
WARN [04-15|01:57:24.498] Head state missing, repairing number=193,592,599 hash=c758a4..c8df38 snaproot=f8707d..e937dd
INFO [04-15|01:57:27.743] Loaded most recent local header number=193,592,599 hash=c758a4..c8df38 td=171,384,783 age=3w21h46m
INFO [04-15|01:57:27.743] Loaded most recent local block number=193,592,472 hash=0f9973..992472 td=171,384,656 age=3w21h46m
INFO [04-15|01:57:27.743] Loaded most recent local snap block number=193,592,599 hash=c758a4..c8df38 td=171,384,783 age=3w21h46m
WARN [04-15|01:57:27.763] Enabling snapshot recovery chainhead=193,592,472 diskbase=193,592,472
INFO [04-15|01:57:27.764] loaded genesis block from database number=22,207,817 hash=7d237d..c07986
INFO [04-15|01:57:27.764] Initialized transaction indexer limit=0
INFO [04-15|01:57:27.879] Using leveldb as the backing database
INFO [04-15|01:57:27.879] Allocated cache and file handles database=/home/user/.arbitrum/arb1/nitro/arbitrumdata cache=16.00MiB handles=16
INFO [04-15|01:57:28.144] Using LevelDB as the backing database
INFO [04-15|01:57:28.178] Using leveldb as the backing database
INFO [04-15|01:57:28.178] Allocated cache and file handles database=/home/user/.arbitrum/arb1/nitro/classic-msg cache=16.00MiB handles=16 readonly=true
INFO [04-15|01:57:28.179] Using LevelDB as the backing database
INFO [04-15|01:57:28.184] running as validator txSender= actingAsWallet=nil whitelisted=false strategy=Watchtower
INFO [04-15|01:57:28.191] Starting peer-to-peer node instance=nitro/v2.3.2-064fa11/linux-amd64/go1.20.14
WARN [04-15|01:57:28.191] P2P server will be useless, neither dialing nor listening
INFO [04-15|01:57:28.213] HTTP server started endpoint=[::]:8547 auth=false prefix= cors=* vhosts=*
INFO [04-15|01:57:28.213] New local node record seq=1,713,146,248,213 id=74c642d3240caa0c ip=127.0.0.1 udp=0 tcp=0
INFO [04-15|01:57:28.213] Started P2P networking self=enode://915a1959b8fdfe5e8f5be2e9a11c3590171a336eea01b42d07bae0e964d4de3b8caee4a9da733afcfebf67cec648a124d3a8b8cddf6cd9043c1728a682db6e33@127.0.0.1:0
INFO [04-15|01:57:28.238] rpc response method=eth_call logId=13 err="execution reverted" result=""0x"" attempt=0 args="[{"from":"0x0000000000000000000000000000000000000000","input":"0xf63a434a0000000000000000000000000000000000000000000000000000000000000000","to":"0x5ef0d09d1e6204141b4d37530808ed19f60fba35"}, "latest"]" errorData=null
INFO [04-15|01:57:28.272] validation not set up err="timeout trying to connect lastError: dial tcp :80: connect: connection refused"
INFO [04-15|01:57:28.334] latest assertion not yet in our node staker=0x0000000000000000000000000000000000000000 assertion=13188 state="{BlockHash:0x5e034aa3599073080d5d98fb72120cc7ac57f4364bdf28538d9a7873658f13b5 SendRoot:0x86f266ca2d5b0372d813e9ac8f4de941f2d6a072fc3817d5eb70440a4a881889 Batch:587258 PosInBatch:0}"
INFO [04-15|01:57:28.338] catching up to chain batches localBatches=582,290 target=587,258
WARN [04-15|01:57:28.508] error reading inbox err="failed to get blobs: expected at least 6 blobs for slot 8702476 but only got 0"
WARN [04-15|01:57:29.560] error reading inbox err="failed to get blobs: expected at least 6 blobs for slot 8702476 but only got 0"
INFO [04-15|01:57:29.961] created block l2Block=193,592,473 l2BlockHash=329fa3..ab34db
WARN [04-15|01:57:30.617] error reading inbox err="failed to get blobs: expected at least 6 blobs for slot 8702476 but only got 0"
INFO [04-15|01:57:30.962] created block l2Block=193,592,474 l2BlockHash=30c661..e46047
WARN [04-15|01:57:31.671] error reading inbox err="failed to get blobs: expected at least 6 blobs for slot 8702476 but only got 0"
INFO [04-15|01:57:31.963] created block l2Block=193,592,475 l2BlockHash=f6a06a..473b26
WARN [04-15|01:57:32.724] error reading inbox err="failed to get blobs: expected at least 6 blobs for slot 8702476 but only got 0"
INFO [04-15|01:57:32.964] created block l2Block=193,592,476 l2BlockHash=60676b..b8a284
WARN [04-15|01:57:33.779] error reading inbox err="failed to get blobs: expected at least 6 blobs for slot 8702476 but only got 0"
INFO [04-15|01:57:34.776] created block l2Block=193,592,477 l2BlockHash=364f72..6b6815
WARN [04-15|01:57:34.841] error reading inbox err="failed to get blobs: expected at least 6 blobs for slot 8702476 but only got 0"
INFO [04-15|01:57:35.777] created block l2Block=193,592,478 l2BlockHash=2d5615..634fcc
WARN [04-15|01:57:35.896] error reading inbox err="failed to get blobs: expected at least 6 blobs for slot 8702476 but only got 0"
INFO [04-15|01:57:36.777] created block l2Block=193,592,479 l2BlockHash=b24185..59a02c
WARN [04-15|01:57:36.960] error reading inbox err="failed to get blobs: expected at least 6 blobs for slot 8702476 but only got 0"

@ZYS980327

@nisdas @hdiass I replaced the --parent-chain.blob-client.beacon-url flag in my command, switching from my local Prysm RPC to QuickNode's Ethereum beacon RPC. It now looks like it's syncing, and initializing from the snapshot starts without problems.

INFO [04-15|02:14:46.197] Unindexing transactions blocks=19,587,000 txs=22,043,742 total=67,362,073 elapsed=6m1.193s
INFO [04-15|02:14:46.730] created block l2Block=193,600,621 l2BlockHash=391b29..83c418
INFO [04-15|02:14:47.153] latest assertion not yet in our node staker=0x0000000000000000000000000000000000000000 assertion=13189 state="{BlockHash:0x1c36f86ccde2f6c2c07abfd1a6d1b77e4c66bea1e6cc5e3a56bc662b8d4db456 SendRoot:0x2dd0cab6836e0433d8dd581770f08487fd79f83c4bd59314aaf956abb0e0d74d Batch:587272 PosInBatch:758}"
INFO [04-15|02:14:47.290] catching up to chain batches localBatches=582,609 target=587,273

@ZYS980327

@nisdas Hi, I still want to use a local beacon RPC. How should I modify my prysm setup?

@nisdas
Contributor

nisdas commented Apr 15, 2024

@ZYS980327 Hey, after the Arbitrum node is synced you can switch back to your local prysm node. You only need the archival blobs if the snapshot is old.

@nisdas
Contributor

nisdas commented Apr 15, 2024

@miki-bgd-011 Do you have any specific logs for this?

@ZYS980327

@nisdas
INFO [04-15|04:08:48.545] latest assertion not yet in our node staker=0x0000000000000000000000000000000000000000 assertion=13191 state="{BlockHash:0x603c342a38a945b720e14601c6ae1a65257766ae93fe2aef4dd21e26b1ba77a2 SendRoot:0xdc2194034507b095e60dcffacb8a5e1d7713b2919f94e30de12b4233d8956554 Batch:587299 PosInBatch:0}"
INFO [04-15|04:08:49.143] created block l2Block=193,859,337 l2BlockHash=5dfeec..8060e2
INFO [04-15|04:08:50.143] created block l2Block=193,859,399 l2BlockHash=046673..93cce4
INFO [04-15|04:08:50.944] catching up to chain batches localBatches=584,280 target=587,299
INFO [04-15|04:08:51.146] created block l2Block=193,859,463 l2BlockHash=060a19..4d0bfb
INFO [04-15|04:08:52.147] created block l2Block=193,859,466 l2BlockHash=6fded8..03cc3d
INFO [04-15|04:08:53.148] created block l2Block=193,859,501 l2BlockHash=07db3b..87af59
INFO [04-15|04:08:54.148] created block l2Block=193,859,552 l2BlockHash=801eec..0e4b55
INFO [04-15|04:08:55.149] created block l2Block=193,859,587 l2BlockHash=c4bec7..ad4eea
INFO [04-15|04:08:56.157] created block l2Block=193,859,628 l2BlockHash=bfa06f..f110cd
INFO [04-15|04:08:57.157] created block l2Block=193,859,690 l2BlockHash=ee2ee3..57d9b2
INFO [04-15|04:08:58.157] created block l2Block=193,859,757 l2BlockHash=fe04a7..dc46c4
INFO [04-15|04:08:59.158] created block l2Block=193,859,830 l2BlockHash=5f6a0e..a86d3a
INFO [04-15|04:09:00.158] created block l2Block=193,859,862 l2BlockHash=c679fb..3ada00
INFO [04-15|04:09:01.159] created block l2Block=193,859,918 l2BlockHash=bb36e7..f739a9
INFO [04-15|04:09:02.159] created block l2Block=193,859,983 l2BlockHash=04ee8d..5c5806
INFO [04-15|04:09:03.160] created block l2Block=193,860,049 l2BlockHash=ec4ef2..0bf0df
INFO [04-15|04:09:04.160] created block l2Block=193,860,090 l2BlockHash=b4da43..443822
INFO [04-15|04:09:05.160] created block l2Block=193,860,143 l2BlockHash=6dafd6..8844b1
INFO [04-15|04:09:06.162] created block l2Block=193,860,181 l2BlockHash=cd7b7f..8e42d0
INFO [04-15|04:09:07.162] created block l2Block=193,860,251 l2BlockHash=dc33db..3cd5b4
INFO [04-15|04:09:08.163] created block l2Block=193,860,301 l2BlockHash=6a6531..d88671
INFO [04-15|04:09:09.164] created block l2Block=193,860,358 l2BlockHash=6036b6..994567
INFO [04-15|04:09:10.164] created block l2Block=193,860,422 l2BlockHash=25cee4..9a9e32
INFO [04-15|04:09:11.165] created block l2Block=193,860,488 l2BlockHash=6cd25b..f03507
INFO [04-15|04:09:12.165] created block l2Block=193,860,557 l2BlockHash=2f1f69..d0b812
INFO [04-15|04:09:13.166] created block l2Block=193,860,597 l2BlockHash=deadf0..0583e5
INFO [04-15|04:09:14.166] created block l2Block=193,860,660 l2BlockHash=8bac0c..58eb9f
INFO [04-15|04:09:15.167] created block l2Block=193,860,727 l2BlockHash=c76522..198495
INFO [04-15|04:09:16.168] created block l2Block=193,860,784 l2BlockHash=6e38a9..c00150
INFO [04-15|04:09:17.170] created block l2Block=193,860,852 l2BlockHash=a7d1e7..f8e4e4
INFO [04-15|04:09:18.170] created block l2Block=193,860,893 l2BlockHash=f6ca67..990ebb
INFO [04-15|04:09:19.171] created block l2Block=193,860,959 l2BlockHash=1f23a0..5e2fc5
INFO [04-15|04:09:20.171] created block l2Block=193,861,039 l2BlockHash=4bf55c..54e8b6
INFO [04-15|04:09:21.171] created block l2Block=193,861,103 l2BlockHash=02fffe..f3ae48
INFO [04-15|04:09:22.171] created block l2Block=193,861,170 l2BlockHash=d3f6d4..6fb5bd
INFO [04-15|04:09:23.174] created block l2Block=193,861,219 l2BlockHash=0d59e5..b4250b
INFO [04-15|04:09:24.175] created block l2Block=193,861,288 l2BlockHash=7d3986..0fa432
INFO [04-15|04:09:25.176] created block l2Block=193,861,362 l2BlockHash=d42c31..f6d7de
INFO [04-15|04:09:26.176] created block l2Block=193,861,431 l2BlockHash=47d00e..184dbe
INFO [04-15|04:09:27.177] created block l2Block=193,861,510 l2BlockHash=417799..2c24b5
INFO [04-15|04:09:28.177] created block l2Block=193,861,591 l2BlockHash=103965..de5158
INFO [04-15|04:09:29.178] created block l2Block=193,861,629 l2BlockHash=2c2372..3510aa
INFO [04-15|04:09:30.179] created block l2Block=193,861,690 l2BlockHash=a35411..726f8a
INFO [04-15|04:09:31.180] created block l2Block=193,861,732 l2BlockHash=b9e16c..0bcfbe
INFO [04-15|04:09:32.181] created block l2Block=193,861,820 l2BlockHash=a3938e..7d908d
INFO [04-15|04:09:33.182] created block l2Block=193,861,889 l2BlockHash=e8597e..c0985e
INFO [04-15|04:09:34.182] created block l2Block=193,861,963 l2BlockHash=eef03e..5d6836
INFO [04-15|04:09:35.183] created block l2Block=193,862,034 l2BlockHash=644601..2978fa
WARN [04-15|04:09:35.782] error reading inbox err="failed to get blobs: error calling beacon client in blobSidecars: unexpected end of JSON input"

@ZYS980327

@nisdas But I don't know how to restart it; I can only add --init.url and resync from the snapshot.


@ZYS980327

@nisdas And after starting the Arbitrum node, connections to port 8547 to fetch block data are refused.

@nisdas
Contributor

nisdas commented Apr 15, 2024

@ZYS980327 You would just need to replace this flag: --parent-chain.blob-client.beacon-url
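
In other words, point the flag at an archival provider while catching up, then point it back at the local beacon node once synced (a sketch; the archival URL is a placeholder, the local URL is the one from the command below):

# while catching up from an old snapshot (placeholder archival endpoint)
--parent-chain.blob-client.beacon-url=https://archival-beacon.example.com

# once caught up, switch back to the local beacon node
--parent-chain.blob-client.beacon-url=http://10.150.20.11:3500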

@ZYS980327

@nisdas docker run -d --privileged --rm -it -v /usr/local/nitro-snap/:/usr/local/nitro-snap/ -p 0.0.0.0:8547:8547 -p 0.0.0.0:8548:8548 offchainlabs/nitro-node:v2.3.2-064fa11 --parent-chain.connection.url http://10.150.20.11:8545 --chain.id=42161 --parent-chain.blob-client.beacon-url=http://10.150.20.11:3500 --http.api=net,web3,eth,arb,debug --http.corsdomain=* --http.addr=0.0.0.0 --http.vhosts=* --init.url="file:///usr/local/nitro-snap/nitro-pruned.tar" --init.download-path=/usr/local/nitro-snap/snapshot-8547.tar
After switching to the local one, it still syncs from the original snapshot, port 8547 refuses connections, and the blobs still cannot be fetched. I don't know if --init.download-path is doing anything; no such file has been generated so far.

@nisdas
Contributor

nisdas commented Apr 15, 2024

Is the prysm node running? What are your prysm and arbitrum logs?

@ZYS980327

[screenshot]

@ZYS980327

Prysm:

@ZYS980327

[screenshot]

@ZYS980327

@nisdas It's always slot 8702476.

@nisdas
Contributor

nisdas commented Apr 15, 2024

Does your prysm node have peers? It appears to be stuck for some reason.

@ZYS980327

Yes.
[screenshot]

@ZYS980327

@nisdas Prysm looks normal.

@nisdas
Contributor

nisdas commented Apr 15, 2024

@ZYS980327 This particular slot was 3 weeks old:
https://beaconcha.in/slot/8702746

Just to confirm, you are just restarting the nitro node and swapping in the local beacon RPC URL?

@ZYS980327

@nisdas Yes, and --init.download-path

@ZYS980327

@nisdas Okay, thanks for the answer. How often is the official snapshot uploaded?

@limitrinno

  • Latest Prysm version is v5.0.3.
  • geth version 1.13.14-stable-2bd6bd01

I'm encountering the same issue as you and am still working on resolving it. A fresh synchronization keeps throwing the same error as yours, but my other node, which went through the upgrade in place, runs smoothly with no issues at all.

@ZYS980327

@limitrinno Can you share your execution commands and the configured geth and prysm startup commands or environments?

@hdiass
Author

hdiass commented Apr 15, 2024

This doesn't work with Chainstack, so the instructions at that URL for beacon chain providers are wrong.
It's bad that these requirements are not clearly documented so that we can rely entirely on our own RPCs.

@hdiass
Author

hdiass commented Apr 15, 2024

The only way I found to sync this was getting an updated snapshot.

@limitrinno

The only way I found to sync this was getting an updated snapshot.

I just downloaded the snapshot today, but I'm still encountering the same error. I'm currently trying with Ankr's API.

@limitrinno

@limitrinno Can you share your execution commands and the configured geth and prysm startup commands or environments?

I'm currently using the latest version of the Arb Docker container on Ubuntu 20. Both sets of environment parameters and configurations are the same. The only difference is that the problematic one started synchronizing recently.

@kaber2

kaber2 commented Apr 15, 2024

I fully backfilled blob data in prysm:

prysm[331287]: time="2024-04-15 14:12:56" level=info msg="Backfill batches processed" batchesRemaining=12 importable=2 imported=2 prefix=backfill
prysm[331287]: time="2024-04-15 14:12:57" level=info msg="Backfill batches processed" batchesRemaining=11 importable=1 imported=1 prefix=backfill
prysm[331287]: time="2024-04-15 14:12:58" level=info msg="Backfill batches processed" batchesRemaining=11 importable=0 imported=0 prefix=backfill
prysm[331287]: time="2024-04-15 14:12:58" level=info msg="Backfill batches processed" batchesRemaining=9 importable=2 imported=2 prefix=backfill
prysm[331287]: time="2024-04-15 14:13:00" level=info msg="Backfill batches processed" batchesRemaining=8 importable=1 imported=1 prefix=backfill
prysm[331287]: time="2024-04-15 14:13:00" level=info msg="Backfill batches processed" batchesRemaining=7 importable=1 imported=1 prefix=backfill
prysm[331287]: time="2024-04-15 14:13:00" level=info msg="Backfill batches processed" batchesRemaining=7 importable=0 imported=0 prefix=backfill
prysm[331287]: time="2024-04-15 14:13:01" level=info msg="Backfill batches processed" batchesRemaining=5 importable=2 imported=2 prefix=backfill
prysm[331287]: time="2024-04-15 14:13:01" level=info msg="Backfill batches processed" batchesRemaining=5 importable=0 imported=0 prefix=backfill
prysm[331287]: time="2024-04-15 14:13:03" level=info msg="Backfill batches processed" batchesRemaining=0 importable=1 imported=1 prefix=backfill

Still receiving the same error for slot 8702476. Any suggestions besides using an external RPC?

@kaber2

kaber2 commented Apr 15, 2024

Tried using an external RPC node:

--parent-chain.blob-client.beacon-url=https://ethereum-mainnet.core.chainstack.com/beacon/xxxxxxxxxxxxxxxxxxxxxxx

Still getting the same error.

@sbond14

sbond14 commented Apr 15, 2024

Seems like this could very easily be fixed by the team uploading snapshots more frequently than every couple of months...

Has no one found a solution?

@kaber2

kaber2 commented Apr 15, 2024

Agreed.

Generally, I tend to think that this is an issue with the snapshot itself. On another node, my prysm blob directory has exactly the same entries as on this new node, and I synced successfully from it just two or three weeks ago using the same nitro version.

@nisdas
Contributor

nisdas commented Apr 16, 2024

@nisdas But it keeps resyncing from the snapshot, and after the replacement it doesn't start from the block where I stopped

So nitro will ignore the snapshot if there already is a database, @ZYS980327. Is it possible you are running from a new directory on each restart?

@ZYS980327

@nisdas The command is the same every time; occasionally the port number may change: docker run -d --privileged --rm -it -v /usr/local/nitro-snap/:/usr/local/nitro-snap/ -p 0.0.0.0:8557:8557 -p 0.0.0.0:8558:8558 offchainlabs/nitro-node:v2.3.2-064fa11 --parent-chain.connection.url http://10.150.20.11:8545 --chain.id=42161 --parent-chain.blob-client.beacon-url=http://10.150.20.11:3500 --http.api=net,web3,eth,arb,debug --http.corsdomain=* --http.addr=0.0.0.0 --http.vhosts=* --init.url="file:///nitro-pruned.tar" ...

@ZYS980327

@nisdas On Nitro 2.3.3, the developers said the handling of missing blobs was improved, but the snapshot still hits the same slot that was problematic three weeks ago, so 2.3.3 doesn't help.

@nisdas
Contributor

nisdas commented Apr 16, 2024

@ZYS980327 You can't run the image with --rm; it removes all the data, including the nitro DB, on a restart. I suggest running this all again without that flag.
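
For example, the run command above could be adjusted along these lines (a sketch based on the command posted earlier: --rm is dropped and the data directory is mounted into a named volume so the database survives restarts):

# no --rm; chain data lives in the named volume "nitro-data"
docker run -d -it --name nitro-node \
  -v nitro-data:/home/user/.arbitrum \
  -v /usr/local/nitro-snap/:/usr/local/nitro-snap/ \
  -p 0.0.0.0:8547:8547 -p 0.0.0.0:8548:8548 \
  offchainlabs/nitro-node:v2.3.2-064fa11 \
  --parent-chain.connection.url http://10.150.20.11:8545 \
  --parent-chain.blob-client.beacon-url=http://10.150.20.11:3500 \
  --chain.id=42161 \
  --http.api=net,web3,eth,arb,debug --http.corsdomain=* --http.addr=0.0.0.0 --http.vhosts=* \
  --init.url="file:///usr/local/nitro-snap/nitro-pruned.tar"

On later restarts, nitro should find the existing database in the volume, ignore --init.url, and resume from where it stopped.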

@ZYS980327

@nisdas OK, I will try this.

@ZYS980327

@nisdas I feel like the official snapshot should be updated soon

@kaber2

kaber2 commented Apr 16, 2024

@nisdas I see that you opened a feature request with prysm. Just for my understanding, in what way does your requested feature differ from --enable-experimental-backfill=true? I was under the impression that this flag was introduced for exactly that purpose.

@kaber2

kaber2 commented Apr 16, 2024

So I've tried Chainstack and Ankr:

--parent-chain.blob-client.beacon-url=https://rpc.ankr.com/premium-http/eth_beacon/{key hidden}

In both cases, it does not work:

WARN [04-16|12:21:30.241] error reading inbox                      err="failed to get blobs: error calling beacon client in blobSidecars: unexpected end of JSON input"

This is starting to get very frustrating.

@limitrinno

So I've tried Chainstack and Ankr:

--parent-chain.blob-client.beacon-url=https://rpc.ankr.com/premium-http/eth_beacon/{key hidden}

In both cases, it does not work:

WARN [04-16|12:21:30.241] error reading inbox                      err="failed to get blobs: error calling beacon client in blobSidecars: unexpected end of JSON input"

This is starting to get very frustrating.

I've also tried different APIs. Now I'm planning to move the data from the currently running healthy node to a new node. I'm pruning it first, as it's too large, about a terabyte. Directly transferring it over seems like the best solution at the moment.

@mmikolajczykm

Any update on this issue? It's been almost a week and I still can't synchronize one of my nodes after a cleanup.
As parent services I'm using geth + teku. I've tried a few node providers, with the same result.

@sbond14

sbond14 commented Apr 17, 2024

Any update on this issue? It's been almost a week and I still can't synchronize one of my nodes after a cleanup. As parent services I'm using geth + teku. I've tried a few node providers, with the same result.

Nope... still unable to sync either of my nodes after countless attempts on both machines. If only the foundation would just release a new snapshot...

@cawabunga

cawabunga commented Apr 17, 2024

I've just set up a server and have the same issue. I use Lighthouse + Nethermind. I can confirm that the issue cannot be resolved by using a Chainstack archive node.

upd: I tried Ankr, and it worked; the node started to sync. It cost $10 ($10 is the minimum top-up allowed). But I'm seeing random long sync freezes. I suppose Ankr is just rate-limiting me, although their site says there were only 600 requests in the last hour.

@NicolasWent

Hello,

I used the quicknode free tier and was able to sync my node in around 24h-48h.
I did the following:

  • Completely removed the docker files
  • Re-ran using quicknode as the beacon layer and my reth archive node as the execution layer

It worked for me: within around 24h-48h I was able to replace the quicknode beacon layer with my own beacon layer, and it is now running well at the latest blocks.

@cawabunga

cawabunga commented Apr 19, 2024

I managed to sync using Ankr. It took 24 hours. In case somebody needs an archive beacon node, you can use mine, since I have more credits there than I needed:

https://rpc.ankr.com/premium-http/eth_beacon/4fb934a442d36b0b08ba76c2f9c86290b67f2c516b19c6e5cfcd3e57d9af5169

upd: Sorry, I had to disable the node because I ran out of credits.

@mmikolajczykm

There is a new snapshot available at https://snapshot.arbitrum.foundation/index.html that solves the issue (at least for now).
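
Since nitro ignores --init.url once a database already exists (as noted above), picking up the new snapshot means starting from an empty data directory (a sketch; the volume name is a placeholder and the endpoints are the ones from the commands in this thread):

# start with a fresh, empty volume so --init.url is honored
docker volume create nitro-data-fresh
docker run -d --name nitro-node \
  -v nitro-data-fresh:/home/user/.arbitrum \
  -p 0.0.0.0:8547:8547 \
  offchainlabs/nitro-node:v2.3.3-6a1c1a7 \
  --parent-chain.connection.url=http://10.150.20.11:8545 \
  --parent-chain.blob-client.beacon-url=http://10.150.20.11:3500 \
  --chain.id=42161 \
  --init.url="https://snapshot.arbitrum.foundation/arb1/nitro-pruned.tar"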

@zhy827827

Same issue:

image=offchainlabs/nitro-node:v2.3.3-6a1c1a7

docker run -d -p 0.0.0.0:8547:8547 -p 0.0.0.0:8548:8548 -v /opt/arbitrum-nitro/:/home/user/.arbitrum $image --parent-chain.connection.url http://192.1.1.10:8545 --parent-chain.blob-client.beacon-url http://192.1.1.20:3500 --chain.id=42161 --http.api=net,web3,eth,debug,arb,txpool --http.corsdomain=* --http.addr=0.0.0.0 --http.vhosts=* --execution.caching.archive --ws.addr=0.0.0.0 --ws.api=net,web3,eth,arb,debug,txpool --ws.expose-all --ws.origins=* --execution.rpc.evm-timeout 30s --log-level 4 --init.url="https://snapshot.arbitrum.foundation/arb1/nitro-archive.tar"

When syncing reached block height 190,281,129, it stopped syncing.

There are no error logs:

INFO [05-14|09:44:56.260] validation not set up                    err="timeout trying to connect lastError: dial tcp :80: connect: connection refused"
INFO [05-14|09:44:56.284] latest assertion not yet in our node     staker=0x0000000000000000000000000000000000000000 assertion=13889 state="{BlockHash:0xa92be14c0d5592493af6479a4c1fefbe7e8d891799f1348fe2da92a09a2096e6 SendRoot:0x499f9cdd6b06fee7eebdd72ec5c9bd2ad6906a05eadff56b818b1d431fcf65a4 Batch:596630 PosInBatch:0}"
INFO [05-14|09:44:56.318] catching up to chain batches             localBatches=579,468 target=596,630
DEBUG[05-14|09:44:58.102] Couldn't get external IP                 err="no UPnP or NAT-PMP router discovered" interface="UPnP or NAT-PMP"
DEBUG[05-14|09:45:55.812] Ancient blocks frozen already            number=190,281,129 hash=541646..1c5be2 frozen=190,191,130
INFO [05-14|09:45:56.520] latest assertion not yet in our node     staker=0x0000000000000000000000000000000000000000 assertion=13889 state="{BlockHash:0xa92be14c0d5592493af6479a4c1fefbe7e8d891799f1348fe2da92a09a2096e6 SendRoot:0x499f9cdd6b06fee7eebdd72ec5c9bd2ad6906a05eadff56b818b1d431fcf65a4 Batch:596630 PosInBatch:0}"
INFO [05-14|09:45:58.000] catching up to chain batches             localBatches=579,468 target=596,630
DEBUG[05-14|09:46:55.813] Ancient blocks frozen already            number=190,281,129 hash=541646..1c5be2 frozen=190,191,130
DEBUG[05-14|09:46:56.101] Couldn't get external IP                 err="no UPnP or NAT-PMP router discovered" interface="UPnP or NAT-PMP"
INFO [05-14|09:46:56.540] latest assertion not yet in our node     staker=0x0000000000000000000000000000000000000000 assertion=13889 state="{BlockHash:0xa92be14c0d5592493af6479a4c1fefbe7e8d891799f1348fe2da92a09a2096e6 SendRoot:0x499f9cdd6b06fee7eebdd72ec5c9bd2ad6906a05eadff56b818b1d431fcf65a4 Batch:596630 PosInBatch:0}"
INFO [05-14|09:46:58.860] catching up to chain batches             localBatches=579,468 target=596,630
DEBUG[05-14|09:47:40.663] Served eth_blockNumber                   conn=172.17.0.1:24846 reqid=1 duration="122.767µs"
DEBUG[05-14|09:47:41.702] Served eth_blockNumber                   conn=172.17.0.1:62008 reqid=1 duration="70.361µs"
DEBUG[05-14|09:47:42.508] Served eth_blockNumber                   conn=172.17.0.1:62014 reqid=1 duration="67.251µs"
DEBUG[05-14|09:47:43.139] Served eth_blockNumber                   conn=172.17.0.1:62028 reqid=1 duration="73.336µs"
DEBUG[05-14|09:47:43.711] Served eth_blockNumber                   conn=172.17.0.1:62034 reqid=1 duration="64.182µs"
DEBUG[05-14|09:47:44.298] Served eth_blockNumber                   conn=172.17.0.1:62040 reqid=1 duration="61.969µs"
DEBUG[05-14|09:47:55.813] Ancient blocks frozen already            number=190,281,129 hash=541646..1c5be2 frozen=190,191,130
INFO [05-14|09:47:56.573] latest assertion not yet in our node     staker=0x0000000000000000000000000000000000000000 assertion=13889 state="{BlockHash:0xa92be14c0d5592493af6479a4c1fefbe7e8d891799f1348fe2da92a09a2096e6 SendRoot:0x499f9cdd6b06fee7eebdd72ec5c9bd2ad6906a05eadff56b818b1d431fcf65a4 Batch:596630 PosInBatch:0}"
INFO [05-14|09:47:58.959] catching up to chain batches             localBatches=579,468 target=596,630
DEBUG[05-14|09:48:55.814] Ancient blocks frozen already            number=190,281,129 hash=541646..1c5be2 frozen=190,191,130
DEBUG[05-14|09:48:56.102] Couldn't get external IP                 err="no UPnP or NAT-PMP router discovered" interface="UPnP or NAT-PMP"
INFO [05-14|09:48:56.603] latest assertion not yet in our node     staker=0x0000000000000000000000000000000000000000 assertion=13889 state="{BlockHash:0xa92be14c0d5592493af6479a4c1fefbe7e8d891799f1348fe2da92a09a2096e6 SendRoot:0x499f9cdd6b06fee7eebdd72ec5c9bd2ad6906a05eadff56b818b1d431fcf65a4 Batch:596630 PosInBatch:0}"
INFO [05-14|09:48:59.023] catching up to chain batches             localBatches=579,468 target=596,630

DEBUG[05-14|09:49:55.815] Ancient blocks frozen already            number=190,281,129 hash=541646..1c5be2 frozen=190,191,130
INFO [05-14|09:49:56.637] latest assertion not yet in our node     staker=0x0000000000000000000000000000000000000000 assertion=13889 state="{BlockHash:0xa92be14c0d5592493af6479a4c1fefbe7e8d891799f1348fe2da92a09a2096e6 SendRoot:0x499f9cdd6b06fee7eebdd72ec5c9bd2ad6906a05eadff56b818b1d431fcf65a4 Batch:596630 PosInBatch:0}"
INFO [05-14|09:49:59.111] catching up to chain batches             localBatches=579,468 target=596,630

How can I fix this issue?

@amaurer

amaurer commented May 26, 2024

Switched beacon endpoint to Quicknode and started syncing. Will switch back to local (non-archive) node when it's closer to head.
--parent-chain.blob-client.beacon-url=....

@hdiass hdiass closed this as completed May 26, 2024