
Running Docker results in 502 Bad Gateway #26

Closed
jimmysong opened this issue Dec 20, 2018 · 18 comments
@jimmysong

I'm running the Docker command per the README.md. The result is a page with a 502 Bad Gateway error, due to a missing endpoint at /api/blocks/:1, when going to the web page on port 8080.

@greenaddress
Collaborator

@jimmysong I tried to reproduce this locally but couldn't; going to the endpoint works for me. If you are using Liquid or testnet, you have to add the network name before /api; for example, for Liquid: http://localhost:8082/liquid/api/blocks/:1

If not, could you provide the steps you tried?

@jimmysong
Author

I've tried both mainnet and testnet (ports 8080 and 8081, respectively). The API URLs all give a 502 Bad Gateway from Docker. Examples:

http://localhost:8080/api/blocks/:1
http://localhost:8081/testnet/api/blocks/:1
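As an aside, the URL scheme behind these two examples (mainnet served at the root, other networks with a path prefix before /api) can be expressed as a tiny helper. This is purely illustrative; the ports and prefixes are taken from the examples in this thread, and the function name is my own:

```python
# Ports and path prefixes as used in this thread's examples:
# mainnet on 8080 at the root, testnet on 8081 under /testnet,
# liquid on 8082 under /liquid.
PORTS = {"mainnet": 8080, "testnet": 8081, "liquid": 8082}
PREFIXES = {"mainnet": "", "testnet": "/testnet", "liquid": "/liquid"}

def api_url(network: str, endpoint: str) -> str:
    """Build the Esplora API URL for a given network and endpoint path."""
    return f"http://localhost:{PORTS[network]}{PREFIXES[network]}/api{endpoint}"

print(api_url("mainnet", "/blocks/:1"))  # http://localhost:8080/api/blocks/:1
print(api_url("liquid", "/blocks/:1"))   # http://localhost:8082/liquid/api/blocks/:1
```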

Here's the log output from the Docker container. I have another one for mainnet Esplora, which looks similar.

$ docker logs esplora-testnet
Enabled mode explorer
/usr/lib/python2.7/dist-packages/supervisor/options.py:298: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
  'Supervisord is running as root and it is searching '
2018-12-20 23:50:05,657 CRIT Supervisor running as root (no user in config file)
2018-12-20 23:50:05,657 INFO Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2018-12-20 23:50:05,787 INFO RPC interface 'supervisor' initialized
2018-12-20 23:50:05,787 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2018-12-20 23:50:05,787 INFO supervisord started with pid 22
2018-12-20 23:50:06,791 INFO spawned: 'nginx' with pid 25
2018-12-20 23:50:06,824 INFO spawned: 'bitcoind' with pid 26
2018-12-20 23:50:06,875 INFO spawned: 'electrs' with pid 27
2018-12-20 23:50:07,877 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-12-20 23:50:07,878 INFO success: bitcoind entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-12-20 23:50:07,878 INFO success: electrs entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

@metacoin

I'm experiencing the same issue. Followed the README but getting a 502 bad gateway on every endpoint.

@greenaddress
Collaborator

@jimmysong How long have you waited, and what are the specs of the box? The Docker image synchronizes from scratch and isn't particularly tuned to do so quickly (for example, no dbcache). Once Bitcoin Core synchronizes, electrs should kick in for a few hours, then open up its port, and nginx should be able to respond correctly.

Can you tail debug.log from the bitcoin data_dir and electrs-liquid.log from the logs directory?

You should be able to do this either from outside Docker or from within:

docker ps

lists the Docker instances that are running. Then:

docker exec -it $HASH_OF_INSTANCE_FROM_PS bash

and inside the container, tail -f /data/bitcoin/debug.log for mainnet or /data/bitcoin/testnet3/debug.log for testnet.

Once you are inside the Docker image there is also a cli shell script, a pre-parametrized shorthand for bitcoin-cli (https://github.com/Blockstream/esplora/blob/master/cli.sh.in).

@jimmysong
Author

I've had esplora running for over a week and it was synced in about 6-7 hours.

$ tail -f /data/bitcoin/debug.log
2018-12-29T17:27:00Z Pre-allocating up to position 0x700000 in rev01478.dat
2018-12-29T17:27:00Z UpdateTip: new best=0000000000000000000f2b83d324b241449149a4bae463cdee59e472ede7ead8 height=556097 version=0x2fffc000 log2_work=90.180254 tx=368644080 date='2018-12-29T17:26:33Z' progress=1.000000 cache=76.2MiB(390864txo) warning='30 of last 100 blocks have unexpected version'
2018-12-29T17:27:04Z Pre-allocating up to position 0x4000000 in blk01478.dat
2018-12-29T17:27:04Z UpdateTip: new best=000000000000000000004ff2ecb38c0c1d2c18c59a95623271eb5d810ca0ccd6 height=556098 version=0x20000000 log2_work=90.180277 tx=368645069 date='2018-12-29T17:27:00Z' progress=1.000000 cache=76.2MiB(390519txo) warning='30 of last 100 blocks have unexpected version'
2018-12-29T17:29:43Z UpdateTip: new best=0000000000000000002ab1810fd437b37b45d9c9e9cb873db19949b1802a910f height=556099 version=0x20000000 log2_work=90.180299 tx=368645557 date='2018-12-29T17:29:40Z' progress=1.000000 cache=76.2MiB(391293txo) warning='30 of last 100 blocks have unexpected version'
2018-12-29T17:52:45Z UpdateTip: new best=00000000000000000010c6cd24af4ec5aae61390e5163d87088f688968c9a310 height=556100 version=0x20000000 log2_work=90.180322 tx=368648645 date='2018-12-29T17:52:15Z' progress=1.000000 cache=77.2MiB(398811txo) warning='30 of last 100 blocks have unexpected version'
2018-12-29T17:59:18Z UpdateTip: new best=0000000000000000000b252745348022a5a4115e8f94c10f070b15cc90ca084d height=556101 version=0x20000000 log2_work=90.180344 tx=368651343 date='2018-12-29T17:58:48Z' progress=1.000000 cache=77.4MiB(400932txo) warning='29 of last 100 blocks have unexpected version'
2018-12-29T18:17:13Z UpdateTip: new best=0000000000000000000208b795cacb6489140b30b0e267dac7c966f72a62a28f height=556102 version=0x20000000 log2_work=90.180367 tx=368654146 date='2018-12-29T18:17:06Z' progress=1.000000 cache=78.5MiB(409729txo) warning='29 of last 100 blocks have unexpected version'
2018-12-29T18:21:04Z UpdateTip: new best=0000000000000000001dfc8a581ba97c31d9dd8478c8113bc8f48798a441b4d1 height=556103 version=0x20c00000 log2_work=90.18039 tx=368655570 date='2018-12-29T18:20:43Z' progress=1.000000 cache=78.4MiB(408814txo) warning='30 of last 100 blocks have unexpected version'
2018-12-29T18:42:53Z UpdateTip: new best=0000000000000000001f8b64fa3369f347766bfd15148b1b796f9d125a9dfd31 height=556104 version=0x20000000 log2_work=90.180412 tx=368658547 date='2018-12-29T18:42:23Z' progress=1.000000 cache=79.3MiB(416034txo) warning='29 of last 100 blocks have unexpected version'
$ tail -f /data/electrs_mainnet_db/mainnet/mainnet/LOGS
root@c12604fa609d:/data/electrs_bitcoin_db/mainnet/mainnet# tail -f LOG
2018/12/20-23:37:44.587916 7fcc4d7ff700 (Original Log Time 2018/12/20-23:37:44.587862) [rocksdb/db/db_impl_compaction_flush.cc:1781] Compaction nothing to do
2018/12/20-23:37:44.625911 7fcc4cffe700 (Original Log Time 2018/12/20-23:37:44.625875) [rocksdb/db/db_impl_compaction_flush.cc:1424] Calling FlushMemTableToOutputFile with column family
[default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
2018/12/20-23:37:44.625924 7fcc4cffe700 [rocksdb/db/flush_job.cc:301] [default] [JOB 3278] Flushing memtable with next log file: 173
2018/12/20-23:37:44.625964 7fcc4cffe700 EVENT_LOG_v1 {"time_micros": 1545349064625953, "job": 3278, "event": "flush_started", "num_memtables": 1, "num_entries": 1730330, "num_deletes": 0, "memory_usage": 261654424, "flush_reason": "Write Buffer Full"}
2018/12/20-23:37:44.625974 7fcc4cffe700 [rocksdb/db/flush_job.cc:331] [default] [JOB 3278] Level-0 flush table #1267: started
2018/12/20-23:37:55.163622 7fcc4cffe700 EVENT_LOG_v1 {"time_micros": 1545349075163574, "cf_name": "default", "job": 3278, "event": "table_file_creation", "file_number": 1267, "file_size": 204209540, "table_properties": {"data_size": 203484122, "index_size": 1054167, "filter_size": 0, "raw_key_size": 52120040, "raw_average_key_size": 30, "raw_value_size": 171836958, "raw_average_value_size": 99, "num_data_blocks": 42157, "num_entries": 1724952, "filter_policy_name": "", "kDeletedKeys": "0", "kMergeOperands": "0"}}
2018/12/20-23:37:55.163678 7fcc4cffe700 [rocksdb/db/flush_job.cc:371] [default] [JOB 3278] Level-0 flush table #1267: 204209540 bytes OK
2018/12/20-23:37:55.356686 7fcc18fff700 [rocksdb/db/db_impl_write.cc:1373] [default] New memtable created with log file: #173. Immutable memtables: 1.
2018/12/20-23:37:55.356719 7fcc18fff700 [WARN] [rocksdb/db/column_family.cc:743] [default] Stopping writes because we have 2 immutable memtables (waiting for flush), max_write_buffer_number is set to 2
2018/12/20-23:37:55.356809 7fcc4d7ff700 (Original Log Time 2018/12/20-23:37:55.356771) [rocksdb/db/db_impl_compaction_flush.cc:1781] Compaction nothing to do
$ tail -f /data/logs/electrs-bitcoin.log
thread 'bulk_index' panicked at 'failed to send indexed rows: "SendError(..)"', libcore/result.rs:1009:5
thread 'bulk_index' panicked at 'failed to send indexed rows: "SendError(..)"', libcore/result.rs:1009:5
thread 'main' panicked at 'writer panicked: Any', libcore/result.rs:1009:5
Config { log: StdErrLog { verbosity: Error, quiet: false, timestamp: Millisecond, modules: [], writer: "stderr", color_choice: Auto }, network_type: bitcoin, db_path: "/data/electrs_bitcoin_db/mainnet/mainnet", daemon_dir: "/data/bitcoin", daemon_rpc_addr: V4(127.0.0.1:8332), cookie: None, electrum_rpc_addr: V4(127.0.0.1:50001), http_addr: V4(127.0.0.1:3000), monitoring_addr: V4(0.0.0.0:4224), jsonrpc_import: false, index_batch_size: 100, bulk_index_threads: 16, tx_cache_size: 10000, extended_db_enabled: true, prevout_enabled: true }
Config { log: StdErrLog { verbosity: Error, quiet: false, timestamp: Millisecond, modules: [], writer: "stderr", color_choice: Auto }, network_type: testnet, db_path: "/data/electrs_bitcoin_db/testnet/testnet", daemon_dir: "/data/bitcoin/testnet3", daemon_rpc_addr: V4(127.0.0.1:18332), cookie: None, electrum_rpc_addr: V4(127.0.0.1:60001), http_addr: V4(127.0.0.1:3000), monitoring_addr: V4(0.0.0.0:4224), jsonrpc_import: false, index_batch_size: 100, bulk_index_threads: 16, tx_cache_size: 10000, extended_db_enabled: true, prevout_enabled: true }
Config { log: StdErrLog { verbosity: Error, quiet: false, timestamp: Millisecond, modules: [], writer: "stderr", color_choice: Auto }, network_type: bitcoin, db_path: "/data/electrs_bitcoin_db/mainnet/mainnet", daemon_dir: "/data/bitcoin", daemon_rpc_addr: V4(127.0.0.1:8332), cookie: None, electrum_rpc_addr: V4(127.0.0.1:50001), http_addr: V4(127.0.0.1:3000), monitoring_addr: V4(0.0.0.0:4224), jsonrpc_import: false, index_batch_size: 100, bulk_index_threads: 16, tx_cache_size: 10000, extended_db_enabled: true, prevout_enabled: true }
Config { log: StdErrLog { verbosity: Error, quiet: false, timestamp: Millisecond, modules: [], writer: "stderr", color_choice: Auto }, network_type: bitcoin, db_path: "/data/electrs_bitcoin_db/mainnet/mainnet", daemon_dir: "/data/bitcoin", daemon_rpc_addr: V4(127.0.0.1:8332), cookie: None, electrum_rpc_addr: V4(127.0.0.1:50001), http_addr: V4(127.0.0.1:3000), monitoring_addr: V4(0.0.0.0:4224), jsonrpc_import: false, index_batch_size: 100, bulk_index_threads: 16, tx_cache_size: 10000, extended_db_enabled: true, prevout_enabled: true }
Config { log: StdErrLog { verbosity: Error, quiet: false, timestamp: Millisecond, modules: [], writer: "stderr", color_choice: Auto }, network_type: testnet, db_path: "/data/electrs_bitcoin_db/testnet/testnet", daemon_dir: "/data/bitcoin/testnet3", daemon_rpc_addr: V4(127.0.0.1:18332), cookie: None, electrum_rpc_addr: V4(127.0.0.1:60001), http_addr: V4(127.0.0.1:3000), monitoring_addr: V4(0.0.0.0:4224), jsonrpc_import: false, index_batch_size: 100, bulk_index_threads: 16, tx_cache_size: 10000, extended_db_enabled: true, prevout_enabled: true }
Config { log: StdErrLog { verbosity: Error, quiet: false, timestamp: Millisecond, modules: [], writer: "stderr", color_choice: Auto }, network_type: bitcoin, db_path: "/data/electrs_bitcoin_db/mainnet/mainnet", daemon_dir: "/data/bitcoin", daemon_rpc_addr: V4(127.0.0.1:8332), cookie: None, electrum_rpc_addr: V4(127.0.0.1:50001), http_addr: V4(127.0.0.1:3000), monitoring_addr: V4(0.0.0.0:4224), jsonrpc_import: false, index_batch_size: 100, bulk_index_threads: 16, tx_cache_size: 10000, extended_db_enabled: true, prevout_enabled: true }
Config { log: StdErrLog { verbosity: Error, quiet: false, timestamp: Millisecond, modules: [], writer: "stderr", color_choice: Auto }, network_type: bitcoin, db_path: "/data/electrs_bitcoin_db/mainnet/mainnet", daemon_dir: "/data/bitcoin", daemon_rpc_addr: V4(127.0.0.1:8332), cookie: None, electrum_rpc_addr: V4(127.0.0.1:50001), http_addr: V4(127.0.0.1:3000), monitoring_addr: V4(0.0.0.0:4224), jsonrpc_import: false, index_batch_size: 100, bulk_index_threads: 16, tx_cache_size: 10000, extended_db_enabled: true, prevout_enabled: true }
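As a side note, bitcoind's sync state can be checked mechanically from the progress= field of UpdateTip lines like the ones in the debug.log excerpt above. A minimal sketch (the regex and function name are my own, not part of Esplora):

```python
import re
from typing import Optional

# Matches the progress= field in a bitcoind UpdateTip log line.
PROGRESS_RE = re.compile(r"progress=([0-9.]+)")

def sync_progress(line: str) -> Optional[float]:
    """Return the sync progress from an UpdateTip line, or None if absent."""
    if "UpdateTip" not in line:
        return None
    m = PROGRESS_RE.search(line)
    return float(m.group(1)) if m else None

line = ("2018-12-29T17:27:00Z UpdateTip: new best=... height=556097 "
        "progress=1.000000 cache=76.2MiB(390864txo)")
print(sync_progress(line))  # 1.0
```

A progress of 1.000000, as in the logs above, means bitcoind itself is fully synced, so a lingering 502 points at electrs rather than the node.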

@jimmysong
Author

OK, a little update: it might be because I'm running esplora and esplora-testnet on the same server. Testnet is now working fine, but esplora still has the same error. I think something is wrong with the monitoring port 4224; maybe they conflict somehow?

@greenaddress
Collaborator

@jimmysong As far as I know, Docker shouldn't conflict even if both containers use the same internal ports, as long as you don't export them with -p. In any case, from the logs you provided it seems that electrs had some issues; I'm not sure why.

To increase logging verbosity, you may want to try adding -vvv (say, before --timestamp) to

contrib/supervisord.conf.in:command=/srv/explorer/electrs_{DAEMON}/bin/electrs --timestamp --http-addr 127.0.0.1:3000 --network {ELECTRS_NETWORK} {PARENT_NETWORK} --daemon-dir /data/{DAEMON} --monitoring-addr 0.0.0.0:4224 --db-dir /data/electrs_{DAEMON}_db/{NETWORK}

and then rebuild the Docker images. Alternatively, you can try stopping the instance, deleting electrs_mainnet_db, and restarting.

@metacoin

metacoin commented Dec 31, 2018

Hi, I fixed this problem on my end. After hunting through the logs, the issue was not enough disk space on my device.

2018/12/30-15:13:45.326365 7f3dff1fe700 [ERROR] [rocksdb/db/db_impl_compaction_flush.cc:1463] Waiting after background flush error: IO error: No space left on deviceWhile appending to file: /data/electrs_bitcoin_db/mainnet/mainnet/001433.sst: No space left on deviceAccumulated background error counts: 2
2018/12/30-15:13:46.439186 7f3d5bdfc700 [rocksdb/db/db_impl.cc:398] Shutdown complete

Increasing disk capacity, deleting the database directory, and re-syncing worked.
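Since the failure mode here was silent disk exhaustion, it may be worth checking free space on the volume backing the data directory before re-syncing. A small sketch; /data is the path used inside the Esplora container, and the script falls back to / when that path doesn't exist:

```shell
#!/bin/sh
# Report free space on the filesystem backing the data directory.
DATA_DIR=/data
[ -d "$DATA_DIR" ] || DATA_DIR=/

# df -P gives POSIX single-line output; field 4 is available space in KB.
avail_kb=$(df -P "$DATA_DIR" | awk 'NR==2 {print $4}')
echo "Available on $DATA_DIR: ${avail_kb} KB"
```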

@jimmysong
Author

Looks like the blocks weren't synced or something; my issue has gone away as well. I'm running both on the same Docker server.

@greenaddress
Collaborator

@jimmysong thanks! closing

@RubenWaterman

I'm running into the same issue, and it looks like the paths of the log files have changed since this was first posted? bitcoind seems to be all finished and working:

4-2019-08-19T11:17:30Z UpdateTip: new best=000000000000000000084df42d879fc44c6d108137d308724f52ed8381a0e930 height=590790 version=0x20000000 log2_work=90.981221 tx=446546473 date='2019-08-19T11:16:29Z' progress=0.999999 cache=17.2MiB(126881txo) warning='47 of last 100 blocks have unexpected version'
4-2019-08-19T11:20:18Z Pre-allocating up to position 0x7000000 in blk01762.dat
4-2019-08-19T11:20:18Z UpdateTip: new best=0000000000000000001703d95af77e589a2c17f477bb5fe163269d41a93cf93c height=590791 version=0x20c00000 log2_work=90.981246 tx=446548627 date='2019-08-19T11:20:01Z' progress=1.000000 cache=17.6MiB(129810txo) warning='47 of last 100 blocks have unexpected version'
4-2019-08-19T11:20:47Z UpdateTip: new best=0000000000000000000ad29fd629fa395cd8f13a14968b14c9035af6d7349292 height=590792 version=0x20000000 log2_work=90.981272 tx=446551064 date='2019-08-19T11:20:44Z' progress=1.000000 cache=17.9MiB(132495txo) warning='47 of last 100 blocks have unexpected version'

but when I look into this folder, /data/electrs_mainnet_db/mainnet/mainnet/, I just see a newindex folder containing cache, history and txstore folders, each of which contains some log files. Which one should I be looking at?

and when I look into /data/logs/, I see the electrs, nginx, nodedaemon, prerenderer and tor folders. Inside electrs there are config, current and lock files. My current file has just two lines:

2-Config { log: StdErrLog { verbosity: Error, quiet: false, timestamp: Millisecond, modules: [], writer: "stderr", color_choice: Auto }, network_type: Bitcoin, db_path: "/data/electrs_bitcoin_db/mainnet/mainnet", daemon_dir: "/data/bitcoin", daemon_rpc_addr: V4(127.0.0.1:8332), cookie: None, electrum_rpc_addr: V4(127.0.0.1:50001), http_addr: V4(127.0.0.1:3000), monitoring_addr: V4(0.0.0.0:4224), jsonrpc_import: false, index_batch_size: 100, bulk_index_threads: 8, tx_cache_size: 10000, prevout_enabled: true, cors: None, precache_scripts: Some("/srv/explorer/popular-scripts.txt") }
2-Config { log: StdErrLog { verbosity: Error, quiet: false, timestamp: Millisecond, modules: [], writer: "stderr", color_choice: Auto }, network_type: Bitcoin, db_path: "/data/electrs_bitcoin_db/mainnet/mainnet", daemon_dir: "/data/bitcoin", daemon_rpc_addr: V4(127.0.0.1:8332), cookie: None, electrum_rpc_addr: V4(127.0.0.1:50001), http_addr: V4(127.0.0.1:3000), monitoring_addr: V4(0.0.0.0:4224), jsonrpc_import: false, index_batch_size: 100, bulk_index_threads: 8, tx_cache_size: 10000, prevout_enabled: true, cors: None, precache_scripts: Some("/srv/explorer/popular-scripts.txt") }

This has been running on my small server (dual core, 4 GB RAM) for more than a week and is still showing "Esplora is currently unavailable, please try again later." I've also spun up another instance with 8 cores and 32 GB RAM, which has been running for 24 hours and is still showing the same. Do you have any idea how long this indexing takes? Also, do you think it's possible to copy the data directory from the fast server to the slow server (IF that one ever starts to work), and would it work? Or does each instance need to do its own indexing?
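On the copying question: the electrs index is just files on disk, so copying the database directory while both instances are stopped should in principle transfer the work (I have not verified this against Esplora specifically; treat it as a sketch, and note the paths are examples from this thread):

```python
import shutil
from pathlib import Path

def copy_index(src: str, dst: str) -> None:
    """Copy an electrs database directory between hosts/mounts.

    Both electrs instances must be stopped first, or the RocksDB files
    may be copied in an inconsistent state.
    """
    src_path, dst_path = Path(src), Path(dst)
    if not src_path.is_dir():
        raise FileNotFoundError(f"source index not found: {src}")
    # dirs_exist_ok=True allows copying over a partially synced target.
    shutil.copytree(src_path, dst_path, dirs_exist_ok=True)

# Example (paths as in this thread; run with services stopped):
# copy_index("/data/electrs_bitcoin_db", "/mnt/slow/data/electrs_bitcoin_db")
```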

Any help would be much appreciated @greenaddress :)

@swayll

swayll commented Feb 10, 2020

> [quotes @RubenWaterman's comment above in full]

Same question! I have the same error in log.
2-Config { log: StdErrLog { verbosity: Error, quiet: false, show_level: true, timestamp: Millisecond, modules: [], writer: "stderr", color_choice: Never }, network_type: Bitcoin, db_path: "/data/electrs_bitcoin_db/mainnet/mainnet", daemon_dir: "/data/bitcoin", daemon_rpc_addr: V4(127.0.0.1:8332), cookie: None, electrum_rpc_addr: V4(0.0.0.0:50001), http_addr: V4(127.0.0.1:3000), monitoring_addr: V4(0.0.0.0:4224), jsonrpc_import: false, index_batch_size: 100, bulk_index_threads: 4, tx_cache_size: 10000, prevout_enabled: true, cors: None, precache_scripts: Some("/srv/explorer/popular-scripts.txt"), electrum_txs_limit: 100000 }

@farukterzioglu

I am having the same issue. I had a running instance for months; I updated the Docker image with docker pull and ran it again, but it is still not available.

The UI says "Esplora is currently unavailable, please try again later", the API returns 502 Bad Gateway, and the error logs are the same as in the comment above.

@shesek
Collaborator

shesek commented Mar 22, 2020

My current file has just two lines

You can get electrs to print more verbose output by starting the docker run.sh script with the third parameter set to verbose, i.e. docker run ... bash -c "/srv/explorer/run.sh bitcoin-mainnet explorer verbose". With this set, it should be a lot easier to figure out what's going on.


One possible guess for why your servers aren't starting (or rather, are taking a really long time to start) is that they're still busy computing the cached stats for the "popular addresses", which can take a lot of time, especially on weaker machines. We recently updated the list of popular addresses, which might explain why it stopped working after pulling a newer version. You should be able to confirm this is the cause once you set a higher verbosity level.

If that's indeed the case, then it should be totally fine to turn off the pre-caching for personal use. You can do this by removing --precache-scripts /srv/explorer/popular-scripts.txt from contrib/runits/electrs.runit and re-building the docker image. (yes, this really isn't ideal -- I'll look into adding a simpler way to disable precache without having to manually build the docker image)

@shesek
Collaborator

shesek commented Mar 22, 2020

Ah, there's actually an easier way to disable precaching that doesn't require building the image: mount an empty file onto /srv/explorer/popular-scripts.txt. Something like this:

$ touch /tmp/empty
$ docker run -v /tmp/empty:/srv/explorer/popular-scripts.txt ... bash -c "/srv/explorer/run.sh bitcoin-mainnet explorer verbose"

Definitely not ideal either; I will add a simpler option to turn this off with an environment variable.

@shesek
Collaborator

shesek commented Mar 22, 2020

619ff48 added support for setting -e NO_PRECACHE=1 to disable pre-caching. (Note that this is not yet live on Docker Hub.)

@demian85

So, what is the fix? I'm still running into HTTP 502 and not sure what to do. Increasing verbosity only results in too many unreadable log lines like 2-2021-03-17T16:50:04.638+00:00 - TRACE - skipping block 0000000000000000115a540d0bc3a90cfc7c6df0d8d97953e3891866726e3ba4
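One way to make that verbose output usable is to filter out the TRACE noise so warnings and errors stand out, e.g. with grep (the log format is taken from the line above; the second sample line below is hypothetical):

```shell
#!/bin/sh
# filter_trace: drop electrs TRACE lines, keep everything else.
filter_trace() {
    grep -v ' - TRACE - '
}

# Example usage with two sample lines in the thread's log format:
printf '%s\n' \
  '2-2021-03-17T16:50:04.638+00:00 - TRACE - skipping block 0000' \
  '2-2021-03-17T16:50:05.000+00:00 - WARN - example warning line' \
  | filter_trace
```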

@shesek
Collaborator

shesek commented Mar 17, 2021

@demian85 Can you share some more of your logs? (making sure to remove any sensitive information)
