This includes three fixes to header handling.
- Allow ElectrumSV to start up when there are no headers in the header store (likely only the Genesis
  block, with no blocks yet mined on regtest).
- Remove the ill-conceived workaround for the bug in Python pickling when deserialising headers. The reason
  to avoid this workaround, and others like it, is that we have no idea whether we are repairing the only
  problem this bug can cause. Better to classify it as not our problem and blame upstream Python.
- Attempt to address the long-standing problem of headers file corruption (see GitHub issue #1043). The
  Python mmap documentation clearly states that `flush` must be called, yet for some reason we never called
  it. We now call both `flush` and `close` on the underlying mmap object (see the sketch after this list).
  It remains to be seen whether this fixes the problem, but exiting no longer errors, so it does no harm.
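
A minimal, self-contained sketch of the flush-then-close pattern described above, using Python's standard
`mmap` module directly; the file name and contents here are illustrative only, not ElectrumSV's actual
header store:

    import mmap

    # Create a small example file so there is something to map.
    with open("headers_example.dat", "wb") as f:
        f.write(b"\x00" * 16)

    with open("headers_example.dat", "r+b") as f:
        mapped = mmap.mmap(f.fileno(), 0)
        mapped[0:4] = b"\x01\x02\x03\x04"  # a write made through the mapping
        # Flush dirty pages to disk; without this, writes can be lost on an abnormal exit.
        mapped.flush()
        # Close the mapping so no further writes can be made through it.
        mapped.close()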
rt121212121 committed Mar 6, 2023
1 parent 15afbbe commit d47b306
Showing 2 changed files with 24 additions and 11 deletions.
21 changes: 18 additions & 3 deletions electrumsv/app_state.py
@@ -299,10 +299,25 @@ def connect_out_of_band_header(self, header_bytes: bytes) -> tuple[Header | None
        return header, chain

    def on_stop(self) -> None:
        self._write_header_cache()

    def _write_header_cache(self) -> None:
        # The headers object may not be created for command-line invocations that do not require it.
        if self.headers is not None:
            logger.debug("flushing headers")
            # `mmap.flush` exceptions (the Python developers do not document these):
            # Raises `ValueError` for invalid offset and size arguments relating to type and range,
            # and if `flush` is not supported on the platform (it is supported on `UNIX`, macOS
            # included, and `MS_WINDOWS`).
            # Raises `OSError` for specific OS-related Unix and Windows errors.
            # We do not catch `ValueError` as we do not pass arguments and we do not support
            # operating systems without `mmap.flush` support. We do not catch `OSError` as it
            # is representative of problems so severe that the user's wallet (at the least
            # the headers file) is likely corrupted, and we should never hide this.
            self.headers.flush()
            # We close the header storage to prevent further writes. These should never happen, but
            # we want to avoid the past problems, which appear to be unflushed writes being lost
            # and the header file being corrupted.
            # rt12: 20230306 bitcoinx does not currently expose this.
            self.headers._storage.close()

            logger.debug("writing header cache")
            write_cached_headers(self.headers)

14 changes: 6 additions & 8 deletions electrumsv/cached_headers.py
@@ -5,6 +5,7 @@

from bitcoinx import CheckPoint, Headers, Network

from .constants import EMPTY_HASH
from .logs import logs


@@ -63,14 +64,11 @@ def read_cached_headers(coin: Network, file_path: str, checkpoint: CheckPoint) -
            # attributes are missing. This data works for the same deterministic
            # Python version and dependency installation, just not on my computer.
            # This is very likely a bug in pickle that is trigged by my arch.
            logger.error("Error deserialising chain tip header (chain first height: "
                "%d); forceably replacing it as a workaround", chain.first_height)
            header_index = chain._header_indices[-1]
            chain.tip = headers.network.deserialized_header(
                headers._storage[header_index], -1)
            prev_header, prev_chain = headers.lookup(chain.tip.prev_hash)
            chain.tip.height = prev_header.height + 1
        else:
            logger.error("Error deserialising tip header (chain first height: %d)",
                chain.first_height)
            raise

    if chain.tip.prev_hash != EMPTY_HASH:
        # Validate that the tip deserialised well enough to have the right height.
        prev_header, prev_chain = headers.lookup(chain.tip.prev_hash)
        assert chain.tip.height == prev_header.height + 1
