The main goal of this commit is to migrate the data of v1 transactions to the v2 index. v1 transactions constitute more than 400 GiB of the current weave. In SPoRA it is crucial to access any historical chunk by offset as fast as possible, so the historical data has to be moved to the v2 index, and fresh v1 data has to be stored there as well. The header syncing process is extended to manage the migration: it traverses the historical headers, builds and records the data roots (so that we can reuse the existing interface for submitting data chunks without knowing their absolute offsets, ar_data_sync:add_chunk_async/1), and moves the data to the v2 index, including some early v2 data stored in the tx_data files.

Block and transaction headers and the data of abandoned forks are now cached in memory. The change was motivated by the need to cache chunks of v1 data from orphaned blocks while retaining access to them (previously, this data was part of the tx header cached on disk). Naturally, it also reduces space amplification: the orphans no longer take up disk space (and there was no cleanup process in place). It comes at the cost of extra RAM usage for the cache, which should be acceptable considering SPoRA is going to be very RAM-heavy anyway.

Sync block headers from latest to oldest. Syncing in random order is an artifact of the times when the protocol used recall blocks instead of recall data chunks for proving access.

Record synced block headers. This allows us to avoid traversing the block index on every startup. Moreover, the recorded block headers will make it easy to clean up old headers to free up space for new ones when the limited disk space feature is introduced.

The initialization of the data syncing process and the process managing the wallet trees is streamlined.
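The migration loop described above can be sketched roughly as follows. This is a non-authoritative illustration, not the actual implementation: next_oldest_header/1, build_data_root/1, record_data_root/2, and v1_chunks/1 are hypothetical helper names invented for the sketch; only ar_data_sync:add_chunk_async/1 is named in the commit.

```erlang
%% Sketch of the extended header syncing process: walk the historical
%% headers from latest to oldest, record each transaction's data root,
%% and resubmit the v1 (and early v2 tx_data) payload chunk by chunk.
migrate(HeaderIterator) ->
    case next_oldest_header(HeaderIterator) of        % hypothetical helper
        done ->
            ok;
        {TX, NextIterator} ->
            %% Recording the data root lets chunks be accepted via the
            %% existing interface, without knowing their absolute offsets.
            DataRoot = build_data_root(TX),           % hypothetical helper
            ok = record_data_root(TX, DataRoot),      % hypothetical helper
            lists:foreach(
                fun(Chunk) ->
                    ar_data_sync:add_chunk_async(Chunk)
                end,
                v1_chunks(TX)),                       % hypothetical helper
            migrate(NextIterator)
    end.
```

The tail-recursive shape matters here: the traversal covers hundreds of GiB of historical data, so the loop must run in constant stack space and hand each chunk off asynchronously rather than blocking on the v2 index.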
The change is motivated by the desire to make the header syncing process initialize the same way the data syncing process does, as the two are similar in nature. The data roots of the transactions fetched during joining are recorded in the v2 index so that the fetched v1 data can be uploaded to the v2 index via ar_data_sync:add_chunk_async/1. Additionally, the 2.0 hashes for all 1.0 blocks are checked out so that we can perform safe and quick verification of the inclusion of pre-2.0 blocks.