
Jp/merge master to cm5 #470

Merged
merged 56 commits into feature/coordinated-mining_v5 on Sep 27, 2023

Conversation

JamesPiechota
Collaborator

No description provided.

hlolli and others added 30 commits September 27, 2023 14:46
…in the IO threads. This ensures that we don't continue to read
stale recall ranges after a session reset.
…456)

* Ensure the genesis txs are copied into the data_dir on node launch.

* Delay the copy until we're sure the data/txs directory exists

* Log a warning if data/genesis_txs not found
* feat(nix): decouple arweave config generator

* feat(nix): decouple arweave module options

* fix: jiffy compilation bug

* fix(nix): broken config.json path
Additionally, relax the validation rules a bit for the sake of
simplicity (keep requiring every chunk to start at its own unique 256
KiB bucket except for the very last chunk, which may start at the
previous bucket but must step at least 1 byte into its own bucket).

Co-authored-by: JamesPiechota <piechota@gmail.com>
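A minimal sketch of the relaxed rule above, assuming chunks are sorted {StartOffset, EndOffset} pairs with exclusive end offsets; module and function names are hypothetical, not the actual Arweave validation code:

```erlang
%% Hypothetical sketch of the relaxed chunk-placement rule.
-module(chunk_buckets).
-export([valid_placement/1]).

-define(BUCKET_SIZE, 262144). %% 256 KiB

%% Chunks are sorted {StartOffset, EndOffset} pairs (End exclusive).
valid_placement(Chunks) ->
    valid_placement(Chunks, -1).

valid_placement([], _PrevBucket) ->
    true;
%% The very last chunk may start in the previous chunk's bucket, but
%% then it must extend at least 1 byte into its own (next) bucket.
valid_placement([{Start, End}], PrevBucket) ->
    case Start div ?BUCKET_SIZE of
        Bucket when Bucket > PrevBucket -> true;
        PrevBucket -> End > (PrevBucket + 1) * ?BUCKET_SIZE;
        _ -> false
    end;
%% Every other chunk must start in its own, previously unused bucket.
valid_placement([{Start, _End} | Rest], PrevBucket) ->
    Bucket = Start div ?BUCKET_SIZE,
    Bucket > PrevBucket andalso valid_placement(Rest, Bucket).
```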
- Account for the actual observed VDF and block times;
- adjust the estimated replica count based on the observed two-chunk
  solutions count;
- retarget the VDF difficulty so that a step always takes about 1 second
  (see the sketch below);
- postpone the pricing transition start by 4 months;
- do the price transition at the price_per_gib_minute level instead of
  the TX fee level to avoid pending transactions being dropped as fees rise.

Co-authored-by: Lev Berman <ldmberman@proton.me>
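A hedged sketch of the "about 1 second" VDF retarget mentioned above: proportionally rescale the per-step iteration count by the observed step time. The names and the exact formula are illustrative, not the node's actual difficulty computation.

```erlang
%% Illustrative proportional retarget; not the actual Arweave formula.
-module(vdf_retarget).
-export([next_difficulty/2]).

-define(TARGET_STEP_TIME_MS, 1000). %% one VDF step should take ~1 s

%% Difficulty is the per-step iteration count; ObservedStepTimeMs is the
%% average measured step time over the retarget window.
next_difficulty(Difficulty, ObservedStepTimeMs) when ObservedStepTimeMs > 0 ->
    max(1, Difficulty * ?TARGET_STEP_TIME_MS div ObservedStepTimeMs).
```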
Also, set the 2.7 fork at 1237300, price transition at 1237760.
- Store block index updates every block instead of rewriting the entire ever-growing list every 50 blocks;
- skip the latest blocks when the block or transaction headers or the account tree data cannot be found, and attempt to start from an earlier block (add the `start_from_latest_state` CLI flag);
- allow starting from a given block (add the `start_from_block <hash>` CLI parameter).
- scripts to start/stop the various testnet nodes
- allow a few more defines to be redefined
- set up rebar.config so that the testnet runs correctly
- comment rebar.config
* Update the VDF retargeting logic to follow this flow:
1. When the entropy reset line is crossed: vdf_difficulty = next_vdf_difficulty
2. When publishing a new block, if the block is at a VDF retarget height, compute a new next_vdf_difficulty value (see the sketch below)

Remove #block.vdf_difficulty and only use #block.nonce_limiter_info.vdf_difficulty
and #block.nonce_limiter_info.next_vdf_difficulty
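A sketch of that two-phase flow under assumed names (the record fields mirror the two nonce_limiter_info fields above; the retarget interval and helpers are hypothetical):

```erlang
%% Illustrative two-phase difficulty flow; not the actual node code.
-module(vdf_flow).
-export([on_entropy_reset/1, on_block_published/3]).

-define(VDF_RETARGET_INTERVAL, 720). %% hypothetical retarget height step

-record(vdf_state, {vdf_difficulty, next_vdf_difficulty}).

%% 1. Crossing the entropy reset line: the scheduled difficulty
%% becomes the active one.
on_entropy_reset(#vdf_state{next_vdf_difficulty = Next} = State) ->
    State#vdf_state{vdf_difficulty = Next}.

%% 2. Publishing a block at a retarget height: schedule a new
%% next_vdf_difficulty; it only becomes active at the next reset line.
on_block_published(State, Height, ComputeNextDifficulty)
        when Height rem ?VDF_RETARGET_INTERVAL == 0 ->
    State#vdf_state{next_vdf_difficulty = ComputeNextDifficulty()};
on_block_published(State, _Height, _ComputeNextDifficulty) ->
    State.
```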
…)

* Ensure that the VDF client only processes each VDF step once, even
if the server sends the same step multiple times.

Specifically: handle the case where the server has been pushing steps
for a "stale" SessionKey (i.e. the entropy reset line has passed, but
the old SessionKey continues to be used until the VDF server processes
a new block).

This overlap in session caching is intentional. Before a new block is
found, steps keep being written to the previous session (identified by
{NextSeed, IntervalNumber}), and once a block is found, the steps above
the reset line, if any, are added to the new session (with the new NextSeed).

The intention of the overlap is to quickly access the steps when validating
B1 -> reset line -> B2 given the current fork of B1 -> B2' -> reset line ->
B3, i.e. we can query all steps by B1.next_seed even though on our fork the
reset line determined a different next_seed for the latest session.
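A minimal sketch of the once-only guarantee, assuming steps are identified by {SessionKey, StepNumber}; this is illustrative bookkeeping, not the actual VDF client implementation:

```erlang
%% Illustrative dedup: remember which step numbers were already applied
%% per session, so re-pushed steps for a stale session are no-ops.
-module(vdf_step_dedup).
-export([new/0, apply_step/3]).

new() ->
    #{}. %% SessionKey => set of already-processed step numbers

apply_step(Seen, SessionKey, StepNumber) ->
    Steps = maps:get(SessionKey, Seen, sets:new()),
    case sets:is_element(StepNumber, Steps) of
        true ->
            {duplicate, Seen};
        false ->
            {ok, Seen#{SessionKey => sets:add_element(StepNumber, Steps)}}
    end.
```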
…ulty when processing the nonce_limiter_info from a validated block (#460)

* Refactor the code for updating the cached VDF Session, and ensure
we always update the cached vdf_difficulty and next_vdf_difficulty when
processing the nonce_limiter_info from a validated block

* Update testnet scripts to allow some nodes to be solo VDF

* Fix tests. Whenever starting a node with a modified config, that
config is written to disk. The default/unmodified config needs
to be restored at the end of the test.

---------

Co-authored-by: James Piechota <piechota@jamess-mbp.lan>
…462)

* All chunks in interior subtrees must be aligned to chunk boundaries.
Previously, the check enforcing this only considered the local, sub-tree
offset, which allowed 1 or 2 unaligned chunks per subtree
(rather than 1 or 2 unaligned chunks per transaction). See the sketch
after this list.

* Add scripts to allow restarting individual testnet nodes

---------

Co-authored-by: James Piechota <piechota@jamess-mbp.lan>
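A sketch of the corrected check under assumed names: alignment is tested against each chunk's absolute end offset, so only the final chunk of the whole transaction may be unaligned, regardless of which subtree a chunk sits in.

```erlang
%% Illustrative global-offset alignment check; not the real ar_merkle code.
-module(chunk_align).
-export([valid_alignment/2]).

-define(CHUNK_SIZE, 262144). %% 256 KiB

%% ChunkEnds are absolute end offsets within the transaction data;
%% DataSize is the total transaction data size.
valid_alignment(ChunkEnds, DataSize) ->
    lists:all(
        fun(End) ->
            %% Only the chunk that ends the transaction may be unaligned.
            End == DataSize orelse End rem ?CHUNK_SIZE == 0
        end,
        ChunkEnds).
```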
- use the new price after the transition period ends
- shift the transition period by 1 so that
  ?PRICE_2_6_8_TRANSITION_START is not interpolated
- remove a div in the new price calculation which could cause the
  price to go to 0 if there was an outage preventing miners from
  computing VDF for some period of time

Add comments explaining the replica count estimate

Add several unit tests for the price transition logic
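One way to read the interpolation and the removed div, sketched with hypothetical names: blend the old and new price linearly across the window and divide only once at the end, so no intermediate term can truncate to zero.

```erlang
%% Illustrative linear price transition; not the actual fee logic.
-module(price_transition).
-export([price_per_gib_minute/4]).

%% OldPrice before Start, NewPrice from End onward, linear in between.
price_per_gib_minute(Height, Start, _End, {OldPrice, _NewPrice})
        when Height =< Start ->
    OldPrice; %% Start itself is not interpolated
price_per_gib_minute(Height, _Start, End, {_OldPrice, NewPrice})
        when Height >= End ->
    NewPrice; %% use the new price once the transition period ends
price_per_gib_minute(Height, Start, End, {OldPrice, NewPrice}) ->
    Interval = End - Start,
    Passed = Height - Start,
    %% A single final division: no intermediate div that could round
    %% the price down to 0 after a long VDF outage.
    (OldPrice * (Interval - Passed) + NewPrice * Passed) div Interval.
```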
…ers (#464)

* Initialize ar_poller.in_sync_trusted_peers with the set of
trusted peers so that we only remove a peer when it is truly out
of sync. Previously, the set was initialized empty, which meant
a peer was only added once it went out of sync (due to the 5-minute
timeout). This created a situation where a single out-of-sync peer
could cause the "Out of sync" console message to be printed.

By initializing the set with all trusted peers, the message is
only printed when all peers go out of sync (see the sketch after this list).

* Change the testnet network to arweave.2.7.testnet so that
old nodes from the 2.6 testnet days don't accidentally interfere

* Allow testnet nodes to clean up orphaned data
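A sketch of the initialization fix (hypothetical module; the real state lives in ar_poller): start with every trusted peer marked in-sync and warn only when the in-sync set drains completely.

```erlang
%% Illustrative in-sync tracking; not the actual ar_poller code.
-module(peer_sync).
-export([new/1, mark_out_of_sync/2, mark_in_sync/2, should_warn/1]).

%% Initialize with ALL trusted peers, not an empty set, so one lagging
%% peer cannot make the whole network look out of sync.
new(TrustedPeers) ->
    sets:from_list(TrustedPeers).

mark_out_of_sync(Peer, InSync) ->
    sets:del_element(Peer, InSync).

mark_in_sync(Peer, InSync) ->
    sets:add_element(Peer, InSync).

%% Print "Out of sync" only when every trusted peer has dropped out.
should_warn(InSync) ->
    sets:size(InSync) == 0.
```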
… line (#465)

When validating steps between two blocks that cross an entropy reset line, use the VDF difficulty from the previous block to validate the steps up to the reset line, and the difficulty from the current block to validate the steps after the reset line.
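A sketch of the split, with validate_steps/2 stubbed out as an assumed per-difficulty validator; all names here are illustrative:

```erlang
%% Illustrative split-at-the-reset-line validation; names are assumed.
-module(reset_split).
-export([validate_across_reset/4]).

%% Steps are {StepNumber, Output} pairs; ResetStep is the step number
%% of the entropy reset line between the two blocks.
validate_across_reset(Steps, ResetStep, PrevDifficulty, CurDifficulty) ->
    {BeforeReset, AfterReset} =
        lists:partition(
            fun({StepNumber, _Output}) -> StepNumber =< ResetStep end,
            Steps),
    validate_steps(BeforeReset, PrevDifficulty)
        andalso validate_steps(AfterReset, CurDifficulty).

%% Stub standing in for the real per-difficulty VDF step validator.
validate_steps(_Steps, _Difficulty) ->
    true.
```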
…a valid index to be deleted when OrphanCount was 1
Update rebar.config values for testnet
JamesPiechota and others added 25 commits September 27, 2023 14:46
Targeting October 4, 2023 at 14:00 UTC
Co-authored-by: Lev Berman <ldmberman@proton.me>
Co-authored-by: Amin Arria <arria.amin@gmail.com>
Co-authored-by: Esteban Dimitroff Hódi <esteban.dimitroff@entropy1729.com>
Refactor ar_mining_server.erl and ar_coordination.erl:
1. Introduce 2 records to encapsulate the long argument lists (ar_mining_candidate and ar_mining_solution)
2. Introduce common functions to remove duplicate code

This is a first pass and tests fail. Next commits will fix tests and add more.
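For flavor, a hypothetical shape of the two records: only the record names come from the commit; the field lists here are invented for illustration and the real definitions live in the repo's header files.

```erlang
%% Hypothetical field lists; passing one record replaces a long
%% argument list in ar_mining_server.erl and ar_coordination.erl.
-record(ar_mining_candidate, {
    session_key,       %% VDF session the candidate belongs to
    step_number,       %% VDF step that produced the candidate
    partition_number,  %% partition being mined
    nonce              %% candidate nonce
}).

-record(ar_mining_solution, {
    candidate,  %% the #ar_mining_candidate{} that solved
    hash,       %% winning solution hash
    proofs      %% proofs of access for the recall chunks
}).
```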
Some small bug fixes, and also better session management when computing H2 for a peer (i.e.
assume we have a valid mining session if the request came from a peer)
…ar_test_node.erl and bin/test.

Instead of using test_on_fork(), tests can use ar_test_node:test_with_mocked_function()

Also get rid of a bunch of debugging output
Fix off-by-one error when selecting partitions to mine
…is calculated automatically. Previously, that logic was tied to free
memory, which can vary significantly and can cause the limit to be
set too low. With this PR it is tied to total memory, which is a
stable value.
…from mining. Since that partition may be incomplete, it provides
a mining advantage (e.g. it can fit in RAM), and we don't want
to over-incentivize syncing the last partition.
…are calibrated for that number). Tests that
need a different size (e.g. ar_mining_io_tests, ar_mining_server_tests)
can set the storage modules explicitly.
…it from the ar_mining_server critical path
JamesPiechota merged commit 9fc2282 into feature/coordinated-mining_v5 on Sep 27, 2023
1 of 2 checks passed
JamesPiechota added a commit that referenced this pull request Sep 27, 2023
merge master branch into feature/coordinated-mining-v5
JamesPiechota added a commit that referenced this pull request Oct 13, 2023
merge master branch into feature/coordinated-mining-v5
JamesPiechota added a commit that referenced this pull request Oct 16, 2023
merge master branch into feature/coordinated-mining-v5
JamesPiechota added a commit that referenced this pull request Oct 21, 2023
merge master branch into feature/coordinated-mining-v5