Continuous Staking WIP - Restake permissionless stakers by default #1389
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Closed
joshua-kim pushed a commit to joshua-kim/avalanchego that referenced this pull request on Apr 28, 2023 (Co-authored-by: abenegia <alberto.benegiamo@gmail.com>).
This PR has become stale because it has been open for 30 days with no activity. Adding the |
|
This PR has become stale because it has been open for 30 days with no activity. Adding the |
Contributor (Author)
Superseded by a different design.
maru-ava pushed a commit that referenced this pull request on Dec 3, 2025:
Fix a critical bug where nodes were written to storage before the freelist was updated, leaving the database in an inconsistent state if an I/O error occurred during node persistence.

Problem: When persisting nodes, if the freelist flush happened after node writes, a crash or I/O error between node writes and freelist flush would leave allocated storage space untracked in the freelist. On recovery, these areas could be reallocated, leading to data corruption or loss.

Solution:
- Flush the freelist immediately after allocating node addresses but before writing any node data
- Add NodeAllocator::flush_freelist() to enable explicit freelist updates
- Add flush_freelist_from() to persist a specific header's freelist state

Additional improvements:
- Refactor to use the bumpalo bump allocator with bounded memory for node serialization (prevents unbounded memory growth)
- Extract common serialization and batching logic into shared helper functions (serialize_node_to_bump, process_unpersisted_nodes)
- Eliminate ~90 lines of code duplication between the io-uring and non-io-uring paths while ensuring both use identical logic
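The ordering described in that commit message can be illustrated with a small sketch. This is not the repository's actual storage code; all types and function names below (allocator, node, writeNode, persistNodes) are hypothetical Go stand-ins showing why the freelist is flushed before any node data is written.

```go
// Simplified sketch of the corrected persistence ordering; names and types
// are illustrative only, not the repository's actual storage API.
package storagesketch

type node struct{ data []byte }

// allocator stands in for the real node allocator / freelist owner.
type allocator struct {
	next     uint64   // next free linear address
	reserved []uint64 // addresses handed out since the last freelist flush
}

func (a *allocator) allocate(size int) uint64 {
	addr := a.next
	a.next += uint64(size)
	a.reserved = append(a.reserved, addr)
	return addr
}

// flushFreelist persists the allocation metadata (stubbed out here).
func (a *allocator) flushFreelist() error { a.reserved = nil; return nil }

// writeNode persists node bytes at addr (stubbed out here).
func writeNode(addr uint64, n node) error { _, _ = addr, n; return nil }

// persistNodes reserves addresses, flushes the freelist so the reservations
// are durable, and only then writes node data. If a crash happens after the
// freelist flush, the worst case is space marked allocated but never used;
// under the old ordering, freshly written nodes could sit on space that a
// recovered freelist would hand out again, corrupting them.
func persistNodes(a *allocator, nodes []node) error {
	addrs := make([]uint64, len(nodes))
	for i, n := range nodes {
		addrs[i] = a.allocate(len(n.data))
	}
	if err := a.flushFreelist(); err != nil {
		return err
	}
	for i, n := range nodes {
		if err := writeNode(addrs[i], n); err != nil {
			return err
		}
	}
	return nil
}
```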
maru-ava pushed a commit that referenced this pull request on Dec 3, 2025:
#1488 found a panic during serialization of a node, indicating that a child node was either not hashed or not allocated, so the parent node could not be serialized. After a walkthrough of the code, I noticed that after #1389 we began serializing nodes in batches to avoid corrupting the freelist after a failed write, and this introduced a case where the panic could occur. Previously, we would serialize a node, write it to storage, and then mark it as allocated in one step. Now we serialize the node, allocate it, and add it to a batch before moving on to the next node, but, as before, we only mark the node as allocated after writing it to disk. This means that if a batch contained both a child and its parent, the parent node failed to serialize because it could not reference the child's address.

This change refactors the allocation logic to mark nodes as allocated during batching, ensuring that if a node's parent is also in the batch, the parent can still serialize. The linear address itself is not read by viewers of the `Allocated` state because the node is not considered committed until the entire write has completed (i.e., after the root and header are written), so this change does not introduce any early reads of uncommitted data.

In the course of writing tests, I discovered that the arena allocator returns the number of bytes allocated by the arena, not the number of bytes occupied by items within the arena. This meant the batching logic would have always generated a batch of one element. To fix this, I now track the allocated length of each arena. This successfully triggered the bug described above; however, it also means that bug might not be the source of the original panic.
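The batching change described above can be sketched as follows. The types and names here (nodeID, buildBatch, addrOf) are hypothetical and simplified, not the repository's actual code; the point is that each node's address is recorded as allocated the moment it joins the batch, so a parent appearing later in the same batch can serialize the child's address.

```go
// Hypothetical sketch of marking nodes allocated during batching.
package batchsketch

import "encoding/binary"

type nodeID uint64

type node struct {
	id       nodeID
	children []nodeID
}

// serialize encodes a node; every child must already have an address, or
// serialization fails (the situation that produced the panic above).
func serialize(n node, addrOf map[nodeID]uint64) ([]byte, bool) {
	buf := binary.LittleEndian.AppendUint64(nil, uint64(n.id))
	for _, c := range n.children {
		addr, ok := addrOf[c]
		if !ok {
			return nil, false
		}
		buf = binary.LittleEndian.AppendUint64(buf, addr)
	}
	return buf, true
}

// buildBatch assumes nodes are ordered so children precede their parents.
// Each node's address is recorded in addrOf as soon as it joins the batch,
// so a parent later in the same batch can reference it. Nothing treats these
// addresses as committed data until the root and header are written.
func buildBatch(nodes []node, nextAddr uint64) (map[uint64][]byte, bool) {
	addrOf := make(map[nodeID]uint64, len(nodes))
	batch := make(map[uint64][]byte, len(nodes))
	for _, n := range nodes {
		data, ok := serialize(n, addrOf)
		if !ok {
			return nil, false
		}
		addr := nextAddr
		nextAddr += uint64(len(data))
		addrOf[n.id] = addr // mark allocated during batching
		batch[addr] = data
	}
	return batch, true
}
```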
JonathanOppenheimer pushed a commit that referenced this pull request on Dec 3, 2025:
* fix start timestamp
* add client
* rename sov to l1validator
* bump avago to v1.11.13-rc.1
* update compatibility
Why this should be merged
How this works
Post Continuous Staking fork activation:
6. Added stopStakerTx to explicitly stop staking. Delegator and subnet-validator termination is handled when their validator is requested to stop (see the sketch below).
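As a rough illustration only, such a transaction could look something like the following; the type name, field, and tags are hypothetical, inferred from the description above, and do not reflect the PR's actual implementation.

```go
// Hypothetical sketch of a transaction that explicitly stops a staker that
// would otherwise be restaked by default; not the actual PR code.
package sketch

import "github.com/ava-labs/avalanchego/ids"

// StopStakerTx stops restaking for the referenced staker. If that staker is
// a validator, its delegators and subnet validators would be terminated
// along with it, per the description above.
type StopStakerTx struct {
	// ID of the staking transaction to stop (hypothetical field).
	StakerTxID ids.ID `serialize:"true" json:"stakerTxID"`
}
```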
How this was tested
Updated UTs to test the latest fork + added UTs
TODO: e2e tests with Ginkgo