
feat: user batch support #1846

Open

Mirko-von-Leipzig wants to merge 9 commits into mirko/mempool-tx-reverting from mirko/mempool-user-batches

Conversation

Mirko-von-Leipzig (Collaborator) commented Mar 26, 2026:

This PR is the third and final part of the mempool refactoring PR stack. Part 1 (#1820) performs the broad mempool refactoring to simplify this PR. Builds on part 2 (#1832).

Batch submissions must include their transaction inputs, since we currently require these for the validator to verify the batch before inclusion in a block. This PR exploits this by treating the batch as a set of normal transactions at the mempool level. This simplifies the mempool implementation, which is currently built around a DAG of transactions, so inserting a batch directly would be more complex. This will need to change once we stop requiring transaction inputs as part of the validator, but it won't be too bad.

The way this is implemented is that the transaction DAG tracks user batches and ensures that, when a batch is selected, transactions from user batches are not mixed with conventional transactions. That is, select_batch outputs either a user batch or a conventional batch.

Effectively, the transaction DAG internally ensures that the user batch's transactions remain coherent even though the batch has been deconstructed into individual transactions. The benefit is that this doesn't require any major structural changes to the mempool; the rest of the mempool then treats the user batch as normal.
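The selection rule described above could be sketched roughly like this. All types and names here are illustrative stand-ins, not the actual mempool API: each candidate transaction remembers which user batch, if any, it was deconstructed from, and selection either returns a whole user batch or fills a conventional batch up to the budget.

```rust
// Illustrative sketch only: `BatchId`, `TxId`, `Candidate`, and
// `SelectedBatch` are hypothetical stand-ins for the real mempool types.

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct BatchId(u64);

#[derive(Clone, Copy, Debug)]
struct TxId(u64);

struct Candidate {
    tx: TxId,
    /// `Some` if this transaction was deconstructed out of a user batch.
    user_batch: Option<BatchId>,
}

#[derive(Debug)]
enum SelectedBatch {
    /// All transactions of one user batch, kept coherent.
    User(BatchId, Vec<TxId>),
    /// Conventional transactions picked up to some budget.
    Conventional(Vec<TxId>),
}

fn select_batch(candidates: &[Candidate], budget: usize) -> Option<SelectedBatch> {
    // Prefer a user batch: return *all* of its transactions together, so
    // user-batch transactions are never mixed with conventional ones.
    if let Some(id) = candidates.iter().find_map(|c| c.user_batch) {
        let txs = candidates
            .iter()
            .filter(|c| c.user_batch == Some(id))
            .map(|c| c.tx)
            .collect();
        return Some(SelectedBatch::User(id, txs));
    }
    // Otherwise fill a conventional batch up to the budget.
    let txs: Vec<TxId> = candidates.iter().take(budget).map(|c| c.tx).collect();
    (!txs.is_empty()).then_some(SelectedBatch::Conventional(txs))
}
```

The point of the sketch is only the invariant: a single call yields either one coherent user batch or a purely conventional batch, never a mix.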

Closes #1112

Mirko-von-Leipzig force-pushed the mirko/mempool-user-batches branch from bf86aec to 51e74e4 on March 26, 2026 at 16:29
Mirko-von-Leipzig force-pushed the mirko/mempool-user-batches branch from 51e74e4 to 6dd5f53 on March 26, 2026 at 16:31
// Encoded using [winter_utils::Serializable] implementation for
// [miden_protocol::transaction::proven_tx::ProvenTransaction].
bytes encoded = 1;
message TransactionBatch {
Mirko-von-Leipzig (Collaborator, Author) commented:

I think the suggestion here was to re-add the proven_batch property and make the others optional so we can drop them at some point.

Contributor replied:

This is probably fine for now, but in the next release, we'll need to change this to something like ProvenTransactionBatch which would contain the ProvenBatch struct + optional data that would allow us to re-execute all transactions in the batch and validate that the proven batch is correct (similar to how we do it for proven transactions).

Mirko-von-Leipzig marked this pull request as ready for review on March 26, 2026 at 16:34
Mirko-von-Leipzig (Collaborator, Author) commented:

@PhilippGackstatter, could you cast an eye over the process here to ensure I'm checking the correct things?

The state itself is checked in the mempool, so here we really just want to ensure that the batch and its transactions are valid and that the reference block is correct, if I understand correctly.

Comment on lines +440 to +445
let reference_commitment: Word = reference_header
.chain_commitment
.expect("store should always fill block header")
.try_into()
.expect("store Word should be okay");
if reference_commitment != proof.reference_block_commitment() {
Mirko-von-Leipzig (Collaborator, Author) commented:

Is this correct?

Collaborator replied:

I don't think it is: here chain_commitment is the root of the blockchain MMR, whereas reference_block_commitment sounds more like the block hash.
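For illustration, the distinction being drawn could look like this in a stripped-down sketch. Header, Word, and reference_block_matches are hypothetical stand-ins, not the real types: a header carries both the chain (MMR) commitment and its own block hash, and the reference-block check would compare the latter.

```rust
// Hypothetical stand-ins for the real types; only the comparison matters.

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Word([u64; 4]);

struct Header {
    /// Root of the blockchain MMR (what the original code compared).
    chain_commitment: Word,
    /// Hash of this header itself (what the reply suggests comparing).
    block_hash: Word,
}

fn reference_block_matches(header: &Header, reference_block_commitment: Word) -> bool {
    // Compare against the block hash, not the MMR root.
    header.block_hash == reference_block_commitment
}
```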

&mut self,
txs: &[Arc<AuthenticatedTransaction>],
) -> Result<BlockNumber, MempoolSubmissionError> {
assert!(!txs.is_empty(), "Cannot have a batch with no transactions");
Collaborator commented:

Just checking: do we want to crash here instead of returning an error?

Mirko-von-Leipzig (Collaborator, Author) replied:

I assumed that one cannot build a ProvenBatch without any transactions, so an empty slice here would indicate an internal bug somewhere. But maybe that's a poor assumption.
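If the assumption does turn out to be poor, the panic could be replaced with an error return instead. A minimal sketch, where the EmptyBatch variant is purely illustrative and not an existing error variant:

```rust
// Illustrative only: `EmptyBatch` is a hypothetical variant, not part of
// the actual `MempoolSubmissionError` in the codebase.

#[derive(Debug, PartialEq)]
enum MempoolSubmissionError {
    EmptyBatch,
}

/// Reject empty batches gracefully rather than asserting.
fn check_not_empty<T>(txs: &[T]) -> Result<(), MempoolSubmissionError> {
    if txs.is_empty() {
        return Err(MempoolSubmissionError::EmptyBatch);
    }
    Ok(())
}
```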

}

pub fn select_batch(&mut self, budget: BatchBudget) -> Option<SelectedBatch> {
self.select_user_batch().or_else(|| self.select_conventional_batch(budget))
Collaborator commented:

Might want some doc comments to make it clear that budget is intended to be relevant only for conventional batches.

Collaborator commented:

Also are we OK with user batches always taking priority over conventional here?

Collaborator commented:

Also wondering if we need to prevent user batches of size 1 (or some other limit). Unsure if that is relevant to this PR; just a general thought.

Mirko-von-Leipzig (Collaborator, Author) replied:

> Also are we OK with user batches always taking priority over conventional here?

I'm unsure, but at the moment it doesn't matter much. If it's a concern we can make it random; I was thinking maybe that's best.

> Also wondering if we need to prevent user batches of size 1 (or some other limit). Unsure if that is relevant to this PR; just a general thought.

Good question. I'm unsure 😬 I wonder if that makes some user loop more difficult, i.e. they always submit user batches but sometimes don't have many transactions to bundle.

Probably we would want some limit even in the future? cc @bobbinth
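A minimal sketch of the "make it random" idea mentioned above: flip a coin to decide which kind of batch to try first, falling back to the other. The XorShift PRNG and the generic queue heads are stand-ins for whatever RNG and selection functions the mempool actually uses:

```rust
// Tiny xorshift64 PRNG as a dependency-free stand-in; the real code would
// presumably use an existing RNG. The generic `Option<T>` arguments stand
// in for "head of the user-batch queue" / "head of the conventional queue".

struct XorShift(u64);

impl XorShift {
    fn coin_flip(&mut self) -> bool {
        // Standard xorshift64 step; seed must be nonzero.
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        self.0 & 1 == 0
    }
}

/// Randomize which kind of batch gets first pick; whichever kind loses the
/// coin flip is still selected if the winner has nothing available.
fn select_batch<T>(rng: &mut XorShift, user: Option<T>, conventional: Option<T>) -> Option<T> {
    if rng.coin_flip() {
        user.or(conventional)
    } else {
        conventional.or(user)
    }
}
```

Either ordering still drains both queues eventually; the coin flip only removes the deterministic starvation of conventional batches.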

Collaborator replied:

> Also are we OK with user batches always taking priority over conventional here?

Would having fees solve this?

igamigo (Collaborator) left a comment:

LGTM! I left some mostly minor, non-blocking comments. Not sure if you tested already, but just in case, I'm going to run the client integration tests and report back.

/// graph.
pub fn prune(&mut self, batch: BatchId) {
self.inner.prune(batch);
pub fn prune(&mut self, batch: BatchId) -> SelectedBatch {
Collaborator commented:

nit: it would be nice to update the doc comment to say that this now returns the pruned batch.

/// # Panics
///
/// Panics if this node has any ancestor nodes, or if this node was not selected.
pub fn prune(&mut self, id: N::Id) {
Collaborator commented:

Same here

.expect("bi-directional mapping should be coherent");

for tx in txs {
let Some(tx) = self.inner.selection_candidates().get(&tx).copied() else {
Collaborator commented:

Maybe this calls for a get_selection_candidate()? That way you avoid re-allocating the map on every iteration through selection_candidates().
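A sketch of what such an accessor might look like, with Graph and its field types being illustrative stand-ins: a borrow-only single-key lookup avoids building the full candidate map on every loop iteration.

```rust
use std::collections::HashMap;

// Hypothetical simplified graph; the key/value types are placeholders.

struct Graph {
    candidates: HashMap<u64, u32>, // tx id -> candidate node, illustrative
}

impl Graph {
    /// Existing-style accessor: materializes the whole map per call.
    fn selection_candidates(&self) -> HashMap<u64, u32> {
        self.candidates.clone()
    }

    /// Proposed accessor: a single-key lookup with no allocation.
    fn get_selection_candidate(&self, tx: u64) -> Option<u32> {
        self.candidates.get(&tx).copied()
    }
}
```

Callers iterating over a transaction list would then query per key instead of cloning the map once per transaction.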

Comment on lines +171 to +172
// Select arbitrary candidate which is _not_ part of a user batch.
let candidates = self.inner.selection_candidates();
Collaborator commented:

This is not important now, and I know it was pre-existing, but re-allocating on every iteration here as well is probably not ideal.

Comment on lines +365 to +366
// We assume that the rpc component has verified everything, including the transaction
// proofs.
Collaborator commented:

nit, feel free to disregard: this is a bit redundant considering the safety comment near the end of the function

.await?
.into_inner()
.block_header
.expect("store should always send block header");
Collaborator commented:

I don't think this is true for blocks beyond the chain tip, so should this be an error?
