
Conversation

@rjl493456442 (Member):

It's an alternative to #32661.

success int
fail int
)
for addr, list := range txs {
Contributor:

We could randomize the order in which we traverse addresses. The advantage would be that at the system level, different nodes would convert different blobs, sharing the load, resulting in a faster overall conversion of the pending part of the distributed mempool.

Contributor:

We do use a map, though, so iteration is already randomized. Might be enough.

Member Author:

Yep, the randomness from map iteration should be sufficient.
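For reference, Go randomizes map iteration order on each range loop, so traversal over the per-sender map already differs from node to node. A standalone illustration (not part of the PR):

package main

import "fmt"

// Standalone illustration: Go intentionally randomizes map iteration order,
// so two nodes ranging over the same set of senders will generally visit
// them in different orders, spreading the conversion work naturally.
func main() {
	senders := map[string]int{
		"0xaaaa": 2, // pending blob tx counts (dummy values)
		"0xbbbb": 1,
		"0xcccc": 3,
	}
	for run := 0; run < 3; run++ {
		for addr := range senders {
			fmt.Print(addr, " ")
		}
		fmt.Println() // the printed order typically varies between runs
	}
}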

Comment on lines 924 to 927
// Remove the blob transaction regardless of whether the conversion succeeds or not
if err := p.store.Delete(id); err != nil {
Contributor:

On the happy path, wouldn't it be better to remove it just before re-adding the converted version? Not sure it has any significance, since the v0 version is useless by the time we start converting; just thinking aloud.
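A minimal sketch of that ordering, assuming a hypothetical convertSidecar helper inside the blobpool package (not the PR's actual code):

// Hypothetical ordering sketch: keep the v0 blob in the store until the
// converted replacement is ready, and delete it only just before re-adding.
func (p *BlobPool) convertAndReplace(id uint64, tx *types.Transaction) error {
	converted, err := convertSidecar(tx) // hypothetical conversion helper
	if err != nil {
		return err // nothing was removed, the old blob stays in place
	}
	// Happy path: drop the old version immediately before re-adding.
	if err := p.store.Delete(id); err != nil {
		log.Error("Failed to delete pre-conversion blob", "id", id, "err", err)
	}
	return p.add(converted)
}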

rjl493456442 force-pushed the fork-boundary-conversion-2 branch from 38c62f1 to bca071a on September 20, 2025 at 02:28.
// Deep copy all indexed transaction metadata.
all := make(map[common.Address]map[uint64]*blobTxMeta)
for sender, txs := range p.index {
all[sender] = make(map[uint64]*blobTxMeta)
Member Author:

It's a bit dangerous to hold the metadata pointer directly.

There are a bunch of fields tracked in the metadata and some of them are volatile, such as evictionExecTip.

I would prefer to track only the necessary fields, e.g.:

	// Copy only the stable identifiers of the indexed transactions.
	var (
		ids = make(map[common.Address]map[uint64]uint64)
		txs = make(map[common.Address]map[uint64]common.Hash)
	)
	for sender, list := range p.index {
		ids[sender] = make(map[uint64]uint64)
		txs[sender] = make(map[uint64]common.Hash)
		for _, m := range list {
			ids[sender][m.nonce] = m.id
			txs[sender][m.nonce] = m.hash
		}
	}

addwaitHist.Update(time.Since(waitStart).Nanoseconds())
defer p.lock.Unlock()
defer func(start time.Time) { addtimeHist.Update(time.Since(start).Nanoseconds()) }(time.Now())
errs[i] = p.add(tx)
Member Author:

We could define a function addLocked as a wrapper around add that handles the lock management.

e.g.,

// addLocked is a wrapper around add that acquires the pool mutex.
func (p *BlobPool) addLocked(tx *types.Transaction) error {
	// The blob pool blocks on adding a transaction. This is because blob txs are
	// only ever pulled from the network, so this method will act as the overload
	// protection for fetches.
	waitStart := time.Now()
	p.lock.Lock()
	addwaitHist.Update(time.Since(waitStart).Nanoseconds())
	defer p.lock.Unlock()

	defer func(start time.Time) {
		addtimeHist.Update(time.Since(start).Nanoseconds())
	}(time.Now())

	return p.add(tx)
}
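A sketch of the call site under that refactor (the surrounding Add signature is assumed here, not taken from the PR):

// Sketch of the batched Add delegating to addLocked, so the loop itself no
// longer has to manage the pool mutex or the timing histograms.
func (p *BlobPool) Add(txs []*types.Transaction, sync bool) []error {
	errs := make([]error, len(txs))
	for i, tx := range txs {
		errs[i] = p.addLocked(tx)
	}
	return errs
}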

)
// Remove the transaction from the pool's index
if last {
if len(p.index[from]) == 1 {
Member Author:

We blindly remove the last transaction from the queue, without validating that the removed tx is the specified one.

This is very fragile, as new txs from the same sender can be added to the queue during the conversion.

Member Author:

Also, you are converting the txs in nonce-incremental order, so the removed tx will be mismatched for sure.
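A hedged sketch of the kind of guard the removal could use instead, assuming p.index keeps a per-sender slice of metadata (the helper name is hypothetical):

// Hypothetical guard: locate the converted tx in the sender's queue by nonce
// and hash instead of blindly popping the last element.
func (p *BlobPool) removeConverted(from common.Address, nonce uint64, hash common.Hash) bool {
	list := p.index[from]
	for i, m := range list {
		if m.nonce == nonce && m.hash == hash {
			p.index[from] = append(list[:i], list[i+1:]...)
			if len(p.index[from]) == 0 {
				delete(p.index, from)
			}
			return true
		}
	}
	return false // replaced or evicted during conversion; nothing was removed
}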


evictionExecFeeDiff := tail.evictionExecFeeJumps - drop.evictionExecFeeJumps
evictionBlobFeeDiff := tail.evictionBlobFeeJumps - drop.evictionBlobFeeJumps
evictionExecFeeDiff := tail.evictionExecFeeJumps - tx.evictionExecFeeJumps
Member Author:

There is no guarantee that the tx being removed is the old tail.

The evictionExecFeeDiff should be computed from the old tail and the new tail; the evict heap operates on the tail tx.
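In sketch form, what this suggests (the heap.Fix call and field names are assumed from the surrounding code, not copied from the PR):

// Hedged sketch: the eviction heap entry reflects the sender's tail tx, so
// the delta must compare the tail before and after the removal, not the
// removed tx itself.
oldTail := txs[len(txs)-1] // tail before the removal
// ... remove the converted tx from txs ...
if len(txs) > 0 {
	newTail := txs[len(txs)-1]
	execDiff := newTail.evictionExecFeeJumps - oldTail.evictionExecFeeJumps
	blobDiff := newTail.evictionBlobFeeJumps - oldTail.evictionBlobFeeJumps
	if execDiff != 0 || blobDiff != 0 {
		heap.Fix(p.evict, p.evict.index[from]) // re-sort this sender's heap entry
	}
}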

if err := p.store.Delete(drop.id); err != nil {
log.Error("Failed to drop evicted transaction", "id", drop.id, "err", err)
}
return from, drop.nonce, drop.id
Member Author:

These fields are not used anymore.

}
// Now atomically swap the old sidecar with the converted sidecar.
p.lock.Lock()
p.remove(m, addr)
Member Author:

What if the tx fails to be re-added after the removal?
We will end up with a nonce gap in the middle.
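One hedged way to avoid that, sketched with hypothetical helpers (the encode/re-index names are assumptions, not the PR's API): only commit the removal once the converted tx has been stored.

// Hypothetical swap that cannot leave a nonce gap: the old entry is dropped
// only after the converted replacement has been stored successfully.
func (p *BlobPool) swapConverted(addr common.Address, m *blobTxMeta, converted *types.Transaction, blob []byte) error {
	p.lock.Lock()
	defer p.lock.Unlock()

	id, err := p.store.Put(blob) // converted tx, pre-encoded by the caller
	if err != nil {
		return err // the old tx stays indexed; no gap is introduced
	}
	p.remove(m, addr)                     // drop the old metadata only now
	p.indexConverted(addr, id, converted) // hypothetical re-index helper
	return nil
}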

delete := func() {
p.lock.Lock()
defer p.lock.Unlock()
if err := p.store.Delete(m.id); err != nil {
Member Author:

There is no guarantee that the slot is still occupied by the tx being converted.

It is, in theory, possible that the old tx was evicted and its slot reassigned to a new tx since the p.store.Get(m.id) call.
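A hedged sketch of the guard this implies, assuming the stored blob can be decoded back into a transaction and matched by hash:

// Hypothetical guard: re-read the slot under the lock and verify it still
// holds the tx we started converting before deleting it.
delete := func() {
	p.lock.Lock()
	defer p.lock.Unlock()

	blob, err := p.store.Get(m.id)
	if err != nil {
		return // slot already gone, nothing to delete
	}
	var stored types.Transaction
	if rlp.DecodeBytes(blob, &stored) != nil || stored.Hash() != m.hash {
		return // slot was reassigned to a different tx in the meantime
	}
	if err := p.store.Delete(m.id); err != nil {
		log.Error("Failed to delete converted blob", "id", m.id, "err", err)
	}
}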

@fjl (Contributor) commented on Sep 24, 2025:

Closing because we merged #32716 instead.

fjl closed this on Sep 24, 2025.