core/txpool/blobpool: implement fork boundary conversion #32665
Conversation
core/txpool/blobpool/blobpool.go
Outdated
	success int
	fail    int
)
for addr, list := range txs {
We could randomize the order in which we traverse addresses. The advantage would be that at the system level, different nodes would convert different blobs, sharing the load, resulting in a faster overall conversion of the pending part of the distributed mempool.
We use a map though, so the iteration order is already randomized. Might be enough.
Yep, the randomness from map iteration should be sufficient
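For illustration only, a minimal sketch of the explicit-shuffle alternative discussed here, using math/rand's Shuffle and a hypothetical convertSender helper; in practice, ranging over the txs map directly already gives a randomized order in Go, so the shuffle would only matter if a pre-built slice of addresses were used instead.

// Sketch only: collect the sender addresses and shuffle them explicitly before
// converting, so different nodes work through the pending set in different orders.
addrs := make([]common.Address, 0, len(txs))
for addr := range txs {
	addrs = append(addrs, addr)
}
rand.Shuffle(len(addrs), func(i, j int) {
	addrs[i], addrs[j] = addrs[j], addrs[i]
})
for _, addr := range addrs {
	convertSender(addr, txs[addr]) // hypothetical helper wrapping the per-sender conversion loop above
}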
core/txpool/blobpool/blobpool.go
Outdated
	// Remove the blob transaction regardless of conversion succeeds or not
	if err := p.store.Delete(id); err != nil {
In the happy path, wouldn't it be better to remove it just before re-adding the converted version? Not sure it has any significance, since the v0 version is useless by the time we start converting; just thinking aloud.
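For reference, a rough sketch of that ordering, assuming a hypothetical convertSidecar helper for the actual proof conversion; the old slot is only deleted once the outcome is known, right before re-adding the converted tx.

// Sketch only: convert first, and only touch the store once the outcome is known.
converted, err := convertSidecar(tx) // hypothetical conversion step
if err != nil {
	// The v0 version is useless after the fork anyway, so drop it on failure too.
	if derr := p.store.Delete(id); derr != nil {
		log.Error("Failed to delete unconvertible blob transaction", "id", id, "err", derr)
	}
	return err
}
// Happy path: remove the old version just before re-adding the converted one.
if err := p.store.Delete(id); err != nil {
	log.Error("Failed to delete old blob transaction", "id", id, "err", err)
}
return p.add(converted)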
Force-pushed 38c62f1 to bca071a
	// Deep copy all indexed transaction metadata.
	all := make(map[common.Address]map[uint64]*blobTxMeta)
	for sender, txs := range p.index {
		all[sender] = make(map[uint64]*blobTxMeta)
It's a bit dangerous to hold the metadata pointer directly.
There are a bunch of fields tracked in the metadata and some of them are volatile, such as evictionExecTip.
I would prefer to only track the necessary fields, like
// Deep copy all indexed transaction metadata.
var (
	ids = make(map[common.Address]map[uint64]uint64)
	txs = make(map[common.Address]map[uint64]common.Hash)
)
for sender, list := range p.index {
	ids[sender] = make(map[uint64]uint64)
	txs[sender] = make(map[uint64]common.Hash)
	for _, m := range list {
		ids[sender][m.nonce] = m.id
		txs[sender][m.nonce] = m.hash
	}
}

	addwaitHist.Update(time.Since(waitStart).Nanoseconds())
	defer p.lock.Unlock()

	defer func(start time.Time) { addtimeHist.Update(time.Since(start).Nanoseconds()) }(time.Now())
	errs[i] = p.add(tx)
We can define a function addLocked as a wrapper around add with lock management.
e.g.,
// addLocked is the wrapper of add with the mutex held.
func (p *BlobPool) addLocked(tx *types.Transaction) error {
	// The blob pool blocks on adding a transaction. This is because blob txs are
	// only even pulled from the network, so this method will act as the overload
	// protection for fetches.
	waitStart := time.Now()
	p.lock.Lock()
	addwaitHist.Update(time.Since(waitStart).Nanoseconds())
	defer p.lock.Unlock()

	defer func(start time.Time) {
		addtimeHist.Update(time.Since(start).Nanoseconds())
	}(time.Now())
	return p.add(tx)
}

	)
	// Remove the transaction from the pool's index
	if last {
		if len(p.index[from]) == 1 {
We blindly remove the last transaction from the queue, without validating that the removed tx is the specified one.
It's very fragile, as new txs from the same sender can be added to the queue during the conversion.
And also, you are converting the txs in nonce-incremental order, so the tx will be mismatched for sure.
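A hedged sketch of the kind of check these comments ask for, reusing the index/metadata names visible in the snippets above: confirm the queue's tail really is the tx being removed, and fall back to splicing out the matching entry otherwise.

// Sketch only: do not blindly pop the tail; confirm it matches the tx we intend
// to remove, since the sender may have queued new txs during the conversion.
queue := p.index[from]
tail := queue[len(queue)-1]
if tail.hash == m.hash && tail.nonce == m.nonce {
	p.index[from] = queue[:len(queue)-1]
} else {
	// Removing from the middle of the queue: splice out the matching entry.
	for i, entry := range queue {
		if entry.hash == m.hash {
			p.index[from] = append(queue[:i], queue[i+1:]...)
			break
		}
	}
}

Either way, the removal should be keyed on the concrete metadata entry rather than on queue position.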
-	evictionExecFeeDiff := tail.evictionExecFeeJumps - drop.evictionExecFeeJumps
-	evictionBlobFeeDiff := tail.evictionBlobFeeJumps - drop.evictionBlobFeeJumps
+	evictionExecFeeDiff := tail.evictionExecFeeJumps - tx.evictionExecFeeJumps
There is no guarantee the tx being removed is the old tail.
The evictionExecFeeDiff should be computed based on the old tail and the new tail; the evict heap is keyed on the tail tx.
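A minimal sketch of the recomputation being suggested, assuming oldTail and newTail are the sender's last queued metadata before and after the removal; the p.evict.index lookup is an assumption about the evict heap's fields.

// Sketch only: the evict heap is keyed on the tail tx, so the diffs must compare
// the old tail against the new tail, not the removed tx.
evictionExecFeeDiff := newTail.evictionExecFeeJumps - oldTail.evictionExecFeeJumps
evictionBlobFeeDiff := newTail.evictionBlobFeeJumps - oldTail.evictionBlobFeeJumps
if evictionExecFeeDiff != 0 || evictionBlobFeeDiff != 0 {
	// Only fix up the heap when the tail's eviction priority actually changed.
	heap.Fix(p.evict, p.evict.index[from]) // assumed field layout of the evict heap
}

The exact heap bookkeeping may differ; the point is only that the diff has to be taken between the two tails.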
	if err := p.store.Delete(drop.id); err != nil {
		log.Error("Failed to drop evicted transaction", "id", drop.id, "err", err)
	}
	return from, drop.nonce, drop.id
These fields are not used anymore
	}
	// Now atomically swap the old sidecar with the converted sidecar.
	p.lock.Lock()
	p.remove(m, addr)
What if the tx fails to be added back after the removal?
We will have a nonce gap in the middle.
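One hedged way to handle that failure, sticking to the p.remove/p.add calls visible in the diff: if the converted tx cannot be re-added, evict the sender's higher-nonce txs as well so the remaining queue stays gapless.

// Sketch only: if re-adding the converted tx fails, do not leave a hole at
// m.nonce; drop every queued tx from this sender with a higher nonce too.
p.lock.Lock()
p.remove(m, addr)
if err := p.add(converted); err != nil { // converted is the re-encoded tx (assumption)
	var stale []*blobTxMeta
	for _, next := range p.index[addr] {
		if next.nonce > m.nonce {
			stale = append(stale, next)
		}
	}
	for _, next := range stale {
		p.remove(next, addr)
	}
}
p.lock.Unlock()

Re-inserting the original v0 entry on failure would be an alternative; which is preferable depends on how the conversion loop treats unrecoverable txs.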
	delete := func() {
		p.lock.Lock()
		defer p.lock.Unlock()
		if err := p.store.Delete(m.id); err != nil {
There is no guarantee the slot is still occupied by the tx being converted.
It is, in theory, possible that the old tx was evicted and the slot was reassigned to a new tx since the p.store.Get(m.id).
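A hedged sketch of guarding against that race: before deleting, re-read the slot and confirm it still decodes to the tx that was converted (rlp and types are the standard geth packages; the control flow is illustrative only).

// Sketch only: the slot id may have been recycled since the earlier Get, so
// re-check its contents before deleting anything.
blob, err := p.store.Get(m.id)
if err != nil {
	return // the slot was already dropped; nothing left to delete
}
var stored types.Transaction
if err := rlp.DecodeBytes(blob, &stored); err != nil || stored.Hash() != m.hash {
	// The slot now holds a different (or undecodable) tx; leave it alone.
	return
}
if err := p.store.Delete(m.id); err != nil {
	log.Error("Failed to delete converted blob transaction", "id", m.id, "err", err)
}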
Closing because we merged #32716 instead.
It's an alternative to #32661.