Conversation

@MartinquaXD MartinquaXD commented Nov 3, 2025

Description

Currently the autopilot assembles the contents of the auction and then persists it in the DB by replacing the current auction with the new one; that same query also returns the ID of the new auction.
This is unfortunately pretty slow (~440ms on prod mainnet) and therefore significantly increases the delay between seeing a new block and sending the fully assembled auction to solvers.

Changes

Instead, we introduce a new DB query that just increments and returns the auction id counter for the next auction. We then spawn a background task that writes the fully assembled auction to the DB and uploads it to S3.
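
As a rough illustration of that shape (a sketch only, assuming sqlx/Postgres; the sequence, table, and column names are invented for the example and are not the PR's actual code):

```rust
use serde_json::Value;
use sqlx::PgPool;

// Sketch of the idea: reserve the id cheaply, persist everything else later.
async fn dispatch_auction(pool: PgPool, auction: Value) -> anyhow::Result<i64> {
    // Cheap query on the hot path: only bump and return the auction id counter.
    let id: i64 = sqlx::query_scalar("SELECT nextval('auction_id_seq')")
        .fetch_one(&pool)
        .await?;

    // The expensive persistence (DB insert, S3 upload) moves off the critical path.
    tokio::spawn(async move {
        let result = sqlx::query("INSERT INTO auctions (id, json) VALUES ($1, $2)")
            .bind(id)
            .bind(&auction)
            .execute(&pool)
            .await;
        if let Err(err) = result {
            tracing::warn!(?err, "failed to persist auction in background");
        }
        // The S3 upload of the same payload would follow here.
    });

    // The auction can be sent to solvers as soon as its id is known.
    Ok(id)
}
```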

The trade-off is that it's now possible to fully run an auction that never gets persisted anywhere. In practice this does not happen (no instance in the last 30 days), and this information is only used for debugging anyway.

How to test

e2e tests

@MartinquaXD MartinquaXD requested a review from a team as a code owner November 3, 2025 21:16
@MartinquaXD MartinquaXD marked this pull request as draft November 3, 2025 21:22
@MartinquaXD MartinquaXD changed the title from "upload auction entirely in background" to "persist current auction in background task" Nov 3, 2025
@MartinquaXD MartinquaXD marked this pull request as ready for review November 3, 2025 22:14
Comment on lines 82 to 83
id: domain::auction::Id,
auction: &domain::RawAuctionData,
Contributor

I think the variables could have better names; it's kind of hard to tell which is which just by looking at them. Is the ID the new one, or is it contained in AuctionData?

```diff
     let mut ex = self.pool.acquire().await?;
-    let id = database::auction::replace_auction(&mut ex, &data).await?;
-    Ok(id)
+    database::auction::insert_auction_with_id(&mut ex, id, &data).await?;
```
Contributor

To supplement my point, I'd have to dive up to here to understand that `id` is the new auction ID.

Comment on lines +106 to +107
Ok(key) => tracing::info!(?key, "uploaded auction to s3"),
Err(err) => tracing::warn!(?err, "failed to upload auction to s3"),
Contributor

Should we add the auction id to the logs too? I'm not sure the error has it

Contributor Author

The upload key already contains the auction id so I'd leave it as is.

Contributor

@squadgazzz squadgazzz left a comment

That change looks a bit risky and awkward to me. How about introducing a simple table with an auction_id counter as the safest approach?

fafk commented Nov 5, 2025

I think there is a potential issue with ordering: it could happen, however unlikely, that you spawn a task T1, then T2, but T2 finishes before T1 and T1's write then overwrites the newer auction. 🤔

@github-actions

This pull request has been marked as stale because it has been inactive a while. Please update this pull request or it will be automatically closed.

@github-actions github-actions bot added the stale label Nov 22, 2025
@MartinquaXD
Contributor Author

> That change looks a bit risky and awkward to me. How about introducing a simple table with an auction_id counter as the safest approach?

In what way would a new table be less risky than using the already existing counter? Or do you see the risk in moving the upload to a background task?
I agree that it's a bit awkward to split the id generation from the replacement operation, but it currently takes a good chunk of the available time and this would be an easy win.

Other than ☝️ I (hopefully) improved the variable naming to make it clearer and addressed the comment about uploads happening out of order.
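
For illustration of what such an out-of-order guard could look like (a hypothetical sketch, not necessarily how this PR handles it; the table, column, and function names are assumptions): make the background write conditional on no newer auction having been persisted yet.

```rust
use serde_json::Value;
use sqlx::{PgPool, Result};

// Sketch only: the INSERT becomes a no-op when a later auction already landed,
// so a slow task T1 can never clobber the row written by a faster, newer task T2.
async fn insert_if_newest(pool: &PgPool, id: i64, json: &Value) -> Result<()> {
    sqlx::query(
        "INSERT INTO auctions (id, json)
         SELECT $1, $2
         WHERE NOT EXISTS (SELECT 1 FROM auctions WHERE id > $1)",
    )
    .bind(id)
    .bind(json)
    .execute(pool)
    .await?;
    Ok(())
}
```

As the later review below notes, this operation is driven sequentially anyway, which already avoids the issue in practice.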

@github-actions github-actions bot removed the stale label Nov 28, 2025
Contributor

@squadgazzz squadgazzz left a comment

It was a long time ago, and I can't remember the case I was thinking about. Right now I don't really see any issue, since this operation is executed sequentially here:

```rust
if let Some(auction) = self_arc
    .next_auction(start_block, &mut last_auction, &mut last_block)
    .await
```

Apologies for the back-and-forth.

@MartinquaXD MartinquaXD added this pull request to the merge queue Nov 28, 2025
Merged via the queue into main with commit fa336e5 Nov 28, 2025
18 checks passed
@MartinquaXD MartinquaXD deleted the persists-auction-in-background branch November 28, 2025 12:51
@github-actions github-actions bot locked and limited conversation to collaborators Nov 28, 2025