
Conversation

anihamde (Contributor):

This PR addresses the growing size of the auction Postgres table. To manage this, we can delete historical rows from the auction table the same way we do for the bid and opportunity tables. This PR adds a tx_hash column to the analytics bid tables and inserts that value as part of the add_bid_analytics method. It also adds a deletion loop for the auction Postgres table.
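For reference, the deletion loop reuses the batched-delete pattern from the bid and opportunity cleanup; one iteration boils down to the query reviewed in the diff below, with $1 as the cutoff timestamp and $2 as the batch size:

WITH rows_to_delete AS (
    SELECT id FROM auction WHERE creation_time < $1 LIMIT $2
)
DELETE FROM auction WHERE id IN (SELECT id FROM rows_to_delete);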

vercel bot commented Jul 26, 2025

1 skipped deployment: swap-staging (Ignored), updated Jul 29, 2025 5:19pm

@@ -0,0 +1 @@
CREATE INDEX auction_creation_time_idx ON auction (creation_time);
anihamde (Contributor, Author):

This is going to be a big migration, but it may be necessary to efficiently delete rows from auction long-term. We could optionally make this query CREATE INDEX CONCURRENTLY, which allows the index to be created non-atomically and without blocking writes.
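For example (note that CREATE INDEX CONCURRENTLY cannot run inside a transaction block, so if this goes through sqlx migrations the migration file would presumably need to opt out of the transactional wrapper):

-- non-blocking variant; must run outside a transaction block
CREATE INDEX CONCURRENTLY auction_creation_time_idx ON auction (creation_time);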

Contributor:

Yes, we need to do it concurrently. I think it also needs about 30% more space than the original table, so if auction is 118 GB we need to have at least 40 GB available on the DB.

Contributor:

If the query fails, we need to first remove the index (concurrently) and then rerun the query; a failed CREATE INDEX CONCURRENTLY leaves an INVALID index behind that has to be dropped before retrying.
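Concretely, the retry sequence would be something like:

-- drop the INVALID index left behind by the failed concurrent build
DROP INDEX CONCURRENTLY IF EXISTS auction_creation_time_idx;
-- then rebuild it
CREATE INDEX CONCURRENTLY auction_creation_time_idx ON auction (creation_time);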


let n_auctions_deleted = sqlx::query!(
"WITH rows_to_delete AS (
SELECT id FROM auction WHERE creation_time < $1 LIMIT $2
) DELETE FROM auction WHERE id IN (SELECT id FROM rows_to_delete)",
Contributor:

The bid table has a protecting foreign key to this table, so if a bid connected to an auction still exists, this query will fail to delete those rows :-?
Can you test the behaviour locally?

I think even if just one of the rows fails to delete, Postgres will throw an error and no rows will be deleted, since the whole DELETE statement is atomic.
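One way to avoid tripping the foreign key would be to only select auctions with no remaining bids; a sketch, assuming the bid table references auction through an auction_id column (that column name is a guess for illustration, not taken from this PR):

-- skip auctions that still have dependent bids (auction_id is an assumed column name)
WITH rows_to_delete AS (
    SELECT a.id FROM auction a
    WHERE a.creation_time < $1
      AND NOT EXISTS (SELECT 1 FROM bid b WHERE b.auction_id = a.id)
    LIMIT $2
)
DELETE FROM auction WHERE id IN (SELECT id FROM rows_to_delete);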

@anihamde merged commit 342df3c into main on Jul 29, 2025 (3 checks passed).
@anihamde deleted the fix/delete-pg-auction branch on July 29, 2025 at 17:19.