Fix: delete pg auction #576
Conversation
@@ -0,0 +1 @@
CREATE INDEX auction_creation_time_idx ON auction (creation_time);
This is going to be a big migration, but it may be necessary to efficiently delete rows from auction long-term. We could optionally make this query CREATE INDEX CONCURRENTLY, which allows the index to be built non-atomically and without blocking writes.
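A minimal sketch of the concurrent variant, reusing the index and table names from the migration above:

-- CREATE INDEX CONCURRENTLY cannot run inside a transaction block,
-- so the migration runner must execute this statement on its own.
CREATE INDEX CONCURRENTLY auction_creation_time_idx ON auction (creation_time);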
Yes, we need to do it concurrently. I think it also needs about 30% more disk space than the original table, so if auction is 118 GB we need to have at least ~40 GB available on the DB.
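One way to check how much headroom that implies before kicking off the build, using the standard Postgres size functions:

-- current on-disk size of the auction table, including indexes and TOAST
SELECT pg_size_pretty(pg_total_relation_size('auction'));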
If the query fails, we need to first remove the leftover index (again concurrently) and then rerun the query.
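A sketch of that recovery step, assuming the index name from the migration above:

-- a failed CREATE INDEX CONCURRENTLY leaves an INVALID index behind;
-- drop it without blocking writes before retrying the build
DROP INDEX CONCURRENTLY IF EXISTS auction_creation_time_idx;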
let n_auctions_deleted = sqlx::query!(
    "WITH rows_to_delete AS (
        SELECT id FROM auction WHERE creation_time < $1 LIMIT $2
    ) DELETE FROM auction WHERE id IN (SELECT id FROM rows_to_delete)",
Bids has a protecting foreign key to this table, so if a bid connected to one of these auctions still exists, this query will fail to delete those rows :-?
Can you test the behaviour on your local?
I think even if just one of the rows fails to delete, Postgres will throw an error and no rows will be deleted.
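One possible workaround, sketched under the assumption that the referencing column is bid.auction_id (the actual table and column names may differ): delete the dependent bids in the same statement via a data-modifying CTE, so the FK check at the end of the statement passes.

WITH rows_to_delete AS (
    SELECT id FROM auction WHERE creation_time < $1 LIMIT $2
),
-- remove the dependent bids first (assumed FK column: auction_id)
deleted_bids AS (
    DELETE FROM bid WHERE auction_id IN (SELECT id FROM rows_to_delete)
)
DELETE FROM auction WHERE id IN (SELECT id FROM rows_to_delete);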
This PR addresses the growing size of the auction postgres table. In order to manage this, we can delete historical rows from the auction PG table, similar to how we do for the bid and opportunity tables. This PR adds a tx_hash column to the analytics bid tables and inserts that value as part of the add_bid_analytics method. It also adds a deletion loop for the auction postgres table.