Postgres native partitioning might be useful once there are millions of rows in the object table. If we partition by bucket_id, then since every API operation filters on a bucket, each query only has to touch a single partition.
Another advantage is that one huge bucket shouldn't affect query performance for other buckets.
I think we can use list partitioning. New partitions can be created dynamically when new buckets are created (in the API code or via triggers). Dropping a bucket also becomes simple, since we can just drop the partition belonging to that bucket_id (see the sketch below).
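A rough sketch of what this could look like. The table, column, and partition names here are illustrative, not our actual schema; also note that Postgres requires the partition key to be part of any primary key on a partitioned table:

```sql
-- Illustrative schema only; column names are assumptions.
CREATE TABLE objects (
    id        uuid NOT NULL,
    bucket_id text NOT NULL,
    name      text NOT NULL,
    -- Postgres requires the partition key in any unique constraint,
    -- so bucket_id must be part of the primary key.
    PRIMARY KEY (bucket_id, id)
) PARTITION BY LIST (bucket_id);

-- Run when a bucket is created (from the API code, or via a trigger
-- on the buckets table using EXECUTE format(...)):
CREATE TABLE objects_bucket_avatars
    PARTITION OF objects
    FOR VALUES IN ('avatars');

-- Dropping a bucket is then a single DDL statement:
DROP TABLE objects_bucket_avatars;
```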
Query planning takes longer and uses more memory as the number of partitions grows. Will this be a problem with thousands of buckets?
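One way to sanity-check this: with a constant bucket_id, the planner should prune down to a single partition at plan time, and EXPLAIN ANALYZE reports both planning and execution time, so the planning overhead at different partition counts is directly measurable. A sketch, reusing the hypothetical schema above:

```sql
-- Expect a scan on just the one partition, plus a Planning Time line
-- that can be compared across different partition counts.
EXPLAIN ANALYZE
SELECT * FROM objects WHERE bucket_id = 'avatars';
```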
This is probably worth exploring after we have a proper benchmark in place.