Support for partitioned tables #683
What kind of partitioning strategy are you using in this case?
It's the new style "declarative" partitioning (EDIT: described here).
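For context, a minimal illustration of what declarative range partitioning looks like. The table and column names here are made up; the actual setup used for this issue is posted further down the thread.

```sql
-- Illustrative only: declarative partitioning (PostgreSQL 10+),
-- as opposed to the older inheritance/trigger-based approach.
CREATE TABLE events (
    id        bigint NOT NULL,
    bucket_id int    NOT NULL
) PARTITION BY RANGE (bucket_id);

-- Each partition declares the range of bucket_id values it holds.
CREATE TABLE events_p0 PARTITION OF events
    FOR VALUES FROM (0) TO (100);
```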
I don't have much experience with partitioning in Postgres, do you think you could submit a PR with a test setup? Something that could emulate your use case and save me time on R&D. And just FYI, this is the file pgweb uses to look up tables/functions/etc: https://github.com/sosedoff/pgweb/blob/master/pkg/statements/sql/objects.sql - so maybe we could support partitioning with minimal changes. LMK
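For reference, a minimal sketch of the kind of catalog query that could tell parent tables apart from their partitions. This is not the actual objects.sql query; the column aliases and filters are illustrative only.

```sql
-- Sketch only: distinguish ordinary tables, partitioned (parent) tables,
-- and individual partitions via pg_class (PostgreSQL 10+).
SELECT
    n.nspname        AS schema_name,
    c.relname        AS table_name,
    c.relkind        AS kind,          -- 'r' = ordinary table, 'p' = partitioned parent
    c.relispartition AS is_partition   -- true if this relation is a partition of another table
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'p')
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```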
By "test setup", would a script that prepares a partitioned table, be enough? Maybe that also populates it? Where should it be posted? At the present time it's difficult to steal time for looking into the internals of postgres and pgweb, so a proper pull request changing the logic in |
Yes, by test setup I mean a PR with instructions to create tables/partitions + bonus if they have data. You can reference a gist if you want as well, the medium does not matter much. I'm pretty short on time these days too, so your best bet is to provide as many setup instructions as you can.
Create the virtual (parent) table:

```sql
CREATE TABLE test_runs (
  testrun_id bigint NOT NULL GENERATED BY DEFAULT AS IDENTITY,
  testname_id int NOT NULL,
  exitcode int,
  duration int,
  PRIMARY KEY (testname_id, testrun_id)
) PARTITION BY RANGE (testname_id);
```

Create 1000 partitions to hold the values of `testname_id`:

```sql
CREATE EXTENSION plpython3u;

DO $$
start = 0
fin = 20000
step = 20
for n in range(start, fin, step):
    nmax = n + 20
    stmt = f'''
        CREATE TABLE test_runs__PART_max{nmax}
        PARTITION OF test_runs
        FOR VALUES FROM ({n}) TO ({nmax})
    '''
    plpy.info(stmt)
    plpy.execute(stmt)
$$ LANGUAGE plpython3u;
```

To populate, we insert 1M random values that will be automatically distributed into the partitions:

```sql
INSERT INTO test_runs (testname_id, exitcode, duration)
SELECT s % 20000, round(random()), round(random()*30)
FROM generate_series(0,999999) AS s;
```
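As a quick sanity check (not part of the original setup; the `ANALYZE` step and the row-estimate approach are an assumption), something like this could confirm how the rows were spread across the partitions:

```sql
-- Sketch: list a few partitions of test_runs with the planner's row estimates.
ANALYZE test_runs;

SELECT c.relname            AS partition_name,
       c.reltuples::bigint  AS approx_rows
FROM pg_inherits i
JOIN pg_class c ON c.oid = i.inhrelid
WHERE i.inhparent = 'test_runs'::regclass
ORDER BY c.relname
LIMIT 10;
```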
@sosedoff would you be willing to accept PRs that add this support?
I would not be super opposed to it if the feature is done in a generic way, without affecting the majority of the users of this tool who rely on simplicity and portability (myself included, I really don't want to turn this into a pgadmin clone). Another important aspect is whether you're working on the feature for your own consumption and understand the use case well, since, as I mentioned, I don't deal with partitioning often and can't justify the need in the first place.
@sosedoff I'll keep this in mind. Right now, there are two different problems:
I think the first would be a very simple PR; the second is more subjective, and I'll make sure to post the approach here before starting development -- but it'll require me to understand the structure of the code a bit, so it may take a few weeks.
Regarding point 1 -- fair enough, I think it's reasonable to fix the UI so at least folks can view the tables. Not the best UX, but it's better than a non-functioning one. Don't have much to say on point 2 -- feel free to experiment and have fun!
I have a huge table of 10G rows. It is split into 1000 partitions - almost half of those are empty, to be ready for future data influx.

Problems:

1. The partitioned (parent) table does not show up in the `tables` drop-down on the sidebar.
2. The `tables` drop-down lists the 1000 tables that are the partitions, which makes the whole sidebar require a lot of scrolling when working with other tables.

A few ideas to handle this:

1. Show the parent table in the `tables` drop-down on the sidebar.
2. List the partitions under a separate drop-down, e.g. `partitions` (see the sketch below).

Previously, I had the table unpartitioned and in general I was very happy with pgweb's performance and responsiveness with the billions-of-rows table (except for some VIEWs hanging forever). After I partitioned the table, the overall performance of the database went up, so I expect pgweb to handle it even better once support for partitions is implemented. Thanks!
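A rough sketch of how idea 2 might look against the system catalogs - purely illustrative, not pgweb's actual query - grouping each partition under its parent so the sidebar could show a single entry per partitioned table:

```sql
-- Sketch only: one row per partitioned (parent) table, with its partitions
-- aggregated into an array that a separate "partitions" drop-down could use.
SELECT
    parent.relname AS parent_table,
    array_agg(child.relname ORDER BY child.relname) AS partitions
FROM pg_inherits i
JOIN pg_class parent ON parent.oid = i.inhparent
JOIN pg_class child  ON child.oid  = i.inhrelid
WHERE parent.relkind = 'p'
GROUP BY parent.relname;
```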