
Growing table powa_statements_src_tmp #119

Closed
Alexise23 opened this issue Dec 2, 2020 · 15 comments

@Alexise23 commented Dec 2, 2020

Hi,
I have PoWA v4 configured with 4 PostgreSQL servers and a dedicated repository database server. The powa_statements_src_tmp table has grown to 44 GB. How can I reduce its size? Can I truncate it manually?
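For reference, the on-disk footprint of the table (including its indexes) can be measured with the standard pg_total_relation_size function, e.g.:

select pg_size_pretty(pg_total_relation_size('powa_statements_src_tmp'));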

Thanks in advance

@banlex73 commented Dec 2, 2020 via email

@Alexise23 (Author)

No, thank you very much.

@rjuju (Member) commented Dec 10, 2020

Is the table still growing, or has the size stabilized? How much retention did you configure for the servers?
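In case it helps: assuming the PoWA 4 repository catalog powa_servers, which stores the per-server retention, the configured values can be listed with something like:

select id, hostname, retention from powa_servers;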

Do you see any logs on the repository server and/or powa-collector?

This table should almost always be empty, as the data is only there for the time needed to perform a snapshot, and should probably not even be visible outside of the powa-collector transaction. Can you give the result of the following query?

select srvid, ts, count(*) from powa_statements_src_tmp group by 1, 2;

If needed you can truncate this table; the worst that could happen is that you lose the data for a single snapshot, for one or more remote servers.
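For example, a plain TRUNCATE is enough (note that it briefly takes an ACCESS EXCLUSIVE lock on the table):

truncate powa_statements_src_tmp;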

@Alexise23 (Author)

Hi rjuju,
The table size has stabilized. Another question: can I manually truncate the powa_all_relations_src_tmp table?

Thanks in advance

@rjuju (Member) commented Jan 4, 2021

Hi @Alexise23. As I said, nothing really bad should happen by truncating this table, but if you can provide the result of the mentioned query, I'll know whether there's a bug (like leaked rows or something) that needs to be fixed, or just autovacuum concerns that should be properly documented.

@Alexise23 (Author)

Hi rjuju, the result of the mentioned query is zero rows, but there is another table that is growing: powa_all_relations_src_tmp. Can I truncate it manually?

@rjuju (Member) commented Jan 4, 2021

Oh right, I'm sorry, I didn't realize it was a different table. It's also safe to truncate that table, but it would be nice to know if it contains any rows, with something like:

select srvid, ts, count(*) from powa_all_relations_src_tmp group by 1, 2;

@Alexise23 (Author)

Hi rjuju,
The result of the mentioned query is zero rows.

Thanks

@rjuju (Member) commented Jan 5, 2021

Thanks a lot for checking!

So, since there is apparently no leakage of orphan rows, you could even go with a VACUUM FULL, which should be almost instant, without any risk of removing rows you still need.
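For example, something like:

vacuum full powa_statements_src_tmp;
vacuum full powa_all_relations_src_tmp;

VACUUM FULL rewrites the table under an ACCESS EXCLUSIVE lock, but on a near-empty table that should only last an instant.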

It also seems that autovacuum is definitely not doing its job fast enough, given the amount of bloat. If you don't change anything, this is likely to happen again. Can you try to make autovacuum more aggressive, and/or lower the thresholds for the powa tables?
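One possible way to do that is with per-table storage parameters; the values below are only a starting point to tune for your workload, not a tested recommendation:

-- scale factor 0: trigger autovacuum after a fixed number of dead tuples
alter table powa_statements_src_tmp set (autovacuum_vacuum_scale_factor = 0, autovacuum_vacuum_threshold = 1000);
alter table powa_all_relations_src_tmp set (autovacuum_vacuum_scale_factor = 0, autovacuum_vacuum_threshold = 1000);

With a scale factor of 0, autovacuum triggers once the fixed threshold of dead tuples is reached, rather than waiting for a fraction of the table to become dead.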

Also, do you have an estimate of the number of rows in pg_stat_all_tables and pg_stat_all_indexes on all the remote servers? And do you have any idea how fast the *_src_tmp tables grew?
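For the record, those counts can be obtained on each remote server with:

select count(*) from pg_stat_all_tables;
select count(*) from pg_stat_all_indexes;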

@Alexise23 (Author) commented Jan 5, 2021

Hi rjuju ,
I set up an aggressive vacuum and analyze for the two bloated tables. The row counts are 181 rows for pg_stat_all_tables and 229 rows for pg_stat_all_indexes. After the VACUUM FULL, the powa_statements_src_tmp table grew back to 10 GB of bloat in less than 24 hours, which forced me to run the aggressive vacuum again; the bloat is caused by the constant DELETE of tuples, and it was filling the server's storage.
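Dead-tuple accumulation on these tables can be tracked over time with the standard pg_stat_user_tables view, for example:

select relname, n_live_tup, n_dead_tup, last_autovacuum from pg_stat_user_tables where relname like 'powa%src_tmp';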

Thanks rjuju

@banlex73 commented Jan 6, 2021 via email

@rjuju (Member) commented Feb 6, 2021

@Alexise23 sorry for the late answer. What exact configuration did you use for the more aggressive vacuum on those 2 tables? Did you notice any autovacuum messages in the logs for those tables (like a task regularly canceled due to a conflicting lock, for instance)?
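The per-table autovacuum settings currently in effect can be checked in pg_class.reloptions, e.g.:

select relname, reloptions from pg_class where relname like 'powa%src_tmp';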

@banlex73 commented Feb 6, 2021 via email

@rjuju (Member) commented Feb 6, 2021

@banlex73 true, but if I understand correctly, this wasn't the number of rows in the table but the estimated number of rows stored in the statistics, so there may not be an issue on that side.

@rjuju (Member) commented Feb 4, 2022

Since there's no activity here anymore, I'm closing this issue. Feel free to reopen if needed.

rjuju closed this as completed Feb 4, 2022