From 486c9e914adcea0bacb9aae6e1c6582a1b046d39 Mon Sep 17 00:00:00 2001 From: fpagny Date: Tue, 29 Oct 2024 12:01:50 +0100 Subject: [PATCH 1/2] Update failing-backup-restore.mdx Adding troubleshooting section for automated backups blocked by long running queries in Serverless SQL Database. --- .../failing-backup-restore.mdx | 28 ++++++++++++++++++- 1 file changed, 27 insertions(+), 1 deletion(-) diff --git a/serverless/sql-databases/troubleshooting/failing-backup-restore.mdx b/serverless/sql-databases/troubleshooting/failing-backup-restore.mdx index 52a1ca37d0..95814aa64b 100644 --- a/serverless/sql-databases/troubleshooting/failing-backup-restore.mdx +++ b/serverless/sql-databases/troubleshooting/failing-backup-restore.mdx @@ -28,4 +28,30 @@ These issues are caused by using `pg_dump` and `pg_restore` versions that are no To solve these issues, upgrade your `pg_dump` and/or `pg_restore` modules: - You can upgrade them by installing PostgreSQL 16 which includes these tools. -- If you are using a third-party tool that includes these libraries, upgrade your tool. For instance, pgAdmin supports PostgreSQL 16 ecosystem from version 7.8. \ No newline at end of file +- If you are using a third-party tool that includes these libraries, upgrade your tool. For instance, pgAdmin supports PostgreSQL 16 ecosystem from version 7.8. + +## Automated backup fails due to long running queries + +### Problem + +When performing long running queries (lasting several hours or days), backup operations might not be performed successfully. Backup status will then appear as `error` or `unknown_status`. + +### Cause + +These issues are caused by queries locking database rows (usually long running transactions), preventing logical backup operations to read database rows. 
+ +### Solution + +To solve these issues, stop these queries: + +- List PostgreSQL processes and identify the ones running transactions since several hours ('xact_start' colmun) with the following command: + ``` + SELECT pid, state, usename, query, xact_start, query_start FROM pg_stat_activity ORDER BY xact_start; + ``` +- Stop the corresponding queries with: + ``` + SELECT pg_cancel_backend({pid}); + ``` + where `{pid}` is the process id from the long running query causing the issue (`pid` column of the previous step). + +Alternatively, you can also stop long running queries from Graphical PostgreSQL client such as [pgAdmin](https://www.pgadmin.org/) or [DBeaver](https://dbeaver.io/). From feabeadfa5d1bfc58b617627988d6ef9f207fcf1 Mon Sep 17 00:00:00 2001 From: nerda-codes <87707325+nerda-codes@users.noreply.github.com> Date: Thu, 31 Oct 2024 11:03:21 +0100 Subject: [PATCH 2/2] docs(review): review rowena Co-authored-by: Rowena Jones <36301604+RoRoJ@users.noreply.github.com> --- .../troubleshooting/failing-backup-restore.mdx | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/serverless/sql-databases/troubleshooting/failing-backup-restore.mdx b/serverless/sql-databases/troubleshooting/failing-backup-restore.mdx index 95814aa64b..c54a1aa309 100644 --- a/serverless/sql-databases/troubleshooting/failing-backup-restore.mdx +++ b/serverless/sql-databases/troubleshooting/failing-backup-restore.mdx @@ -30,21 +30,21 @@ To solve these issues, upgrade your `pg_dump` and/or `pg_restore` modules: - You can upgrade them by installing PostgreSQL 16 which includes these tools. - If you are using a third-party tool that includes these libraries, upgrade your tool. For instance, pgAdmin supports PostgreSQL 16 ecosystem from version 7.8. 
-## Automated backup fails due to long running queries
+## Automated backup fails due to long-running queries
 
 ### Problem
 
-When performing long running queries (lasting several hours or days), backup operations might not be performed successfully. Backup status will then appear as `error` or `unknown_status`.
+While long-running queries (lasting several hours or days) are in progress, backup operations might fail. The backup status will then appear as `error` or `unknown_status`.
 
 ### Cause
 
-These issues are caused by queries locking database rows (usually long running transactions), preventing logical backup operations to read database rows.
+These issues are caused by queries locking database rows (usually long-running transactions), preventing logical backup operations from reading database rows.
 
 ### Solution
 
 To solve these issues, stop these queries:
 
-- List PostgreSQL processes and identify the ones running transactions since several hours ('xact_start' colmun) with the following command:
+- List PostgreSQL processes and identify the ones that have been running transactions for several hours (`xact_start` column) with the following command:
   ```
   SELECT pid, state, usename, query, xact_start, query_start FROM pg_stat_activity ORDER BY xact_start;
   ```
@@ -52,6 +52,6 @@ To solve these issues, stop these queries:
   ```
   SELECT pg_cancel_backend({pid});
   ```
-  where `{pid}` is the process id from the long running query causing the issue (`pid` column of the previous step).
+  Where `{pid}` is the process ID of the long-running query causing the issue (the `pid` column from the previous step).
 
-Alternatively, you can also stop long running queries from Graphical PostgreSQL client such as [pgAdmin](https://www.pgadmin.org/) or [DBeaver](https://dbeaver.io/).
+Alternatively, you can also stop long-running queries using a graphical PostgreSQL client such as [pgAdmin](https://www.pgadmin.org/) or [DBeaver](https://dbeaver.io/).
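---

Reviewer note, not part of the patch: the two steps the doc describes (list transactions by `xact_start`, then cancel by `pid`) can also be combined into a single statement. This is only a sketch against the standard `pg_stat_activity` view; the 6-hour threshold is an arbitrary example value to adjust for your workload, and it requires a connected PostgreSQL session to run.

```sql
-- Cancel every backend whose current transaction has been open longer than
-- a chosen threshold (6 hours here, an example value, not a recommendation).
-- pg_cancel_backend() returns true when the cancellation signal was sent.
SELECT pid, usename, xact_start, pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE xact_start IS NOT NULL                      -- only sessions in a transaction
  AND xact_start < now() - interval '6 hours'     -- older than the threshold
  AND pid <> pg_backend_pid();                    -- never cancel our own session
```

If `pg_cancel_backend()` does not free the lock (for example, the backend is idle in transaction rather than executing a query), `pg_terminate_backend(pid)` is the stronger built-in alternative, at the cost of dropping the whole connection.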