Consider triggering VACUUM failsafe during scan.
The wraparound failsafe mechanism added by commit 1e55e7d handled the
one-pass strategy case (i.e. the "table has no indexes" case) by adding
a dedicated failsafe check.  This made up for the fact that the usual
failsafe checks, which are located inside lazy_vacuum_all_indexes(), can
never be reached during a one-pass strategy VACUUM.

This approach failed to account for two-pass VACUUMs that opt out of
index vacuuming up-front.  The INDEX_CLEANUP off case is the only case
that works like that.

Fix this by performing a failsafe check every 4GB during the first scan
of the heap, regardless of the details of the VACUUM.  This eliminates
the special case, and will make the failsafe trigger more reliably.

Author: Peter Geoghegan <pg@bowt.ie>
Reported-By: Andres Freund <andres@anarazel.de>
Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/20210424002921.pb3t7h6frupdqnkp@alap3.anarazel.de
petergeoghegan committed May 25, 2021
1 parent 713a431 commit c242baa
Showing 1 changed file with 19 additions and 24 deletions.
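
As a rough illustration added here for context (not part of the commit): with the default 8192-byte BLCKSZ, the FAILSAFE_EVERY_PAGES interval introduced below works out to 4GB / 8192 = 524,288 heap blocks between failsafe checks. The standalone C sketch that follows mimics the shape of the new per-block check; BLOCK_SIZE, CHECK_EVERY_BLOCKS, check_failsafe(), and the loop body are illustrative stand-ins, not actual vacuumlazy.c symbols.

/*
 * Illustrative sketch only -- not PostgreSQL code.  Shows the
 * "re-check an expensive condition once per ~4GB of blocks" pattern
 * that the patch applies to the first pass over the heap.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BLOCK_SIZE 8192u        /* assumes the default BLCKSZ */
#define CHECK_EVERY_BLOCKS \
    ((uint32_t) (((uint64_t) 4 * 1024 * 1024 * 1024) / BLOCK_SIZE))

/* Stand-in for lazy_check_wraparound_failsafe(); never triggers here */
static bool
check_failsafe(void)
{
    return false;
}

int
main(void)
{
    uint32_t nblocks = 1500000;     /* pretend heap size, in blocks */
    uint32_t next_check_block = 0;

    for (uint32_t blkno = 0; blkno < nblocks; blkno++)
    {
        /* Same shape as the new check added to lazy_scan_heap() */
        if (blkno - next_check_block >= CHECK_EVERY_BLOCKS)
        {
            (void) check_failsafe();
            next_check_block = blkno;
        }

        /* ... per-block pruning/freezing work would go here ... */
    }

    printf("checked every %u blocks (~4GB at %u-byte blocks)\n",
           (unsigned) CHECK_EVERY_BLOCKS, (unsigned) BLOCK_SIZE);
    return 0;
}

Tracking the block number of the last check, rather than a countdown, keeps the per-block cost to one subtraction and one comparison, mirroring how the existing VACUUM_FSM_EVERY_PAGES interval is handled in the hunk below.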
43 changes: 19 additions & 24 deletions src/backend/access/heap/vacuumlazy.c
@@ -110,10 +110,9 @@
 #define BYPASS_THRESHOLD_PAGES 0.02 /* i.e. 2% of rel_pages */
 
 /*
- * When a table is small (i.e. smaller than this), save cycles by avoiding
- * repeated failsafe checks
+ * Perform a failsafe check every 4GB during the heap scan, approximately
  */
-#define FAILSAFE_MIN_PAGES \
+#define FAILSAFE_EVERY_PAGES \
     ((BlockNumber) (((uint64) 4 * 1024 * 1024 * 1024) / BLCKSZ))
 
 /*
@@ -890,6 +889,7 @@ lazy_scan_heap(LVRelState *vacrel, VacuumParams *params, bool aggressive)
     BlockNumber nblocks,
                 blkno,
                 next_unskippable_block,
+                next_failsafe_block,
                 next_fsm_block_to_vacuum;
     PGRUsage    ru0;
     Buffer      vmbuffer = InvalidBuffer;
@@ -919,6 +919,7 @@ lazy_scan_heap(LVRelState *vacrel, VacuumParams *params, bool aggressive)
 
     nblocks = RelationGetNumberOfBlocks(vacrel->rel);
     next_unskippable_block = 0;
+    next_failsafe_block = 0;
     next_fsm_block_to_vacuum = 0;
     vacrel->rel_pages = nblocks;
     vacrel->scanned_pages = 0;
@@ -1130,6 +1131,20 @@ lazy_scan_heap(LVRelState *vacrel, VacuumParams *params, bool aggressive)
 
         vacuum_delay_point();
 
+        /*
+         * Regularly check if wraparound failsafe should trigger.
+         *
+         * There is a similar check inside lazy_vacuum_all_indexes(), but
+         * relfrozenxid might start to look dangerously old before we reach
+         * that point. This check also provides failsafe coverage for the
+         * one-pass strategy case.
+         */
+        if (blkno - next_failsafe_block >= FAILSAFE_EVERY_PAGES)
+        {
+            lazy_check_wraparound_failsafe(vacrel);
+            next_failsafe_block = blkno;
+        }
+
         /*
          * Consider if we definitely have enough space to process TIDs on page
          * already. If we are close to overrunning the available space for
@@ -1375,17 +1390,12 @@ lazy_scan_heap(LVRelState *vacrel, VacuumParams *params, bool aggressive)
              * Periodically perform FSM vacuuming to make newly-freed
              * space visible on upper FSM pages. Note we have not yet
              * performed FSM processing for blkno.
-             *
-             * Call lazy_check_wraparound_failsafe() here, too, since we
-             * also don't want to do that too frequently, or too
-             * infrequently.
              */
             if (blkno - next_fsm_block_to_vacuum >= VACUUM_FSM_EVERY_PAGES)
             {
                 FreeSpaceMapVacuumRange(vacrel->rel, next_fsm_block_to_vacuum,
                                         blkno);
                 next_fsm_block_to_vacuum = blkno;
-                lazy_check_wraparound_failsafe(vacrel);
             }
 
             /*
@@ -2558,7 +2568,6 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup, LVRelState *vacrel)
 /*
  * Trigger the failsafe to avoid wraparound failure when vacrel table has a
  * relfrozenxid and/or relminmxid that is dangerously far in the past.
- *
  * Triggering the failsafe makes the ongoing VACUUM bypass any further index
  * vacuuming and heap vacuuming. Truncating the heap is also bypassed.
  *
@@ -2567,24 +2576,10 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup, LVRelState *vacrel)
  * that it started out with.
  *
  * Returns true when failsafe has been triggered.
- *
- * Caller is expected to call here before and after vacuuming each index in
- * the case of two-pass VACUUM, or every VACUUM_FSM_EVERY_PAGES blocks in the
- * case of no-indexes/one-pass VACUUM.
- *
- * There is also a precheck before the first pass over the heap begins, which
- * is helpful when the failsafe initially triggers during a non-aggressive
- * VACUUM -- the automatic aggressive vacuum to prevent wraparound that
- * follows can independently trigger the failsafe right away.
  */
 static bool
 lazy_check_wraparound_failsafe(LVRelState *vacrel)
 {
-    /* Avoid calling vacuum_xid_failsafe_check() very frequently */
-    if (vacrel->num_index_scans == 0 &&
-        vacrel->rel_pages <= FAILSAFE_MIN_PAGES)
-        return false;
-
     /* Don't warn more than once per VACUUM */
     if (vacrel->do_failsafe)
         return true;
Expand All @@ -2600,7 +2595,7 @@ lazy_check_wraparound_failsafe(LVRelState *vacrel)
vacrel->do_failsafe = true;

ereport(WARNING,
(errmsg("abandoned index vacuuming of table \"%s.%s.%s\" as a failsafe after %d index scans",
(errmsg("bypassing nonessential maintenance of table \"%s.%s.%s\" as a failsafe after %d index scans",
get_database_name(MyDatabaseId),
vacrel->relnamespace,
vacrel->relname,
