Rename force_parallel_mode to debug_parallel_query
force_parallel_mode is meant to be used to allow us to exercise the
parallel query infrastructure to ensure that it's working as we expect.
It seems some users think this GUC is for forcing the query planner into
picking a parallel plan regardless of the costs.  A quick look at the
documentation would have made them realize that they were wrong, but the
GUC is likely too conveniently named, which evidently often results in
users expecting that it forces the planner into usefully parallelizing
queries.

Here we rename the GUC to something which casual users are less likely to
mistakenly think is what they need to make their query run more quickly.

For now, the old name can still be used.  We'll revisit if the old name
mapping can be removed once the buildfarm configs are all updated.

Reviewed-by: John Naylor
Discussion: https://postgr.es/m/CAApHDvrsOi92_uA7PEaHZMH-S4Xv+MGhQWA+GrP8b1kjpS1HjQ@mail.gmail.com
david-rowley committed Feb 15, 2023
1 parent 8e0e069 commit 5352ca2
Showing 23 changed files with 79 additions and 76 deletions.
8 changes: 4 additions & 4 deletions doc/src/sgml/config.sgml
@@ -11091,17 +11091,17 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir'
</listitem>
</varlistentry>

-<varlistentry id="guc-force-parallel-mode" xreflabel="force_parallel_mode">
-<term><varname>force_parallel_mode</varname> (<type>enum</type>)
+<varlistentry id="guc-debug-parallel-query" xreflabel="debug_parallel_query">
+<term><varname>debug_parallel_query</varname> (<type>enum</type>)
<indexterm>
-<primary><varname>force_parallel_mode</varname> configuration parameter</primary>
+<primary><varname>debug_parallel_query</varname> configuration parameter</primary>
</indexterm>
</term>
<listitem>
<para>
Allows the use of parallel queries for testing purposes even in cases
where no performance benefit is expected.
-The allowed values of <varname>force_parallel_mode</varname> are
+The allowed values of <varname>debug_parallel_query</varname> are
<literal>off</literal> (use parallel mode only when it is expected to improve
performance), <literal>on</literal> (force parallel query for all queries
for which it is thought to be safe), and <literal>regress</literal> (like
2 changes: 1 addition & 1 deletion doc/src/sgml/regress.sgml
@@ -370,7 +370,7 @@ make check LANG=C ENCODING=EUC_JP
set in the <varname>PGOPTIONS</varname> environment variable (for settings
that allow this):
<screen>
-make check PGOPTIONS="-c force_parallel_mode=regress -c work_mem=50MB"
+make check PGOPTIONS="-c debug_parallel_query=regress -c work_mem=50MB"
</screen>
When running against a temporary installation, custom settings can also be
set by supplying a pre-written <filename>postgresql.conf</filename>:
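For instance, a pre-written file equivalent to the PGOPTIONS example above might contain just the following (a sketch, not part of the commit; adjust the settings to what the test run needs):

```
debug_parallel_query = regress
work_mem = 50MB
```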
4 changes: 2 additions & 2 deletions src/backend/access/transam/parallel.c
@@ -1152,11 +1152,11 @@ HandleParallelMessage(ParallelContext *pcxt, int i, StringInfo msg)
* If desired, add a context line to show that this is a
* message propagated from a parallel worker. Otherwise, it
* can sometimes be confusing to understand what actually
-* happened. (We don't do this in FORCE_PARALLEL_REGRESS mode
+* happened. (We don't do this in DEBUG_PARALLEL_REGRESS mode
* because it causes test-result instability depending on
* whether a parallel worker is actually used or not.)
*/
-if (force_parallel_mode != FORCE_PARALLEL_REGRESS)
+if (debug_parallel_query != DEBUG_PARALLEL_REGRESS)
{
if (edata.context)
edata.context = psprintf("%s\n%s", edata.context,
4 changes: 2 additions & 2 deletions src/backend/commands/explain.c
@@ -756,8 +756,8 @@ ExplainPrintPlan(ExplainState *es, QueryDesc *queryDesc)
/*
* Sometimes we mark a Gather node as "invisible", which means that it's
* not to be displayed in EXPLAIN output. The purpose of this is to allow
-* running regression tests with force_parallel_mode=regress to get the
-* same results as running the same tests with force_parallel_mode=off.
+* running regression tests with debug_parallel_query=regress to get the
+* same results as running the same tests with debug_parallel_query=off.
* Such marking is currently only supported on a Gather at the top of the
* plan. We skip that node, and we must also hide per-worker detail data
* further down in the plan tree.
4 changes: 2 additions & 2 deletions src/backend/optimizer/plan/planmain.c
@@ -114,12 +114,12 @@ query_planner(PlannerInfo *root,
* Anything parallel-restricted in the query tlist will be
* dealt with later.) This is normally pretty silly, because
* a Result-only plan would never be interesting to
-* parallelize. However, if force_parallel_mode is on, then
+* parallelize. However, if debug_parallel_query is on, then
* we want to execute the Result in a parallel worker if
* possible, so we must do this.
*/
if (root->glob->parallelModeOK &&
-force_parallel_mode != FORCE_PARALLEL_OFF)
+debug_parallel_query != DEBUG_PARALLEL_OFF)
final_rel->consider_parallel =
is_parallel_safe(root, parse->jointree->quals);

20 changes: 10 additions & 10 deletions src/backend/optimizer/plan/planner.c
@@ -70,7 +70,7 @@

/* GUC parameters */
double cursor_tuple_fraction = DEFAULT_CURSOR_TUPLE_FRACTION;
-int force_parallel_mode = FORCE_PARALLEL_OFF;
+int debug_parallel_query = DEBUG_PARALLEL_OFF;
bool parallel_leader_participation = true;

/* Hook for plugins to get control in planner() */
@@ -364,20 +364,20 @@ standard_planner(Query *parse, const char *query_string, int cursorOptions,
* true during plan creation if a Gather or Gather Merge plan is actually
* created (cf. create_gather_plan, create_gather_merge_plan).
*
-* However, if force_parallel_mode = on or force_parallel_mode = regress,
-* then we impose parallel mode whenever it's safe to do so, even if the
-* final plan doesn't use parallelism. It's not safe to do so if the
-* query contains anything parallel-unsafe; parallelModeOK will be false
-* in that case. Note that parallelModeOK can't change after this point.
-* Otherwise, everything in the query is either parallel-safe or
+* However, if debug_parallel_query = on or debug_parallel_query =
+* regress, then we impose parallel mode whenever it's safe to do so, even
+* if the final plan doesn't use parallelism. It's not safe to do so if
+* the query contains anything parallel-unsafe; parallelModeOK will be
+* false in that case. Note that parallelModeOK can't change after this
+* point. Otherwise, everything in the query is either parallel-safe or
* parallel-restricted, and in either case it should be OK to impose
* parallel-mode restrictions. If that ends up breaking something, then
* either some function the user included in the query is incorrectly
* labeled as parallel-safe or parallel-restricted when in reality it's
* parallel-unsafe, or else the query planner itself has a bug.
*/
glob->parallelModeNeeded = glob->parallelModeOK &&
-(force_parallel_mode != FORCE_PARALLEL_OFF);
+(debug_parallel_query != DEBUG_PARALLEL_OFF);

/* Determine what fraction of the plan is likely to be scanned */
if (cursorOptions & CURSOR_OPT_FAST_PLAN)
@@ -431,7 +431,7 @@ standard_planner(Query *parse, const char *query_string, int cursorOptions,
* Optionally add a Gather node for testing purposes, provided this is
* actually a safe thing to do.
*/
-if (force_parallel_mode != FORCE_PARALLEL_OFF && top_plan->parallel_safe)
+if (debug_parallel_query != DEBUG_PARALLEL_OFF && top_plan->parallel_safe)
{
Gather *gather = makeNode(Gather);

@@ -449,7 +449,7 @@ standard_planner(Query *parse, const char *query_string, int cursorOptions,
gather->plan.righttree = NULL;
gather->num_workers = 1;
gather->single_copy = true;
-gather->invisible = (force_parallel_mode == FORCE_PARALLEL_REGRESS);
+gather->invisible = (debug_parallel_query == DEBUG_PARALLEL_REGRESS);

/*
* Since this Gather has no parallel-aware descendants to signal to,
1 change: 1 addition & 0 deletions src/backend/utils/misc/guc.c
@@ -186,6 +186,7 @@ static const unit_conversion time_unit_conversion_table[] =
static const char *const map_old_guc_names[] = {
"sort_mem", "work_mem",
"vacuum_mem", "maintenance_work_mem",
+"force_parallel_mode", "debug_parallel_query",
NULL
};

32 changes: 17 additions & 15 deletions src/backend/utils/misc/guc_tables.c
@@ -360,16 +360,16 @@ static const struct config_enum_entry recovery_prefetch_options[] = {
{NULL, 0, false}
};

-static const struct config_enum_entry force_parallel_mode_options[] = {
-{"off", FORCE_PARALLEL_OFF, false},
-{"on", FORCE_PARALLEL_ON, false},
-{"regress", FORCE_PARALLEL_REGRESS, false},
-{"true", FORCE_PARALLEL_ON, true},
-{"false", FORCE_PARALLEL_OFF, true},
-{"yes", FORCE_PARALLEL_ON, true},
-{"no", FORCE_PARALLEL_OFF, true},
-{"1", FORCE_PARALLEL_ON, true},
-{"0", FORCE_PARALLEL_OFF, true},
+static const struct config_enum_entry debug_parallel_query_options[] = {
+{"off", DEBUG_PARALLEL_OFF, false},
+{"on", DEBUG_PARALLEL_ON, false},
+{"regress", DEBUG_PARALLEL_REGRESS, false},
+{"true", DEBUG_PARALLEL_ON, true},
+{"false", DEBUG_PARALLEL_OFF, true},
+{"yes", DEBUG_PARALLEL_ON, true},
+{"no", DEBUG_PARALLEL_OFF, true},
+{"1", DEBUG_PARALLEL_ON, true},
+{"0", DEBUG_PARALLEL_OFF, true},
{NULL, 0, false}
};

@@ -4852,13 +4852,15 @@ struct config_enum ConfigureNamesEnum[] =
},

{
-{"force_parallel_mode", PGC_USERSET, DEVELOPER_OPTIONS,
-gettext_noop("Forces use of parallel query facilities."),
-gettext_noop("If possible, run query using a parallel worker and with parallel restrictions."),
+{"debug_parallel_query", PGC_USERSET, DEVELOPER_OPTIONS,
+gettext_noop("Forces the planner's use of parallel query nodes."),
+gettext_noop("This can be useful for testing the parallel query infrastructure "
+"by forcing the planner to generate plans which contain nodes "
+"that perform tuple communication between workers and the main process."),
GUC_NOT_IN_SAMPLE | GUC_EXPLAIN
},
-&force_parallel_mode,
-FORCE_PARALLEL_OFF, force_parallel_mode_options,
+&debug_parallel_query,
+DEBUG_PARALLEL_OFF, debug_parallel_query_options,
NULL, NULL, NULL
},

12 changes: 6 additions & 6 deletions src/include/optimizer/optimizer.h
@@ -99,16 +99,16 @@ extern bool is_pseudo_constant_for_index(PlannerInfo *root, Node *expr,

/* in plan/planner.c: */

-/* possible values for force_parallel_mode */
+/* possible values for debug_parallel_query */
typedef enum
{
-FORCE_PARALLEL_OFF,
-FORCE_PARALLEL_ON,
-FORCE_PARALLEL_REGRESS
-} ForceParallelMode;
+DEBUG_PARALLEL_OFF,
+DEBUG_PARALLEL_ON,
+DEBUG_PARALLEL_REGRESS
+} DebugParallelMode;

/* GUC parameters */
-extern PGDLLIMPORT int force_parallel_mode;
+extern PGDLLIMPORT int debug_parallel_query;
extern PGDLLIMPORT bool parallel_leader_participation;

extern struct PlannedStmt *planner(Query *parse, const char *query_string,
2 changes: 1 addition & 1 deletion src/pl/plpgsql/src/pl_exec.c
@@ -8066,7 +8066,7 @@ exec_save_simple_expr(PLpgSQL_expr *expr, CachedPlan *cplan)

/*
* Ordinarily, the plan node should be a simple Result. However, if
-* force_parallel_mode is on, the planner might've stuck a Gather node
+* debug_parallel_query is on, the planner might've stuck a Gather node
* atop that. The simplest way to deal with this is to look through the
* Gather node. The Gather node's tlist would normally contain a Var
* referencing the child node's output, but it could also be a Param, or
6 changes: 3 additions & 3 deletions src/test/isolation/expected/deadlock-parallel.out
@@ -15,21 +15,21 @@ lock_share

step e1l: SELECT lock_excl(1,x) FROM bigt LIMIT 1; <waiting ...>
step e2l: SELECT lock_excl(2,x) FROM bigt LIMIT 1; <waiting ...>
-step d1a2: SET force_parallel_mode = on;
+step d1a2: SET debug_parallel_query = on;
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
SET min_parallel_table_scan_size = 0;
SET parallel_leader_participation = off;
SET max_parallel_workers_per_gather = 3;
SELECT sum(lock_share(2,x)) FROM bigt; <waiting ...>
-step d2a1: SET force_parallel_mode = on;
+step d2a1: SET debug_parallel_query = on;
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
SET min_parallel_table_scan_size = 0;
SET parallel_leader_participation = off;
SET max_parallel_workers_per_gather = 3;
SELECT sum(lock_share(1,x)) FROM bigt;
-SET force_parallel_mode = off;
+SET debug_parallel_query = off;
RESET parallel_setup_cost;
RESET parallel_tuple_cost;
SELECT lock_share(3,x) FROM bigt LIMIT 1; <waiting ...>
16 changes: 8 additions & 8 deletions src/test/isolation/specs/deadlock-parallel.spec
@@ -6,7 +6,7 @@
# and is incorrectly marked parallel-safe so that it can execute in a worker.

# Note that we explicitly override any global settings of isolation level
-# or force_parallel_mode, to ensure we're testing what we intend to.
+# or debug_parallel_query, to ensure we're testing what we intend to.

# Otherwise, this is morally equivalent to deadlock-soft.spec:
# Four-process deadlock with two hard edges and two soft edges.
@@ -55,13 +55,13 @@ teardown

session d1
setup { BEGIN isolation level repeatable read;
-SET force_parallel_mode = off;
+SET debug_parallel_query = off;
SET deadlock_timeout = '10s';
}
# these locks will be taken in the leader, so they will persist:
step d1a1 { SELECT lock_share(1,x), lock_excl(3,x) FROM bigt LIMIT 1; }
# this causes all the parallel workers to take locks:
-step d1a2 { SET force_parallel_mode = on;
+step d1a2 { SET debug_parallel_query = on;
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
SET min_parallel_table_scan_size = 0;
@@ -72,29 +72,29 @@ step d1c { COMMIT; }

session d2
setup { BEGIN isolation level repeatable read;
-SET force_parallel_mode = off;
+SET debug_parallel_query = off;
SET deadlock_timeout = '10ms';
}
# this lock will be taken in the leader, so it will persist:
step d2a2 { select lock_share(2,x) FROM bigt LIMIT 1; }
# this causes all the parallel workers to take locks;
# after which, make the leader take lock 3 to prevent client-driven deadlock
-step d2a1 { SET force_parallel_mode = on;
+step d2a1 { SET debug_parallel_query = on;
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
SET min_parallel_table_scan_size = 0;
SET parallel_leader_participation = off;
SET max_parallel_workers_per_gather = 3;
SELECT sum(lock_share(1,x)) FROM bigt;
-SET force_parallel_mode = off;
+SET debug_parallel_query = off;
RESET parallel_setup_cost;
RESET parallel_tuple_cost;
SELECT lock_share(3,x) FROM bigt LIMIT 1; }
step d2c { COMMIT; }

session e1
setup { BEGIN isolation level repeatable read;
-SET force_parallel_mode = on;
+SET debug_parallel_query = on;
SET deadlock_timeout = '10s';
}
# this lock will be taken in a parallel worker, but we don't need it to persist
@@ -103,7 +103,7 @@ step e1c { COMMIT; }

session e2
setup { BEGIN isolation level repeatable read;
-SET force_parallel_mode = on;
+SET debug_parallel_query = on;
SET deadlock_timeout = '10s';
}
# this lock will be taken in a parallel worker, but we don't need it to persist
2 changes: 1 addition & 1 deletion src/test/isolation/specs/serializable-parallel.spec
@@ -35,7 +35,7 @@ step s2c { COMMIT; }
session s3
setup {
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-SET force_parallel_mode = on;
+SET debug_parallel_query = on;
}
step s3r { SELECT id, balance FROM bank_account WHERE id IN ('X', 'Y') ORDER BY id; }
step s3c { COMMIT; }
4 changes: 2 additions & 2 deletions src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -1760,9 +1760,9 @@ main(int argc, char **argv)
res = PQexec(conn, "SET lc_messages TO \"C\"");
if (PQresultStatus(res) != PGRES_COMMAND_OK)
pg_fatal("failed to set lc_messages: %s", PQerrorMessage(conn));
-res = PQexec(conn, "SET force_parallel_mode = off");
+res = PQexec(conn, "SET debug_parallel_query = off");
if (PQresultStatus(res) != PGRES_COMMAND_OK)
-pg_fatal("failed to set force_parallel_mode: %s", PQerrorMessage(conn));
+pg_fatal("failed to set debug_parallel_query: %s", PQerrorMessage(conn));

/* Set the trace file, if requested */
if (tracefile != NULL)
2 changes: 1 addition & 1 deletion src/test/modules/test_oat_hooks/test_oat_hooks.c
@@ -233,7 +233,7 @@ emit_audit_message(const char *type, const char *hook, char *action, char *objName
/*
* Ensure that audit messages are not duplicated by only emitting them
* from a leader process, not a worker process. This makes the test
-* results deterministic even if run with force_parallel_mode = regress.
+* results deterministic even if run with debug_parallel_query = regress.
*/
if (REGRESS_audit && !IsParallelWorker())
{
2 changes: 1 addition & 1 deletion src/test/regress/expected/multirangetypes.out
@@ -3257,7 +3257,7 @@ select array[1,3] <@ arraymultirange(arrayrange(array[1,2], array[2,1]));
--
create type two_ints as (a int, b int);
create type two_ints_range as range (subtype = two_ints);
--- with force_parallel_mode on, this exercises tqueue.c's range remapping
+-- with debug_parallel_query on, this exercises tqueue.c's range remapping
select *, row_to_json(upper(t)) as u from
(values (two_ints_multirange(two_ints_range(row(1,2), row(3,4)))),
(two_ints_multirange(two_ints_range(row(5,6), row(7,8))))) v(t);
2 changes: 1 addition & 1 deletion src/test/regress/expected/rangetypes.out
@@ -1743,7 +1743,7 @@ select array[1,3] <@ arrayrange(array[1,2], array[2,1]);
--
create type two_ints as (a int, b int);
create type two_ints_range as range (subtype = two_ints);
--- with force_parallel_mode on, this exercises tqueue.c's range remapping
+-- with debug_parallel_query on, this exercises tqueue.c's range remapping
select *, row_to_json(upper(t)) as u from
(values (two_ints_range(row(1,2), row(3,4))),
(two_ints_range(row(5,6), row(7,8)))) v(t);
